Vendr

Goal - Gather and evaluate user insights to assess the usability of a mobile application.

Understanding the Product

Vendr is a cross-platform web and mobile application designed to be a cross between a dating application, akin to Tinder, and an application for selling second-hand items, akin to eBay. Users can see items for sale within an adjustable mile radius of their current location and swipe right on an item if they are interested, or left otherwise. If users are interested in an item, they can message the seller to inquire further about things such as price, availability, or shipping. Besides buying, users can also quickly and easily set up an account for selling items.

Deliverables

  • Heuristic Evaluation Report
  • Test Plan
  • Pilot Test Summary
  • Usability Evaluation Report

Skills

  • Heuristic Evaluation
  • Usability Evaluation
  • Quantitative Analysis
  • Qualitative Research

Duration

8 Weeks (Spring 2021)

Team

Pranav Shinde
Shubhankar Singh
Effie Wang
Brandon Palonis
Rezylle Milallos

Client Kickoff Meeting

The kickoff meeting was aimed at understanding the context of the application and setting expectations for the workflows to be evaluated. We also discussed the outline of the evaluation plan with the client to help them understand the process we would follow for the usability evaluation.

The next step of the kickoff meeting was outlining the deliverables to set expectations and establish accountability for the outcomes.

Heuristic Evaluation

Heuristic Evaluation allowed us to locate and focus on issues before we spoke to the users. 

After identifying a common vision of the application as a team, every team member individually conducted a Heuristic Evaluation of the selected application to identify usability issues. We mapped each issue to the usability principle it violated, then compiled our individual evaluations into a single report, classifying the issues as high, medium, or low priority.
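
To make the compilation step concrete, here is a minimal Python sketch of how individual findings could be merged and prioritized; the issue log, heuristic names, and severity scale are hypothetical examples, not our actual findings.

    from collections import defaultdict
    from dataclasses import dataclass
    from statistics import median

    @dataclass
    class Finding:
        description: str  # what the evaluator observed
        heuristic: str    # usability principle the issue violates
        severity: int     # 1 = low, 2 = medium, 3 = high

    # Hypothetical findings from two team members, including a duplicate
    findings = [
        Finding("Swipe directions are never explained", "Help and documentation", 3),
        Finding("Swipe directions are never explained", "Help and documentation", 2),
        Finding("No undo after an accidental left swipe", "User control and freedom", 3),
    ]

    # Merge duplicate findings and take the median severity across evaluators
    ratings = defaultdict(list)
    for f in findings:
        ratings[(f.description, f.heuristic)].append(f.severity)

    labels = {1: "low", 2: "medium", 3: "high"}
    for (description, heuristic), severities in sorted(
        ratings.items(), key=lambda item: -median(item[1])
    ):
        priority = labels[round(median(severities))]
        print(f"[{priority}] {heuristic}: {description}")

In practice a shared spreadsheet serves the same purpose; the sketch only illustrates the merge-and-prioritize logic behind the report.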

Test Plan

The test plan helped us define the objective of the entire project and outline the methodologies used to conduct the usability evaluation; it also served as a mode of communication between the stakeholders and the evaluators, laying out the roadmap of the evaluation.

The test plan helped us and the stakeholders answer the questions listed below - 

  • Why do we and the stakeholders want to conduct the test? (Goals and Objectives)
  • What are we going to test? (Aspects of the system)
  • Who are we going to test with? (Participants)
  • How are we going to conduct it? (Test Design)
  • Where are we going to perform it?
  • When are we going to perform it?
  • What data is supposed to be collected?
  • What will be our final deliverables?

Test Materials

One of the more intensive activities we undertook for the usability evaluation was developing the test materials used to communicate with the participants, collect the data, and satisfy legal requirements. It was important to develop all required test materials well in advance of when we needed them. Apart from the obvious benefit of not having to scurry around at the last minute, developing materials early helped us explicitly structure and organize the test.

The test materials we developed are listed below -

  • Orientation Script
  • Informed Consent
  • Team Roles
  • Background Questionnaire
  • Pre-Test Questionnaire
  • Think-Aloud Practice
  • Task Scenarios
  • Post-Task Questionnaire
  • Post-Test Questionnaire
  • Debrief

Pilot Testing

We conducted two pilot studies so that we had two opportunities to revise the test materials; both participants were classified as buyers (users who would use the application only to buy). The pilot studies reviewed the screener, background questionnaires, and the entire evaluation process (e.g., the introduction of the study, informed consent, Vendr’s non-disclosure agreement, all 10 task scenarios, and follow-up debriefing questions).

Reflection - Pilot Testing No. 1

After we concluded our first pilot test, and before conducting the second, we identified a few quick hits that needed to be addressed to ensure a smoother workflow for the final usability evaluation. The quick hits are described below -

  • Task scenarios were revised to represent only what is currently clickable on Vendr.
  • The orientation script was made more detailed, with a small disclaimer that the presented application is only a prototype; participants were also encouraged to think aloud whenever they felt stuck.
  • The checklist was updated so that post-task questionnaires and additional follow-up questions based on the participant’s feedback were asked after each scenario.
  • The debriefing section was modified so that the moderator instead asked specific questions about screens where she noticed the participant had trouble. Questions about additional screens not included in the task scenarios were also asked (e.g., “On this screen, what do you think this text does?”).
  • A think-aloud practice was added in case future participants had no experience with the technique.

Reflection - Pilot Testing No. 2

The second pilot test helped us identify a few points to consider before we finalized the plan and structure of our usability evaluation, listed below -

  • In the test materials kit given to participants, scenario names must not reflect the actual name of the task. Titles were changed to Task #1, Task #2, and so on.
  • Record only the window at hand and separately record the Zoom meeting window so that a bigger view of the participant’s expressions can be added to the highlight reel.
  • In the post-task questionnaires, remove the follow-up question about the efficiency of the application.
  • Update the background questionnaire to remove any leading questions and move other questions to the post-study questionnaire instead.
  • Because all 10 tasks were easily completed within the allotted time, we will proceed with having both buyers and sellers evaluate all tasks.

Data Collection

Quantitative Data

  • Participant demographic and background information
  • Number of clicks to complete a task
  • Number of incorrect user interface selections and failed tasks
  • Number of other errors encountered during the evaluation
  • Likert ratings on the flexibility, look-and-feel, expectations, ease of navigation, and comparison between other, similar applications
  • Likert ratings on the app performance from either the buyer or seller’s perspective
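
To illustrate how metrics like these could be summarized per task, here is a minimal Python sketch that computes a completion rate, an average click count, and a mean Likert rating; the numbers are hypothetical placeholders, not actual study data.

    from statistics import mean

    # Hypothetical per-participant metrics for a single task scenario
    task_results = [
        {"clicks": 7,  "wrong_selections": 1, "completed": True,  "ease_rating": 4},
        {"clicks": 12, "wrong_selections": 3, "completed": False, "ease_rating": 2},
        {"clicks": 6,  "wrong_selections": 0, "completed": True,  "ease_rating": 5},
    ]

    completion_rate = mean(r["completed"] for r in task_results)
    avg_clicks = mean(r["clicks"] for r in task_results)
    avg_ease = mean(r["ease_rating"] for r in task_results)  # 5-point Likert item

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Average clicks per participant: {avg_clicks:.1f}")
    print(f"Mean ease-of-use rating: {avg_ease:.1f} / 5")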

Qualitative Data

  • User comments and questions during test sessions, such as those about ease of use, UI design, intuitiveness, and expectations.
  • Responses to open-ended questions in the questionnaires.
  • Thoughts, comments, and suggestions from the debriefing session.

Usability Evaluation

Test Procedure

Each session had one moderator and one note-taker. Participants were given a short introduction to the application and were asked to complete a non-disclosure agreement (NDA) as well as a background questionnaire before proceeding to the formal evaluation. We also asked for their consent to record the session. After the participant went through the different scenarios presented by the moderator, they were asked to complete a post-study questionnaire and take part in a debriefing session to share additional thoughts about the system. Information regarding their preferred method of compensation was also gathered at the end of the evaluation.

Location & Setting

We conducted the usability testing entirely online through Zoom, and participant videos were recorded using Zoom as well. Vendr (the application) was set up on the moderator’s device before the evaluation; participants were able to interact with the app through Zoom’s remote desktop control. The moderator used screen-recording software, Open Broadcaster Software (OBS), to record audio and on-screen activity (i.e., user actions and mouse clicks) throughout the evaluation. Participants were free to join the session from any physical location but were encouraged to be in a distraction- and noise-free environment.

Understanding the Roles & Responsibilities

In order to avoid overwhelming the participants with multiple users in the Zoom call, we decided to rotate team members between the following roles:

Moderator
The moderator was in the Zoom meeting, and their responsibility was to guide the participant through the whole evaluation. They greeted the participant and explained the overall study and any sub-tasks. They also went over the consent form and non-disclosure agreement with the participant, conducted all questionnaires, and asked unscripted questions to better understand a participant’s comments or actions during or after a task. If necessary, the moderator intervened during a task in order to help the participant. Finally, the moderator debriefed the participant and thanked them for their participation.

Observer/Note Taker
The observer noted the participant’s comments and actions during the study. In addition, they helped track the time for each task and the whole session. The observer was also in charge of recording the Zoom session.

Task Scenarios


Test Results

Currently under NDA; the results will be made available here as soon as possible.

Let's get in touch

Email - ps9581@rit.edu
Contact - (585) 537 - 8619

Made with some ❤︎ by Pranav Shinde. Thanks to Will Truran for the Dev Help.