For my Engineering Psychology (PSYC 161) class, we were tasked with doing a usability analysis of a system of our choice. We chose to analyze two competing meeting scheduling web services, WhenIsGood.net and Doodle.com. We produced an extensive evaluation of the two interfaces and exceeded the assignment's requirements by offering suggested designs that would solve each site's problems.
The evaluation was expected to include analysis by our group as well as testing with naive users. The professor offered little guidance on usability test procedures, so we drew on previous experience to design the scenarios and methods for the observation sessions.
Instead of choosing just one interface, we chose to analyze and compare two similar applications. Each is a service meant to help groups find meeting times that work for all participants.
As an added challenge, only two of our five group members -- myself and a fellow cognitive science student -- had experience with usability; the others were computer science students accustomed to coding. Beyond completing the usability evaluation required for the project, we helped our groupmates understand the importance and practice of usability, while they offered fresh perspectives as people unfamiliar with the field.
We first conducted a heuristic evaluation of each site's usability. Each team member independently experimented with the two websites -- exploring not just the key path scenario but variant paths as well -- and took notes against Nielsen's heuristics. We then consolidated the notes into a consensus view of each site's usability from a high-level design perspective.
We chose to test users with varying degrees of technological aptitude, since meeting scheduling websites should be accessible to users with any level of technological knowledge. We gave the users four tasks to complete for each site: create an event, invite participants to enter their availability, enter availability as a participant, and view the results of the survey. We asked our users to think aloud and describe their reasoning for each decision during the tasks. The users were free to invent any scenario for their fictional event, but were asked to use the same scenario for both sites. As they completed the tasks, we noted their verbalized thought processes, mistakes that went unmentioned or unnoticed by the user, any difficulties they encountered, and any indications of a positive experience. We also took objective measurements where possible, such as the time taken to complete each task. At a higher level, we asked users what they were trying to accomplish or searching for, and to explain their reasoning.
We produced a sixteen-page document describing our methods and findings, as well as our design suggestions for improving the usability of meeting-scheduling websites (which went beyond the required scope of the assignment). We also prepared a comprehensive presentation, which we delivered to our class and professor.