IS 582

Project 2: Usability Assessment

Dr. Dania Bilal

Spring 2008

Due May 5 at noon

Objective: To use an evaluation method to assess the usability of the Hodges Library Aleph OPAC interface.

Evaluation Methods: A variety of usability assessment methods exist for evaluating a system's interfaces. For this assignment, choose one of the following two methods:

1. Cognitive walkthrough

2. Heuristic evaluation/inspection

Prerequisite: Before you begin this project, you must READ about the evaluation method you select for this project.

1. Cognitive Walkthrough Method

Scenario: You are the experts; the challenge is to put yourselves in the users' place as you walk through the system on each assigned task.

Team Work

Work in a team of three or more to perform the walkthrough of the Aleph OPAC interface described above. A team of three will have two experts and one experimenter/observer. A team of four will have three experts and one experimenter/observer.

Experimenter's Responsibilities

·  Plan the experiment.

·  Develop the tasks to be performed by the experts.

·  Develop other needed materials for the usability project, as applicable.

·  Prepare the setting for the experiment, ensuring that the hardware and software are operational.

·  Collect data/observational notes about each expert's interaction with the system.

·  Interview each expert about his/her experience with the system upon completion of the tasks. This can be formal or informal.

·  Review notes taken by each expert about his/her experience and draft an initial report that includes the experimenter's observations during the walkthrough.

·  Meet with the experts and coordinate the writing of the usability report.

Tasks

The team will identify the broad tasks to be performed in the selected system. Include four tasks: two closed (fact-finding) and two open-ended (research-based). The tasks should relate to the design features to be evaluated in the system.

The experimenter will choose the specific tasks to be performed in each of the interfaces, write the tasks, and assign them to the experts. The experimenter will also provide written instructions about the tasks.

Benchmarks

The experimenter will develop a set of benchmarks by which to measure the performance/interaction of each expert on each of the tasks. The benchmarks spell out the steps a user typically goes through to perform a task.

Example of constructing benchmarks for accessing the Hodges Library OPAC from the library's website:

·  Open a Web browser (after logging in, etc.).

·  Type the URL of the Hodges Library website (http://www.lib.utk.edu).

·  Click on UT Library Catalog.
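
For illustration, the sketch below (in Python) shows one way a team might record benchmarks and map an expert's observed actions against them. The task name, step wording, and the deviation shown are hypothetical examples, not part of the assignment.

# A minimal sketch of recording benchmarks and mapping an expert's
# observed actions against them. Task and step names are hypothetical.

# Benchmark: the steps a user typically goes through to perform a task.
benchmarks = {
    "access_opac": [
        "Open a Web browser",
        "Type the URL of the Hodges Library website",
        "Click on UT Library Catalog",
    ],
}

# Actions one expert was observed taking for the same task.
observed = {
    "access_opac": [
        "Open a Web browser",
        "Scan the homepage for a catalog link",  # deviation from benchmark
        "Click on UT Library Catalog",
    ],
}

def map_against_benchmark(task):
    """Flag benchmark steps that were missed and actions that deviate."""
    expected, actual = benchmarks[task], observed[task]
    for step in expected:
        print(("matched: " if step in actual else "MISSED:  ") + step)
    for action in actual:
        if action not in expected:
            print("deviation: " + action)

map_against_benchmark("access_opac")

A tabulation like this makes the "benchmarks mapped out against the actions taken by each expert" deliverable straightforward to assemble.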

Roles of Experts

·  Each expert performs each of the four tasks independently.

·  Each expert records problems that the actual user may experience in performing each task and notes the reasons for these problems.

·  During the interaction, each expert will ask himself/herself questions as he/she performs each task, keeping the actual user's characteristics in mind. Such questions will guide the assessment of the interface vis-à-vis the actual user's needs, cognitive processes, and affective states (see sample questions below).

·  Upon completion of the four tasks, each expert shares/discusses the problems found with the experimenter.

·  The experts meet with the experimenter to discuss the problems found, categorize and consolidate the problems, and describe why the actual user may experience them.

·  The experts and the experimenter provide suggestions for solving the problems found.

Sample Questions for Experts to Consider During the Interaction

·  How easy is it for the actual user to find/access the Aleph basic search interface?

·  What cognitive processes are required to complete the task?

·  What affective impact will the interaction have on the user during the task?

·  What learning is required for the user to perform the task using the advanced interface features?

·  Is the action sequence for each step of the task appropriate in terms of design?

·  Would the user in mind be able to perform each step in the task successfully? If not, what problems might he/she encounter?

·  Does the design lead the user in mind to generate the correct goals?

·  Does the interface provide enough cues as to what actions should be performed?

·  Does the interface help users recognize, diagnose, and recover from errors? If not, why not, and what is the severity rating of these errors? (A severity-logging sketch follows this list.)

·  Is there a match between the system interface and the users' cognitive processes (does the system speak the users' language)?
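
For the severity question above, one widely used scale is Nielsen's 0-4 severity rating. The sketch below (in Python) shows one way an expert might log problems with a severity and a reason; the problem description shown is a hypothetical example.

# A minimal sketch of logging walkthrough problems with Nielsen's
# 0-4 severity ratings. The example problem is hypothetical.

SEVERITY = {
    0: "not a usability problem",
    1: "cosmetic problem",
    2: "minor usability problem",
    3: "major usability problem",
    4: "usability catastrophe",
}

problems = []

def log_problem(task, description, severity, reason):
    """Record one problem found while performing a task."""
    problems.append(
        {"task": task, "problem": description,
         "severity": severity, "reason": reason}
    )

log_problem(
    task="access_opac",
    description="Catalog link label does not match the user's vocabulary",
    severity=3,
    reason="The user in mind may not recognize 'OPAC' as the catalog",
)

# List the most serious problems first when consolidating.
for p in sorted(problems, key=lambda p: p["severity"], reverse=True):
    print(SEVERITY[p["severity"]] + ": " + p["problem"])

Recording a reason alongside each severity makes it easier to categorize and consolidate the problems when the experts meet with the experimenter.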

Success Stories (optional)

As the experts, you will need to construct success stories for each task performed. When a success story cannot be told, construct a failure story (as applicable) and describe why the actual user may fail in performing all or part of a task.

Features for Success Stories

Common features for success stories are located at http://www.pages.drexel.edu/~zwz22/CognWalk.htm, which provides characteristics of the cognitive walkthrough. You may adapt these features to your particular situation.

Deliverables

The experimenter will meet with the experts, review the draft report, analyze the collected data, interpret the results with the help of the experts, and generate a final usability report. The writing of the report is the responsibility of the whole team.

The final report should include:

·  Names and roles of team members.

·  Name of the system interface evaluated.

·  A brief background about the system interfaces targeted for this usability assessment.

·  Tasks assigned to the experts to perform in the system.

·  The benchmarks for each assigned task mapped out against the actions taken by each expert to perform each task. This should also be articulated and discussed.

·  Success or failure stories (optional) on each task performed by each expert, or on each task performed by all experts if these stories are similar. This should be articulated and discussed.

·  A summary of the experts' cognitive walkthroughs on each benchmarked task.

·  A list of system-related design problems and suggestions for improving or correcting them.

·  A paragraph summarizing each team member's experience with the project.

·  A list of references used.

Additional Sources

Use the sources linked to the course schedule and those provided in the lecture slides on Usability Engineering. These sites may also be helpful for performing the cognitive walkthrough.

·  http://jthom.best.vwh.net/usability/cognitiv.htm (What is a cognitive walkthrough).

·  http://www.pages.drexel.edu/~zwz22/CognWalk.htm (Characteristics of the cognitive walkthrough).

·  http://www.cc.gatech.edu/computing/classes/cs3302/documents/cog.walk.html (Performing a cognitive walkthrough).

·  http://psychology.wichita.edu/optimalweb/references.htm (A list of references on optimal web design and usability).

Note: You may search the Web and online databases to find additional appropriate literature on this topic.

Evaluation Rubric

Success on this project is based on your level of understanding of the usability method, the tasks, the "actual users", and the system used for the evaluation. The instructors will develop an evaluation rubric by which to evaluate your project. The overall quality of the project, including organization, adherence to guidelines, evidence of reading relevant literature, synthesis, and writing style, is among the evaluation factors that will be covered in the rubric.

2. Heuristic Evaluation Method

Using Nielsen's heuristics covered in class and additional appropriate rules of system inspection, perform a heuristic evaluation of both basic and advanced interfaces of the Aleph OPAC.

Teams

·  Each team will consist of a minimum of three experts.

·  Team members meet and decide on the features to inspect in the basic and advanced interfaces.

·  Each expert in a team will perform two passes of evaluation of the system interface. The first pass is to become familiar with the interface; the second pass is to focus on the selected interface features and identify usability problems.

·  Each expert will rate each system interface feature on a four-point scale according to how well it meets the criteria of each heuristic and any other rules used (a consolidation sketch follows this list).

·  Experts in a team meet to compare and discuss their findings.

·  Experts in a team consolidate and prioritize the usability problems found and provide suggestions for improving or correcting them.
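
As referenced above, the sketch below (in Python) shows one way to consolidate the experts' four-point ratings and prioritize the weakest features. The feature names, heuristic labels, and scores are hypothetical; here 1 means the feature badly fails the heuristic and 4 means it fully meets it.

# A minimal sketch of consolidating per-expert four-point ratings.
# Features, heuristics, and scores below are hypothetical examples.

from statistics import mean

# ratings[expert][(feature, heuristic)] = score on the four-point scale
ratings = {
    "expert_1": {("basic keyword search", "error recovery"): 2,
                 ("advanced search", "match with the real world"): 3},
    "expert_2": {("basic keyword search", "error recovery"): 1,
                 ("advanced search", "match with the real world"): 3},
    "expert_3": {("basic keyword search", "error recovery"): 2,
                 ("advanced search", "match with the real world"): 4},
}

# Pool all experts' scores for each (feature, heuristic) pair.
consolidated = {}
for scores in ratings.values():
    for pair, score in scores.items():
        consolidated.setdefault(pair, []).append(score)

# Prioritize: the lowest mean ratings are the most urgent problems.
for (feature, heuristic), scores in sorted(
        consolidated.items(), key=lambda item: mean(item[1])):
    print(f"{feature} / {heuristic}: mean {mean(scores):.1f} from {scores}")

Sorting by the pooled mean gives the team a first-cut priority order to discuss when consolidating the usability problems for the report.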

Deliverables

Deliver a consolidated team report that includes:

·  Names of team members.

·  Name of system and interfaces evaluated.

·  Brief description of the features (e.g., keyword searching in Simple interface, browsing, keyword searching in Advanced interface) that were targeted in the evaluation.

·  A consolidated list of the usability problems found, with each problem rated on the four-point scale indicating how well the associated feature meets the criteria implied in each heuristic and rule.

·  Documentation of all heuristic violations.

·  A discussion/justification of each heuristic violation.

·  Suggestions for improving/correcting each heuristic violation.

·  A paragraph summarizing the experience of each expert with this project.

·  A list of references used.


Sources

The best sources on heuristic evaluation are provided on Nielsen's website (http://www.useit.com/papers/heuristic/heuristic_list.html). A list of references on optimal web design and usability is found at:

·  http://psychology.wichita.edu/optimalweb/references.htm. (A list of references on optimal web design and usability).

Additional sources are listed in the lecture notes.

Evaluation Rubric

Success on this project is based on your level of understanding of the usability method, the tasks, the actual users, and the system interfaces used for the evaluation. The instructors will develop an evaluation rubric by which to evaluate your project. The overall quality of the project, including organization, adherence to guidelines, evidence of using relevant literature, synthesis, and writing style, is among the evaluation factors that will be covered in the rubric.