CHI '95 Proceedings

Usability Evaluation with the Cognitive Walkthrough

John Rieman,* Marita Franzke,** and David Redmiles***

*MRC Applied Psychology Unit
15 Chaucer Rd
Cambridge CB2 2EF
**US WEST Technologies
4001 Discovery Dr.
Boulder, Colorado, 80303
***Department of Information and Computer Science
University of California
Irvine, California 92717-3425



The cognitive walkthrough is a technique for evaluating the design of a user interface, with special attention to how well the interface supports "exploratory learning," i.e., first-time use without formal training. The evaluation can be performed by the system's designers in the early stages of design, before empirical user testing is possible. Early versions of the walkthrough method relied on a detailed series of questions, to be answered on paper or electronic forms. This tutorial presents a simpler method, founded in an understanding of the cognitive theory that describes a user's interactions with a system. The tutorial refines the method on the basis of recent empirical and theoretical studies of exploratory learning with display-based interfaces. The strengths and limitations of the walkthrough method are considered, and it is placed into the context of a more complete design approach.


Cognitive walkthroughs, usability inspections, exploratory learning, software engineering.


One of the basic lessons learned in the area of HCI is that usability evaluation should start early in the design process, optimally in the stages of early prototyping. The earlier critical design flaws are detected, the greater the chance that they can and will be corrected. Empirical usability testing remains the most comprehensive evaluation technique, but it is expensive and requires at least a working prototype. Traditionally, it is used at the end of the design cycle, when changes to the interface can be costly and difficult to implement. Unfortunately, usability recommendations given at this late stage are therefore often ignored.

The cognitive walkthrough was developed as an additional tool in usability engineering, to give design teams a chance to evaluate early mockups of designs quickly. It does not require a fully functioning prototype or the involvement of users. Instead, it helps designers to take on a potential user's perspective, and therefore to identify some of the problems that might arise in interactions with the system.


The cognitive walkthrough is a practical evaluation technique grounded in Lewis and Polson's CE+ theory of exploratory learning [3,4,5]. The CE+ theory is an information-processing model of human cognition that describes human-computer interaction in terms of four steps:

1) The user sets a goal to be accomplished with the system (for example, "check spelling of this document").
2) The user searches the interface for currently available actions (menu items, buttons, command-line inputs, etc.).
3) The user selects the action that seems likely to make progress toward the goal.
4) The user performs the selected action and evaluates the system's feedback for evidence that progress is being made toward the current goal.

For most realistic tasks that a user would attempt with a system, these four steps are repeated many times to achieve a series of subgoals that define the complete task. The cognitive walkthrough examines each of the correct actions needed to accomplish a task, and evaluates whether the four cognitive steps will accurately lead to those actions.
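The repeated four-step cycle described above can be sketched as a small simulation. This is a hypothetical illustration, not code from the paper: it models steps 2 and 3 as a naive label-matching search, where the user picks whichever available action's label best overlaps with the words of the current goal.

```python
# Hypothetical sketch (not from the paper): the CE+ four-step
# cycle modeled as a simple label-matching loop.

def best_match(goal_words, actions):
    """Steps 2-3: scan the available action labels and select the
    one whose label shares the most words with the current goal."""
    def overlap(action):
        return len(goal_words & set(action.lower().split()))
    return max(actions, key=overlap)

def explore(goal, screens):
    """Steps 1-4 repeated: pursue `goal` across a sequence of
    screens, each offering a set of labeled actions."""
    goal_words = set(goal.lower().split())
    path = []
    for actions in screens:                       # step 2: search
        choice = best_match(goal_words, actions)  # step 3: select
        path.append(choice)                       # step 4: act, then repeat
    return path

# Illustrative "check spelling" task across two menu levels.
screens = [["File", "Edit", "Check Document"],
           ["Word Count", "Check Spelling..."]]
print(explore("check spelling of this document", screens))
# → ['Check Document', 'Check Spelling...']
```

The walkthrough, in effect, asks at each step whether a selection rule like this one would pick the correct action, and why.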


Prerequisites to the walkthrough include: (1) a general description of who the users will be and what relevant knowledge they possess, (2) a specific description of one or more representative tasks to be performed with the system, and (3) a list of the correct actions required to complete each of these tasks with the interface being evaluated.

The walkthrough is typically performed by the interface designer and a group of his or her peers. Small-scale walkthroughs of parts of an interface can also be done by individual designers as they consider alternative designs. In a group situation, one of the evaluators usually takes on the duties of "scribe," recording the results of the evaluation as it proceeds, and another group member acts as facilitator, to keep the evaluation moving.

With the prerequisites assembled and duties assigned, the walkthrough process involves examining each individual step in the correct action sequence and trying to tell a believable story about why the prospective user would choose that action. Note that this is not an open-ended exercise in predicting every activity the user might engage in, given this interface and task. It is specifically limited to considering whether the user will select each of the correct actions along the solution path.

In many cases, the group of evaluators will readily agree that the user will select the correct action, and no further analysis is required. For example, the first action in using a Macintosh program may be to double-click its icon; the evaluators could readily agree that experienced Mac users would have little trouble with this step. Other cases, however, may be less clear. To assess the ease with which the correct action will be selected, the walkthrough process suggests four criteria for evaluating the stories told about the users' actions.

The four criteria for evaluating the stories directly reflect the information-processing model that underlies the walkthrough. They ask the evaluators to consider the user's goal, the accessibility of the correct control, the quality of the match between the control's label and the goal, and the feedback provided after the control is acted on.
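A minimal walkthrough record for one step might capture exactly these four criteria. The sketch below is an illustrative assumption about how such a record could be structured; the field names are mine, not the paper's:

```python
# Hypothetical sketch: a minimal walkthrough record for one correct
# action, covering the four evaluation criteria. Field names are
# illustrative, not taken from the paper.

from dataclasses import dataclass

@dataclass
class StepEvaluation:
    action: str                    # the correct action at this step
    right_goal: bool               # will the user have the right goal?
    control_accessible: bool       # is the correct control available?
    label_matches_goal: bool       # does its label suggest the goal?
    feedback_shows_progress: bool  # does feedback confirm progress?
    notes: str = ""

    def success_story(self):
        """A believable success story requires all four criteria."""
        return (self.right_goal and self.control_accessible
                and self.label_matches_goal
                and self.feedback_shows_progress)

step = StepEvaluation(
    action="double-click the application icon",
    right_goal=True, control_accessible=True,
    label_matches_goal=True, feedback_shows_progress=True)
print(step.success_story())  # True: no further analysis needed
```

When any criterion fails, the scribe would record the failure story in `notes` as a candidate usability problem.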


Recent experimental work has provided support for the theoretical assumptions underlying the cognitive walkthrough method [1]. New users of display-based (GUI) applications employ a strategy of first scanning the interface for a well-labeled action, and then quickly narrowing their search by selecting that action. If further options are displayed as a result, the scan-and-select cycle continues until the guiding goal has been accomplished.

The success of this strategy is dependent on the saliency of the next correct move in the interaction. Five design features determine whether an action will "pop out" at a first-time user. (1) Subjects will try label-guided actions first (menu items, buttons, etc.) before they experiment with direct manipulations of unlabeled objects (tools, double clicking, moving of objects). (2) A well-labeled action will be especially salient. (3) Providing few actions in the search set can help to narrow the search if labeling cannot be provided, or if criteria for a "good" label are difficult to establish. (4) Set effects may prevent users from trying atypical actions. (5) Users are reluctant to extend their search beyond the readily available menus and controls. Frequently used interface techniques may bias users to search for them rather than for less frequent techniques. These findings suggest that evaluators should check for the type of interaction, the quality of the label, the number and grouping of alternative choices, and consider the overall "flavor" of the interaction techniques when evaluating the availability of actions and label matches.
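These findings could be turned into a simple per-action checklist that flags likely salience problems. The sketch below is a hypothetical aid, not a procedure from the paper; the warning conditions and the search-set threshold are illustrative assumptions:

```python
# Hypothetical sketch: the empirical salience findings recast as a
# per-action checklist. Conditions and the threshold of 7 are
# illustrative assumptions, not from the paper.

def salience_concerns(action_is_labeled, label_matches_goal,
                      n_alternatives, uses_familiar_technique):
    """Return warnings about whether the next correct action
    will 'pop out' at a first-time user."""
    concerns = []
    if not action_is_labeled:
        concerns.append("unlabeled direct manipulation; tried late")
    if action_is_labeled and not label_matches_goal:
        concerns.append("label does not suggest the user's goal")
    if not label_matches_goal and n_alternatives > 7:
        concerns.append("large search set with weak labeling")
    if not uses_familiar_technique:
        concerns.append("atypical technique; set effects may hide it")
    return concerns

# A well-labeled menu item among few alternatives raises no flags.
print(salience_concerns(True, True, 5, True))  # → []
```

An evaluator would run through such a checklist informally for each step where the group cannot readily agree on a success story.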


Early evaluations of the cognitive walkthrough method criticized the tedium of form-filling and the narrow range of problems noted [5,6]. The current version and recommendations for its application therefore rely on a minimal form. We suggest involving small groups of evaluators, rotating record-keeping and other duties, evaluating simple tasks first, keeping track of all problems identified in the process of the walkthrough (even those not discovered by it), and relaxing the forms-based orientation once the evaluators are familiar with the method.


As described above, the cognitive walkthrough procedure supports software developers in the "upstream activities" of identifying and refining requirements and specifications. It can be combined with other user-centered evaluation techniques to yield software products that more closely match users' work contexts. These techniques affect the software development process by specifically incorporating the assumption that requirements will change incrementally. Under this assumption, developers must plan for change, along with the additional costs and reduced predictability that change implies.

One avenue for developers to reduce the cost of evaluation and simultaneously make it more helpful in workplace settings is through program instrumentation. An approach using "expectation agents" adapts the cognitive walkthrough procedure to support the evaluation of prototypes with real end users in their work places [2]. Expectation agents monitor users working with the prototype and report mismatches between developers' expectations and a system's actual usage. Simultaneously, the agents provide end users with an opportunity to communicate with developers, either synchronously or asynchronously.


The cognitive walkthrough is a usability evaluation method based on cognitive theory. The tutorial presents the basic methodology and indicates how it fits into the software development cycle.


1. Franzke, M. (1994). Exploration and Experienced Performance with Display-Based Systems. Ph.D. Dissertation, Department of Psychology, University of Colorado; to be submitted as CHI '95 Technical Paper.

2. Girgensohn, A., Redmiles, D., Shipman, F. (1994) Agent-Based Support for Communication between Developers and Users in Software Design. Proceedings of the 9th Annual Knowledge-Based Software Engineering (KBSE-94) Conference (Monterey, CA), IEEE Computer Society Press, Los Alamitos, CA, September 1994.

3. Lewis, C., and Rieman, J. (1993). Task-Centered User Interface Design: A Practical Introduction. Distributed via anonymous ftp (Internet address:

4. Polson, P.G., Lewis, C., Rieman, J., and Wharton, C. (1992). Cognitive walkthroughs: A method for theory- based evaluation of user interfaces. International Journal of Man-Machine Studies 36, 741-773.

5. Wharton, C., Rieman, J., Lewis, C., and Polson, P. (1994). The Cognitive Walkthrough Method: A Practitioner's Guide. In Usability Inspection Methods, J. Nielsen and R.L. Mack (Eds.), New York: John Wiley & Sons, pp.105-141.

6. Wharton, C., Bradford, J., Jeffries, R., and Franzke, M. (1992). Applying cognitive walkthroughs to more complex user interfaces: Experiences, issues, and recommendations. Proceedings CHI '92 (Monterey, CA, 3-7 May, 1992).

