CHI '95 Proceedings

Methods of Cognitive Analysis for HCI

Douglas J. Gillan
Department of Psychology
New Mexico State University
Las Cruces, NM 88003
(505) 646-1408

Nancy J. Cooke
Department of Psychology
New Mexico State University
Las Cruces, NM 88003
(505) 646-1630



ABSTRACT
This tutorial teaches participants, in an active, hands-on manner, about methods used to measure cognitive content, structure, and processes, and about how to apply those methods to HCI. The structure of the tutorial centers on the phases of a design process; the areas of cognition addressed are perception, memory, language, and thinking. For the initial analytical phase of design, the tutorial describes methods for measuring visual search, the structure of semantic memory, and process tracing. Methods for measuring readability and comprehension, as well as memory recall and recognition, are applied to data from the second phase -- design and diagnostic testing. For the third phase -- system testing -- the discussion focuses on scaling methods and statistical techniques.


KEYWORDS
Cognition, Cognitive Task Analysis, Design, User Testing


INTRODUCTION
Cognitive analysis for human-computer interaction (HCI) admits of two related interpretations: the analysis of cognition-intensive interactions with computers, such as learning, problem solving, or reading; and the analysis of the cognitive content, structures, and processes involved in any interaction with a computer. We address both interpretations by providing methods for analyzing cognition, with a focus on interactions that involve a high degree of cognition. In addition, the analysis of users' cognition should not be restricted to an early design phase (as task analysis typically is), but should be an important activity throughout the entire design process. However, many designers have little training in the methods used to measure cognition. Accordingly, the objectives of this tutorial are (a) to introduce interface designers and user test specialists to the principal methodologies used by cognitive psychologists to measure cognitive content, structures, and processes, (b) to teach these methods in a context strongly tied to interface design and testing, (c) to provide hands-on, active training in selected methodologies, and (d) to give designers and test specialists sufficient background and reference materials that they will be able to continue learning about these and related methods on their own.

The structure of the tutorial is based on a design problem and follows the flow of a typical design process from (a) an initial phase of collecting and analyzing data related to users and their tasks to (b) iterative design and diagnostic testing to (c) system testing. We will use the design of a computer-based expert assistant to help CHI attendees as the primary example throughout the tutorial. We also encourage participants to e-mail descriptions of their design and evaluation problems to us before the class or to bring them to class to be used as examples.

The tutorial concentrates on methods in the four major areas of cognition -- perception, memory, thinking, and language. It begins by examining methods for perception, memory, and thinking in the initial design phase of intensive study of users and tasks. Following a brief description of the development of each method to set the context, the emphasis is on how to perform the method for analysis of users and tasks. Active learning is used to demonstrate each method through class participation in practice exercises. The discussion of the design and diagnostic testing phase of the design process focuses on language (specifically reading) and memory testing; the discussion of system testing addresses category and direct scaling methods (traditional methods developed to investigate perception).


ANALYSIS OF USERS AND TASKS
The first phase in the design process involves collecting and analyzing data about the users, their tasks, and the system with which they perform those tasks. User analyses have frequently been restricted to demographics and general preferences. However, given the importance of users' knowledge to their interactions with a system, user analyses should also examine users' general knowledge of the task domain and of the system. Task analyses often stop with a decomposition of the task into overt behavioral steps. Given the importance in HCI of specific information, and of the processing of that information at each step in a task, an analysis of the information that users perceive and of their thought processes would be a very useful part of the task analysis.


Visual Search
One of the first steps in analyzing users' cognition while they interact with a computer is to determine the information that they seek out and that they perceive. Accordingly, measurement of visual search is essential. To analyze visual search, psychologists have typically measured eye movements or search time and accuracy [6]. The tutorial describes a simple method for approximating where users look on a display, as well as the measurement of response time and accuracy for determining the information that users perceive. The practice exercise involves measuring visual search for information from CHI programs of different formats.
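The core response time and accuracy analysis can be sketched in a few lines. This is a minimal illustration with hypothetical trial data (the format names, times, and trial structure are invented for the example), not the tutorial's actual materials:

```python
from statistics import mean

# Hypothetical trial records from a search task: each tuple is
# (program format, response time in ms, whether the answer was correct).
trials = [
    ("single-column", 820, True), ("single-column", 950, True),
    ("single-column", 1100, False),
    ("two-column", 640, True), ("two-column", 700, True),
    ("two-column", 680, True),
]

def summarize(trials):
    """Mean correct-trial response time and accuracy per display format."""
    summary = {}
    for fmt in sorted({t[0] for t in trials}):
        fmt_trials = [t for t in trials if t[0] == fmt]
        correct = [t for t in fmt_trials if t[2]]
        summary[fmt] = (mean(t[1] for t in correct),      # RT on correct trials only
                        len(correct) / len(fmt_trials))   # proportion correct
    return summary
```

Restricting the response time average to correct trials is standard practice, since error-trial times reflect a different (failed) search process.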


Memory Structure
One of the key issues early in design concerns identifying what a user knows and the structure of that knowledge. Multivariate scaling techniques, such as Pathfinder, multidimensional scaling (MDS), and cluster analysis, have been used successfully to measure the content and structure of long-term memory [8]. The tutorial focuses on the use of Pathfinder to determine the structure of users' semantic networks in a cognitive task analysis for HCI, with a comparison of Pathfinder and MDS. Participants receive practice using Pathfinder to determine the organization of CHI topics in memory [e.g., 3], as well as a demonstration of MDS.
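Pathfinder's central idea can be sketched compactly for the common parameter setting PFnet(q = n-1, r = infinity) on a symmetric distance matrix: a direct link between two concepts survives only if no indirect path has a smaller maximum link weight. This is a simplified illustration of that pruning rule, not the full algorithm (which supports other q and r values); the toy topic data are invented:

```python
def pathfinder_edges(dist):
    """Minimal PFnet(q = n-1, r = infinity) sketch.

    A link survives only if no indirect path between its endpoints
    has a smaller maximum link weight (the minimax distance), found
    here with a Floyd-Warshall-style pass over the distance matrix.
    """
    n = len(dist)
    minimax = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = max(minimax[i][k], minimax[k][j])
                if via_k < minimax[i][j]:
                    minimax[i][j] = via_k
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] <= minimax[i][j]}

# Toy proximity data for three concepts: the direct 3-unit link between
# concepts 0 and 2 is pruned, because the path through concept 1 never
# exceeds weight 1.
topics = [[0, 1, 3],
          [1, 0, 1],
          [3, 1, 0]]
```

The surviving links form the semantic network whose structure is then compared across users or against an expert's network.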


Process Tracing
Researchers have used many different techniques for studying thinking, but process tracing [2] is one that applies especially well to HCI. Process tracing includes thinking-aloud techniques, observation of nonverbal behaviors, and protocol analysis. These methods will be described and applied to cognitive task analysis, with special emphasis on interpreting verbal protocols and on when not to use this technique (along with alternative methods to use in those cases). The practice exercise covers thinking aloud about planning to go to dinner with a group of CHI compatriots.
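Once a verbal protocol has been segmented and coded, the simplest analysis is a tally of code frequencies. A minimal sketch, with an invented coding scheme ("goal", "observation", "inference") and invented segments standing in for real transcript data:

```python
from collections import Counter

# Hypothetical think-aloud segments from the dinner-planning exercise,
# each paired with a code from an assumed coding scheme.
segments = [
    ("I'll look for restaurants near the hotel first", "goal"),
    ("the guide lists them by cuisine, not by location", "observation"),
    ("so I need to cross-check each one on the map", "inference"),
    ("next I'll ask the group about cuisine preferences", "goal"),
]

# Frequency of each code across the protocol.
code_frequencies = Counter(code for _, code in segments)
```

In practice the interesting work lies in the coding scheme itself and in inter-coder reliability, but a frequency table like this is usually the first summary produced.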


DESIGN AND DIAGNOSTIC TESTING
Following the analyses, the next phase involves an iterative cycle of design-test-redesign. The cognitive analysis methods described here are well suited to provide the diagnostic information useful for redesign.


Reading
Although user interfaces have become increasingly graphical, reading is still a frequent means by which users acquire information from (and about) computers. In addition, recent discussions of the information superhighway have suggested that much of the information traveling across it will be textual. Thus, the ability to comprehend a manual, written instructions, a text display, or a hypertext system is a critical feature of the user's interaction that needs to be designed and evaluated. General measures of readability, such as the FOG Index, are useful diagnostic tools for evaluating the usability of text information in any system [1]. Accordingly, the application of readability measures to HCI will be discussed. In addition, comprehension of text requires readers to process both its macrostructure and its microstructure [5]. Consequently, the discussion of testing reading comprehension covers the measurement of knowledge of both macrostructure (i.e., the general schema) and microstructure (i.e., the basic ideas in the text). Participants will measure reading time and comprehension of brief CHI papers varying in FOG Index values.
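The FOG Index itself is easy to compute: 0.4 times the sum of the mean sentence length (in words) and the percentage of "complex" words (three or more syllables). A minimal sketch follows; the syllable counter is a crude vowel-group heuristic (production readability tools use pronunciation dictionaries), so treat the output as approximate:

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels. Real readability tools
    # use pronunciation dictionaries instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    """Gunning FOG: 0.4 * (mean sentence length + % words with 3+ syllables)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))
```

The index is conventionally read as the years of schooling needed to understand the text on first reading.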


Memory Tests
As users interact with a system, they must learn a substantial amount about the interface, such as the meaning of commands or icons, or the location of items in pull-down menus. In addition, users of computer-based instructional systems want to learn the content of the system. Memory researchers often use recognition tests and/or recall tests to study the processes involved in learning, particularly encoding and retrieval [see 4]. Both kinds of test can be useful, but they provide somewhat different information. Two types of recognition tests will be described -- yes/no and forced choice -- along with corrections for guessing. Three types of recall tests will be described -- free, cued, and associative -- with a focus on free recall. The practice exercise will involve determining the recall and recognition of different dialogue techniques.
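As one illustration of a correction for guessing, the classic high-threshold correction for a yes/no recognition test discounts the hit rate by the false-alarm rate. This is a sketch of that standard formula (one of several corrections in the literature), assuming raw response counts as input:

```python
def corrected_recognition(hits, misses, false_alarms, correct_rejections):
    """High-threshold correction for guessing on a yes/no recognition test.

    p(genuine recognition) = (hit rate - false-alarm rate)
                             / (1 - false-alarm rate)
    """
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return (hit_rate - fa_rate) / (1 - fa_rate)
```

For example, a participant with 18 hits, 2 misses, 4 false alarms, and 16 correct rejections has a raw hit rate of .90 but a corrected recognition probability of .875, because some "yes" responses are attributed to guessing.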


SYSTEM TESTING
After the system design is complete and final implementation nears its finish, a final test of the entire system, including the user interface, is commonly performed. One of the most valuable measurement instruments for a system test is scaling. We treat scaling as a perceptual method because perception is the area for which scaling was originally developed; however, its use in HCI testing also includes measuring preferences, beliefs, and opinions.


Scaling
Cognitive researchers often use scaling methods to measure the strength of an attribute of an event. For system testing, scaling techniques are used to measure such system attributes as preference, ease of use, and ease of learning. Psychological research makes use of two types of scaling methods -- category scales and magnitude estimation; in contrast, most usability testing has applied only category scaling [7]. The tutorial describes the differences between the procedures for the two methods, as well as the differences in the data they yield. In addition, the tutorial discusses various pitfalls of both techniques -- including response biases and instructional and context effects -- and how to control for them. Some innovative ideas for analyzing scaling data are discussed, with an emphasis on how to get the most out of such data. The final practice exercise covers rating the usability of the computer-based CHI assistant using different scaling techniques.
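One difference in the data is that magnitude estimates are ratio judgments made against an arbitrary modulus, so raters' raw numbers are not directly comparable. A common analysis step, sketched here with invented ratings for two hypothetical prototypes, divides each rater's estimates by that rater's geometric mean before averaging across raters:

```python
from math import prod

def normalize_magnitudes(ratings):
    """Divide each rater's magnitude estimates by that rater's
    geometric mean, putting all raters on a common ratio scale."""
    normalized = {}
    for rater, values in ratings.items():
        gmean = prod(values) ** (1.0 / len(values))
        normalized[rater] = [v / gmean for v in values]
    return normalized

# Hypothetical ease-of-use estimates for two interface prototypes:
# rater "a" chose a large modulus, rater "b" a small one, yet both
# judged the second prototype four times easier to use.
ratings = {"a": [10.0, 40.0], "b": [1.0, 4.0]}
```

After normalization the two raters' data coincide, which is exactly the property that makes magnitude estimates poolable; category-scale data, being ordinal, need no such step but support weaker conclusions.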


REFERENCES
1. Carr, T. H. (1986). Perceiving visual language. In K. R. Boff, L. Kaufman, & J. Thomas (Eds.), Handbook of perception and human performance: Vol. II. Cognitive processes and performance. New York: Wiley.
2. Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87, 215-251.
3. Gillan, D. J., Breedin, S. D., & Cooke, N. J. (1992). Network and multidimensional representation of the declarative knowledge of human-computer interface design experts. International Journal of Man-Machine Studies, 36, 587-615.
4. Kantowitz, B. H., Roediger, H., & Elmes, D. (1991). Experimental psychology. St. Paul, MN: West.
5. Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363-394.
6. Neisser, U. (1964). Visual search. Scientific American, 210(6), 94-102.
7. Nielsen, J. (1993). Usability engineering. Boston: Academic Press.
8. Schvaneveldt, R. W. (Ed.) (1990). Pathfinder associative networks: Studies in knowledge organization. Norwood, NJ: Ablex.