
Improving GUI Accessibility for People with Low Vision

Richard L. Kline and Ephraim P. Glinert
Computer Science Department
Rensselaer Polytechnic Institute
Troy, NY 12180
E-mail: {kliner | glinert}@cs.rpi.edu

© ACM

Abstract:

We present UnWindows V1, a set of tools designed to assist low vision users of X Windows in effectively accomplishing two mundane yet critical interaction tasks: selectively magnifying areas of the screen so that the contents can be seen comfortably, and keeping track of the location of the mouse pointer. We describe our software from both the end user's and implementor's points of view, with particular emphasis on issues related to screen magnification techniques. We conclude with details regarding software availability and plans for future extensions.

Keywords:

workstation interfaces, assistive technology, low vision, screen magnification, X Window System


Introduction

The move towards graphical user interfaces is widely regarded as an advance in human-computer interaction. However, the abandonment of the old-fashioned text-based TTY interface presents new challenges to those computer users who are visually impaired [17]. How can we correct this problem?

Our ultimate goal is clear: To allow the user full and equal access, as if he/she enjoyed normal vision, to any tool or application that he/she may choose to run, whether the output is textual or graphical in nature.

Much work has been done toward this end for users who are blind [5,18]. Our own research is intended to assist people who have low vision but who are not blind. To this end, we have attempted to improve the usability of existing graphical interfaces, rather than devise new or alternative interfaces [11,13]. We have concentrated our efforts on two critical aspects of interaction: selectively magnifying areas of interest so that the contents can be seen comfortably; and finding and controlling the location of the mouse pointer.

In the personal computer market, products that address these issues are now widely available. These span a broad gamut, in terms of both complexity and cost. There are student projects such as Vener's Magnex text editor for the Commodore AMIGA [19] (available from the second author by special request). The CloseView program developed by Berkeley Systems Design is included as a standard feature of the Apple Macintosh operating system; a more elaborate version of this software is marketed by the developers for several hundred dollars under the name inLarge. Specially designed software and hardware produced by companies such as TeleSensory Systems provide the most power, but typically sell for thousands of dollars.

In the comparatively small workstation market, where MIT's X Window System commonly provides the graphics interface engine, there has until now been a paucity of viable solutions for the low vision user community. Old ASCII based aids (such as the large font virtual terminal to UNIX developed by the second author over a decade ago [9]) can sometimes still be run within a window, but are otherwise useless. Because many professionals use workstations rather than personal computers in their work, we chose to implement our collection of programs, known as UnWindows V1, for Sun SPARCstations running X.

A DYNAMIC MAGNIFIER

Magnification is one method commonly employed to help low vision users deal with the small type fonts, illustrations and icons present in much of today's printed media and computer displays. Some features of X reduce, but do not eliminate, the need for a separate magnifier tool. Many text-only X applications allow the user to override the default font settings, but the range of alternative sizes is limited and dependent upon the fonts available on a given display. A more significant problem is that many applications make use of graphical elements as well as text, and it is rare to find an application that will allow the user to specify the displayed size of these nontextual elements. Furthermore, an application's default window might be so large that it would not fit on a display if magnified.

In designing the UnWindows dynamag screen magnification program, we considered two typical uses for physical magnifying glasses. To read the fine print of a legal contract or automobile advertisement, one places the document on a table or other flat surface and moves the glass about as needed to inspect different areas of the document. To assemble or repair a small device, on the other hand, an electronics technician positions a glass in a fixed location where it will provide the best view of the work area, and then works on the device directly while looking at the image produced by the glass.

In both cases the user can examine and work in areas not under the magnifier without any special effort, and can also reposition the glass, temporarily move it out of the way, or even peer around it to gaze directly at the object of interest, as the need arises. The ability to correlate the magnified view with the reality of the object(s) being viewed is what allows the use of a magnifying glass to become effortless for most of us after a very short learning period. What are the ramifications of our observations to the design of a virtual counterpart to the physical glass?

In the case of a hand-held magnifying glass, the physical separation of the glass from the surface being viewed makes it trivial to keep track of one's place on the page. Where a computer display is concerned, however, the problem of retaining a sense of global context manifests itself, because the magnifying glass becomes part of the screen. This problem is most acute in systems which use the entire screen area to draw an enlarged image of a portion of the display. There is no good way to look ``around'' the magnified view to the unmagnified image ``beneath'' it, although some intriguing initial efforts have recently been directed at this problem [10].

In theory, this difficulty might be alleviated by imparting to the magnifier the ability to automatically reposition itself in reaction to screen events (e.g., user typing or process output). But on a busy X display, the contents of several windows can change in rapid succession. A naive magnifier that attempted to follow all screen activity would jump around (thrash) hopelessly, imparting nothing but confusion to the user.

Providing global screen context while devoting significant screen real estate to a magnification window are conflicting goals that can only be resolved through compromise. Our solution is to relegate the UnWindows dynamag magnifier to a window on the screen, and to support two distinct modes of operation, mobile and anchored, designed to emulate the real-world examples given above; both are described in detail in the sections that follow.

Note that two of these options (anchored mode, and mobile mode with a stationary display window) break the customary link between the portion of the screen of interest and the position of the magnified image. By affording control over the portion of the screen which is obscured as well as that which is magnified, we allow the user to minimize the loss of global context on an individual basis and in response to changing circumstances. Some of these ideas have also been recently and independently implemented as ``portals'' in Bederson and Hollan's Pad++ system [1].

DYNAMAG FEATURES

The dynamag program's interface allows the user to easily modify its settings, such as the magnification level, the mode of operation, and the automatic update interval.

The magnifier window is resized using whatever techniques are provided by the window manager being run by the user. The other preferences are changed with the help of a pop-up window; cf. Figure 1. New settings are immediately reflected by dynamag as they are entered. As with the other UnWindows programs, preferences are stored in a file in each user's home directory.

   Figure 1: The control panel for the dynamag application, shown much smaller than actual size.

Let us now examine each of dynamag's two modes of operation in more detail.

Mobile Mode

The screen area which is magnified is not centered around the pointer's location, as one might initially expect. Instead, the area immediately above and to the right of the pointer is magnified. This is done in an effort to reduce the loss of local context that results from the magnifier obscuring that part of the display immediately surrounding the pointer. For example, when using our method for the common task of reading a paragraph of text, moving the mouse pointer to the beginning of a new line makes that line (and possibly lines above it as well) visible within the magnifier, while the line immediately below can also be seen (unmagnified).

Because dynamag's screen window, when it follows (is ``sticky'' to) the mouse pointer, obscures the very area of the screen which it is magnifying, the window must be removed whenever dynamag requires a new screen image and then redisplayed. This refresh process can prevent dynamag from performing as smoothly as one might wish. The user is therefore given the option of having the application window which displays the enlarged image remain stationary, although the area of the screen to be magnified is still chosen automatically based on the current location of the mouse pointer. When functioning in this way, dynamag's mobile mode performs noticeably more smoothly, since frequent window creation and deletion is avoided, although we lose the direct analogy with a physical magnifying glass.
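To make this refresh cycle concrete, the hedged sketch below (an illustration under our own assumptions, not the actual dynamag source; the function name is hypothetical) hides a magnifier window, captures the screen area above and to the right of the pointer with XGetImage(), and remaps the window, using only standard Xlib calls. In practice, the windows underneath must also be given a chance to repaint themselves before the image is grabbed.

    #include <X11/Xlib.h>

    /* Hypothetical sketch: grab the unmagnified pixels above and to the
       right of the pointer while the magnifier window is hidden.        */
    XImage *capture_under_pointer(Display *dpy, Window magnifier,
                                  unsigned int src_w, unsigned int src_h)
    {
        Window root = DefaultRootWindow(dpy);
        Window r, c;
        int root_x, root_y, win_x, win_y;
        unsigned int mask;
        int x, y;
        XImage *img;

        XQueryPointer(dpy, root, &r, &c, &root_x, &root_y,
                      &win_x, &win_y, &mask);

        /* Source rectangle lies above and to the right of the pointer,
           clipped to the screen boundaries.                             */
        x = root_x;
        y = root_y - (int)src_h;
        if (y < 0)
            y = 0;
        if (x + (int)src_w > DisplayWidth(dpy, DefaultScreen(dpy)))
            x = DisplayWidth(dpy, DefaultScreen(dpy)) - (int)src_w;

        XUnmapWindow(dpy, magnifier);   /* hide the magnifier ...            */
        XSync(dpy, False);              /* ... and let the unmap take effect */
        img = XGetImage(dpy, root, x, y, src_w, src_h, AllPlanes, ZPixmap);
        XMapWindow(dpy, magnifier);     /* put the magnifier back            */

        return img;                     /* caller frees with XDestroyImage() */
    }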

Anchored Mode

When the user exits dynamag's mobile mode, the program automatically notes the last screen area that was selected for enlargement. In this way, the magnifier becomes anchored to that part of the display. The dynamag window itself can now be moved to any (other) location on the screen, and the magnified area remains the same. Interesting results are obtained when the dynamag window itself is moved into the area currently being magnified. (Footnote 1)

Once the magnification window is positioned where desired, the user can interact with the screen as usual, performing work within the window(s) of interest while watching the magnified image being presented in a different location. Interaction with areas of the screen not being magnified does not affect dynamag's operation in any way.

A typical use for this mode of operation is illustrated in Figure 2. The magnification area has been selected (by means of the mobile mode described above) to include the bottom several lines of an xterm window. Once the magnifier has been properly anchored, the dynamag window is moved by the user to a convenient location elsewhere on the screen. With the update interval set to a small value such as 0.25 seconds, the user can type and read from the dynamag window while interacting with the application window.

  Figure 2: A screen shot of UnWindows in use. The dynamag window, at bottom center, magnifies several lines of the xterm window at top left as well as parts of nearby icons.

DYNAMAG IMPLEMENTATION

The dynamag magnifier works by directly polling the X screen, using the Xlib-level XGetImage() routine to find out what is currently being displayed. Obtaining display information at this low level allows dynamag to magnify any image on the screen. We borrowed this method from xmag, a sample application distributed with the X Window System. Each individual pixel in the captured screen image is redrawn within the magnifier's window as a square whose sides are from 2 to 9 pixels in length, depending on the magnification level selected by the user. At progressively higher magnifications this approach leads to text and graphics that look somewhat ``blocky'' and unaesthetic, but we believe this is of less importance to the target user community than speed of performance, which would have to be sacrificed if some form of smoothing algorithm were added to the drawing process.
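The following hedged sketch illustrates the pixel-replication step just described; it is not the dynamag source and the function name is hypothetical, but it relies only on standard Xlib calls.

    #include <X11/Xlib.h>

    /* Hypothetical sketch of pixel replication: every pixel of the captured
       image becomes a zoom-by-zoom square in the magnifier window, where
       zoom is the user-selected level (2 through 9).                       */
    void draw_magnified(Display *dpy, Window win, GC gc, XImage *img, int zoom)
    {
        int x, y;
        unsigned long pixel;

        for (y = 0; y < img->height; y++) {
            for (x = 0; x < img->width; x++) {
                pixel = XGetPixel(img, x, y);
                XSetForeground(dpy, gc, pixel);
                XFillRectangle(dpy, win, gc, x * zoom, y * zoom,
                               (unsigned int)zoom, (unsigned int)zoom);
            }
        }
    }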

We experimented with moving the entire dynamag application around on the screen when the program was in mobile mode, but found the performance to be insufficient, in that window movement sometimes lagged noticeably behind that of the pointer. Instead, we hide the application window from the screen when entering this mode, using the low level Xlib window management routines for this purpose. The resulting mix of function calls to the two libraries (low level Xlib and the higher level Xt toolkit) proved to be fragile, and some experimentation was required to arrive at an implementation that correctly processes all of the incoming X events.

The automatic refresh of an anchored magnification window has been implemented through the use of the timeout facility provided by the Xt library calls XtAppAddTimeOut() and XtRemoveTimeOut(). As originally coded, we often noticed a performance lag in the response of dynamag's command buttons when the selected magnification area was large. We determined that interaction between the timeout function calls and processing of other X events was the cause. To solve the problem, we modified our event handler to prevent the backlog of events that we were seeing.
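A minimal sketch of such a self-rearming timeout follows; refresh_view() and interval_ms are hypothetical stand-ins for the real capture-and-redraw step and the user's update interval, not actual dynamag identifiers.

    #include <X11/Intrinsic.h>

    /* Hypothetical sketch: periodic refresh of an anchored magnification
       area via the Xt timeout facility.                                  */
    extern void refresh_view(void);       /* capture and redraw (assumed)    */
    extern unsigned long interval_ms;     /* e.g. 250 for 0.25 seconds       */

    static XtIntervalId refresh_timer;

    static void refresh_cb(XtPointer client_data, XtIntervalId *id)
    {
        XtAppContext app = (XtAppContext)client_data;

        refresh_view();                   /* grab and redraw the anchored area */

        /* Re-arm the timer; calling XtRemoveTimeOut(refresh_timer) elsewhere
           cancels the cycle, e.g. when the user leaves anchored mode.        */
        refresh_timer = XtAppAddTimeOut(app, interval_ms, refresh_cb,
                                        (XtPointer)app);
    }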

Even with these issues resolved, the sheer amount of pixel data continuously transferred between dynamag and the display server remains the largest hindrance to program performance when the magnification area becomes large. Adding code to take advantage of the shared memory extension to X would improve the program's speed, but only if the display belongs to the workstation running dynamag. At the time of this writing, the Disability Action Committee for X (DACX) is working to complete a screen magnification extension designed specifically to facilitate the writing of screen magnification programs such as dynamag. We hope to rewrite portions of dynamag to take advantage of this new extension when it becomes available.

RELATED MAGNIFICATION SYSTEMS

There is an important distinction between screen magnifiers such as dynamag and special purpose document viewing programs which have built-in ``magnifiers'' of their own. For example, xdvi, which displays files in the format produced by TeX and LaTeX, has a magnification feature which operates very smoothly. Because document viewers such as xdvi have a complete, static representation of the entire image to be displayed when execution starts, some or all of the image can be precomputed. In contrast to this, dynamag must frequently obtain an updated snapshot of the screen display, which can change nearly continuously.

Several teams investigating data visualization have recently explored the idea of displaying all of a large graph or document, so as to allow the user to retain global context, while showing full (enlarged) detail for one or several portions of the data being viewed [15,16]. This is accomplished by shrinking and distorting those parts of the information that are not currently being magnified. It must be noted, however, that this work is again aimed at the display of a single, static document, while a workstation display is very rarely static. We must support an interaction paradigm in which the user may have to refer frequently to areas of the screen away from the current task focus (i.e., the area under magnification) when information on the display changes.

Chin-Purcell's puff program [4] is the only system other than UnWindows of which we are aware that is designed to provide general screen magnification under X. The program operates in a mode very similar to our mobile mode with a stationary display window. It can also be set up to reposition the magnification area automatically in reaction to screen output. The mechanism requires that individual applications register themselves with puff when starting up. This allows puff to ignore screen changes deemed unimportant by the user, such as the redrawing of a clock every minute, but it places the burden on the user to make sure that he/she registers every ``important'' application and pop-up window that might appear on the screen.

FINDING THE MOUSE POINTER

Knowing where the mouse pointer is situated on the screen is essential to interacting with today's graphical computer interfaces. Yet even users with normal vision often have difficulty in seeing the pointer on a display populated with many windows and icons! UnWindows V1 provides a set of tools that utilize both the visual and aural sensory modalities to convey clues as to the pointer's location.

Visual Feedback

An obvious first attempt at making the pointer easier to see is to make it larger. However, under the X Window System this is difficult to achieve, because each individual application window has the ability to define the local shape of the pointer (that is, how it should appear within the borders of that window). Some of these applications allow the user to redefine the pointer shape (referred to in X as the cursor), but many do not.

Our solution is an external visual indicator, a dynamic icon in a fixed location, to assist in highlighting the pointer's position. The UnWindows coloreyes utility is a modified version of the xeyes program that comes bundled with the X Window System software from MIT. The xeyes program draws a stylized pair of eyes which continually gaze toward the mouse pointer. This provides directional information. In our enhanced version, the eyes also give distance information by changing color. The user can easily modify the colors that are displayed and the ranges of screen distance with which they are associated.

Preferences may be set by the user from a graphical settings window, shown in Figure 3. The top portion of the window displays the current colors within a partitioned box. The color representing the area closest to the eyes is situated in the leftmost patch of the box. The relative width of each color patch indicates the fraction of the total screen distance that is represented by that color.

   Figure 3: The control panel for the coloreyes application, shown much smaller than actual size. The rectangle in the upper right corner shows current colors and associated distances.

The bottom area of the settings window is an RGB color mixer, which allows the user to change existing colors and to set colors used for new partitions. The three sliders represent the intensities of red, green, and blue; the color resulting from the mixture of these values is displayed in the box to their left. Allowing the user to change colors enables him or her to satisfy personal aesthetic preferences. More importantly, however, this feature is essential for users who suffer from various forms of color blindness, or for whom vision is highly dependent upon the contrast between foreground and background.
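As an illustration of how such slider values might be turned into a displayable color under X, the hedged sketch below allocates a read-only colormap entry with XAllocColor(). The 0-255 slider range and the function name are assumptions for illustration, not details of the actual coloreyes implementation.

    #include <X11/Xlib.h>

    /* Hypothetical sketch: turn three 0-255 slider values into a pixel
       value usable for drawing, via the default colormap.              */
    unsigned long mix_color(Display *dpy, int red, int green, int blue)
    {
        Colormap cmap = DefaultColormap(dpy, DefaultScreen(dpy));
        XColor c;

        c.red   = (unsigned short)(red   * 257);  /* scale 0-255 to 0-65535 */
        c.green = (unsigned short)(green * 257);
        c.blue  = (unsigned short)(blue  * 257);
        c.flags = DoRed | DoGreen | DoBlue;

        if (!XAllocColor(dpy, cmap, &c))
            return BlackPixel(dpy, DefaultScreen(dpy)); /* fallback on failure */
        return c.pixel;
    }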

Aural Feedback

While coloreyes provides pointer location information in a geographic sense (i.e., direction and distance from a fixed point), sound cues are used to provide an alternative frame of reference in terms of the basic components of the interface: windows, icons, and the display borders. A collection of three programs provides these audio functions.

We have created a modified version of the public domain twm window manager to add a sound playing capability. In particular, we have augmented the code for the HandleEnterNotify() function, which is executed whenever the mouse pointer has been moved into a window on the screen. UnWindows keeps a file of window/sound associations unique to each user. Each entry contains a window name, the name of an audio file, and a number representing volume level. When a new window is entered, we compare the name of that window with the names stored in the user's settings file. If a match (Footnote 2) is found, the associated audio file is played on the system speaker at the specified volume.
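A hedged sketch of the prefix matching described in Footnote 2 is given below; the structure layout and names are illustrative assumptions rather than the actual UnWindows data types.

    #include <string.h>

    /* Hypothetical sketch: a window name matches an association-list entry
       when the stored name is an exact prefix of the window's name.        */
    struct sound_assoc {
        const char *window_name;   /* e.g. "emacs"          */
        const char *audio_file;    /* e.g. "chime.au"       */
        int         volume;        /* playback volume level */
    };

    const struct sound_assoc *
    lookup_sound(const struct sound_assoc *assoc_list, int n,
                 const char *window_name)
    {
        int i;
        for (i = 0; i < n; i++) {
            size_t len = strlen(assoc_list[i].window_name);
            if (strncmp(window_name, assoc_list[i].window_name, len) == 0)
                return &assoc_list[i];   /* play this entry's audio file */
        }
        return NULL;                     /* no match: stay silent        */
    }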

Two companion programs provide additional functionality to the audio capabilities of UnWindows. One of these monitors the pointer's screen location and plays a sound whenever the pointer moves within a threshold (default = 5 pixels) of the edge of the screen. Each of the four screen edges can be assigned a different sound and/or volume.
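One plausible way to implement such an edge check, sketched below under the assumption that the monitor polls the pointer with XQueryPointer(), is to compare the pointer's root coordinates against the screen dimensions; the edge codes and function name are hypothetical.

    #include <X11/Xlib.h>

    /* Hypothetical sketch: report which screen edge, if any, the pointer
       is within `threshold' pixels of (the default threshold is 5).      */
    enum { EDGE_NONE, EDGE_LEFT, EDGE_RIGHT, EDGE_TOP, EDGE_BOTTOM };

    int near_screen_edge(Display *dpy, int threshold)
    {
        int scr = DefaultScreen(dpy);
        Window r, c;
        int x, y, wx, wy;
        unsigned int mask;

        XQueryPointer(dpy, RootWindow(dpy, scr), &r, &c,
                      &x, &y, &wx, &wy, &mask);

        if (x < threshold)                            return EDGE_LEFT;
        if (x >= DisplayWidth(dpy, scr) - threshold)  return EDGE_RIGHT;
        if (y < threshold)                            return EDGE_TOP;
        if (y >= DisplayHeight(dpy, scr) - threshold) return EDGE_BOTTOM;
        return EDGE_NONE;
    }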

The second utility allows the user to create and update his/her personal window/sound association list. Sounds may be previewed, and the desired volume setting for each modified at will. Since recording levels vary from one audio file to another, it is necessary to provide individual playback volume control for each sound. A sample window/sound association list is shown in Figure 4, which shows a portion of the interface for the list maintenance utility program.

   Figure 4: The window/sound association list maintenance program.

An initial question in the design of these tools was the source of sounds to be played. Previous work by Gaver [6] and by Blattner et al. [3] in the use of sound allowed their systems to assign unique sounds to the complete set of objects and actions they wished to identify aurally. In UnWindows V1, however, the set of sounds we needed had to correspond to the application windows which might appear on an X display - a large and ever-growing set. Thus, we feel that our method of letting the user select sounds to represent windows is the most appropriate mechanism for our case.

We do not provide a specific utility within UnWindows V1 to allow the user to record new sounds. However, a number of programs are available for this purpose, including AudioTool, one of the applications which is included with Sun SPARCstations. Although we did not pursue it, the addition of speech synthesis hardware would add further flexibility to the system. Currently, when a window is selected whose name is not in the user's window/sound association list, no sound is played. A speech synthesizer could pronounce the names of those windows not associated with any sound. Unfortunately, speech synthesis is neither common nor inexpensive on workstations running X.

Implementation Issues

Our changes to the original xeyes program consist entirely of additions to the code. The method used to compute the pupils' positions, for instance, remains unchanged. On the other hand, the color in which the pupils and the eyes' outlines are rendered can no longer be set at the beginning of program execution, but must instead be continuously recalculated along with pupil positioning.

The method of computing the color used to render the eyes of coloreyes is illustrated in Figure 5. An imaginary line is drawn from the center of the coloreyes icon E, through the current pointer location P, until it intersects one of the borders of the screen at point B. The resulting ratio of distances is computed as R = | P - E | / | B - E | , which will have a value between zero and one. The user's current color settings are then examined and the appropriate hue chosen.

   Figure 5: The formula used to determine the color of the eyes in the coloreyes program.
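For concreteness, the hedged sketch below computes the ratio R and selects a color from the user's current settings. Representing the partitions as an array of fractional widths is our assumption for illustration, not the actual coloreyes data structure.

    #include <math.h>

    /* Hypothetical sketch of the distance-to-color mapping: compute
       R = |P - E| / |B - E| and pick the color partition whose range of
       relative distances contains R.  The widths array holds one fraction
       of the 0..1 range per color, summing to 1.0.                        */
    int choose_color_index(double ex, double ey,      /* eye icon center E  */
                           double px, double py,      /* pointer location P */
                           double bx, double by,      /* border point B     */
                           const double *widths, int ncolors)
    {
        double r = hypot(px - ex, py - ey) / hypot(bx - ex, by - ey);
        double upper = 0.0;
        int i;

        for (i = 0; i < ncolors; i++) {
            upper += widths[i];
            if (r <= upper)
                return i;                /* leftmost patch = closest range */
        }
        return ncolors - 1;              /* guard against rounding error   */
    }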

To impart audio output to UnWindows V1, we made some minor modifications to the play program included with the SunOS operating system in the /usr/demo/SOUND directory, incorporating it into our modified twm. All audio files to be played by UnWindows must be encoded in the Sun standard 8 bit mu-law format which the play program recognizes. Audio files in other formats can be converted into 8 bit mu-law by programs such as the public domain SOX, written by Norskog et al. [14].

Unlike the other UnWindows V1 programs, those which play audio files are compatible only with Sun SPARCstations at this time. For the future, we can hope for an audio standard among workstation manufacturers, comparable to X as a graphics standard. For now, however, users wishing to port UnWindows to a new architecture must modify the sound generation function appropriately.

USER FEEDBACK

UnWindows V1 has been released to teams within several organizations, including IBM, Sun Microsystems, DACX, the University of Washington's Adaptive Technology Laboratory, and RPI. In addition to conducting informal small scale user testing with visually impaired individuals (the second author among them), we distributed a user survey to all who requested copies of UnWindows directly from us. Overall, the comments received have been encouraging and positive.

The coloreyes program consistently received positive comments from both visually impaired and normal-sighted users. One visually impaired user commented that while at first he thought of coloreyes as ``nothing more than a nifty frill,'' he found after continued use that it was a natural, ``almost subconscious'' aid in locating the screen pointer. This same user reported that he found it too distracting to have all of the windows generating sounds; he configured his system to make a ``non-intrusive click'' only when the pointer approached the screen borders.

Reaction to dynamag has been mixed. Users vary in their preference of magnification mode. Some have noted that, as discussed above, the program's performance can be less than desirable under certain circumstances, and have suggested the addition of mouse and/or keyboard accelerators (similar to those in puff) which would allow changes to dynamag's behavior without having to manipulate the program's interface windows directly. Such a feature would have to be designed with great care, however, so as to eliminate (or at least minimize) conflicts with applications that use the same accelerators for different functions.

SYSTEM AVAILABILITY

UnWindows V1 is freely available via anonymous ftp; for more information, please contact the first author. The tools are written in C and utilize only the Xt and Xaw toolkits provided with the standard X Window System release. The utilities are independent of one another, so that the user can choose to run any or all of them, as required. A detailed exposition of functionality from the user's viewpoint may be found elsewhere [12].

Our programs were developed for Sun SPARCstations running the SunOS operating system. However, users wishing to compile UnWindows for other systems should need to modify only those portions of the code that generate audio output. All of the UnWindows utilities, with the exception of our modified twm (which is itself a window manager), have been tested under mwm and olwm, two popular alternative window managers. In addition, users of UnWindows V1 have compiled dynamag and coloreyes to run on DEC (both MIPS and Alpha based) and IBM workstations, without modification.

PLANNED ENHANCEMENTS

UnWindows continues to evolve. V2, currently under development by the authors and G. Bowden Wise, will help users who are blind, who are hearing impaired, and more [8]. The ultimate objective is to develop transparent interface software which will afford access to (certain categories of) applications without modification. Is this achievable? With the right technology, we believe so. Indeed, our work on UnWindows V2 is not aimed solely at people with disabilities. Rather, we are investigating a new approach to multimodal systems in general that we hope will prove broadly applicable.

We seek to develop a new multimedia interaction technology, in which information is not merely regurgitated ``as is'' but rather is first processed at a high level of abstraction and then distributed among the sensory modalities as required. The hypothesis is that it is impractical for designers of any but the simplest multimedia interfaces to rigidly allocate the output of their systems to specific human sensory channels, because what constitutes acceptable output may depend upon factors which cannot be known when the code is written (e.g., the extra-machine environment, the need to avoid sensory overload due to other applications running concurrently, and of course the need to accommodate a disability).

These observations have led Glinert and Blattner to propose a new class of object in the interface called the metawidget [2,7]. These abstractions of the widgets with which we are familiar consist of clusters of alternative representations for some information, along with built-in method(s) for selecting among them. The selection methods, as well as the representations themselves, may be time dependent. The metawidget run time environment maintains data on currently active widgets, computes the total cognitive load (according to system specified criteria) to detect overloading, and then posts and/or modifies representations as required. The technology supports a layered approach to multimodal interface construction: a visual or aural toolkit is used to represent information within a modality, while metawidgets constitute the higher level building blocks across modalities.

Properly designing a metawidget's palette of representations, and the mechanism for selecting among them, will clearly be very tricky. Although many open questions remain, we nevertheless hope to have a prototype of UnWindows V2 that embodies metawidget technology available for distribution and preliminary user testing later this year. The implementation is being carried out in C++ on a platform that consists of an IBM PC with enhanced sound output capabilities running Microsoft Windows. For additional information, please contact the second author.

ACKNOWLEDGEMENTS

This research was supported, in part, by the National Science Foundation under contracts CDA-9015249, CDA-9214887, CDA-9214892 and IRI-9213823.

An early version of UnWindows was designed and implemented by Gary Ormsby, who is now with IBM in Austin, Texas.

References

1
B. B. Bederson and J. D. Hollan. Pad++: A Zooming Graphical Interface for Exploring Alternate Interface Physics. In Proc. 7th Annual Symposium on User Interface Software and Technology (UIST'94), Marina del Rey, November 2-4, 1994, pages 39-48. ACM Press.

2
M. M. Blattner, E. P. Glinert, J. A. Jorge, and G. R. Ormsby. Metawidgets: Towards a Theory of Multimodal Interface Design. In Proc. COMPSAC'92, Chicago, September 22-25, 1992, pages 115-120. IEEE Computer Society Press.

3
M. M. Blattner, D. A. Sumikawa, and R. M. Greenberg. Earcons and Icons: Their Structure and Common Design Principles. Human-Computer Interaction, 4(1):11-44, 1989.

4
K. Chin-Purcell. Puff computer software. Available (as of this writing) via anonymous ftp from ftp.arc.umn.edu.

5
W. K. Edwards and E. D. Mynatt. An Architecture for Transforming Graphical Interfaces. In Proc. 7th Annual Symposium on User Interface Software and Technology (UIST'94), Marina del Rey, November 2-4, 1994, pages 39-48. ACM Press.

6
W. W. Gaver. The SonicFinder: An Interface That Uses Auditory Icons. Human-Computer Interaction, 4(1):67-94, 1989.

7
E. P. Glinert and M. M. Blattner. Programming the Multimodal Interface. In Proc. 1st ACM Int. Conf. on Multimedia (MULTIMEDIA'93), Anaheim, August 2-6, 1993, pages 189-197. ACM Press.

8
E. P. Glinert, R. L. Kline, G. R. Ormsby, and G. B. Wise. UnWindows: Bringing Multimedia Computing to Users with Disabilities. In Proc. IISF/ACMJ International Symposium on Computers as Our Better Partners, Tokyo, March 7-9, 1994, pages 34-42. World Scientific.

9
E. P. Glinert and R. E. Ladner. A Large-Font Virtual Terminal Interface: A Software Prosthesis for the Visually Impaired. Communications of the ACM, 27(6):567-572, June 1984.

10
H. Lieberman. Powers of Ten Thousand: Navigating in Large Information Spaces. In Proc. 7th Annual Symposium on User Interface Software and Technology (UIST'94), Marina del Rey, November 2-4, 1994, pages 15-16. ACM Press.

11
E. P. Glinert and B. W. York. Computers and People with Disabilities. Communications of the ACM, 35(5):32-35, May 1992.

12
R. L. Kline and E. P. Glinert. X Windows Tools for Low Vision Users. SIGCAPH Newsletter, Number 49, pages 1-5, March 1994.

13
M. Krell. LVRS: The Low Vision Research System. In Proc. First ACM Conf. on Assistive Technologies (ASSETS'94), Marina del Rey, October 31-November 1, 1994, pages 136-140. ACM Press.

14
L. Norskog et al. SOX (Sound Exchange) computer software. Available (as of this writing) via anonymous ftp from ftp.cwi.nl and other sites.

15
G. G. Robertson and J. D. Mackinlay. The Document Lens. In Proc. 6th Annual Symposium on User Interface Software and Technology (UIST'93), Atlanta, November 3-5, 1993, pages 101-108.

16
M. Sarkar, S. S. Snibbe, O. J. Tversky, and S. P. Reiss. Stretching the Rubber Sheet: A Metaphor for Viewing Large Layouts on Small Screens. In Proc. 6th Annual Symposium on User Interface Software and Technology (UIST'93), Atlanta, November 3-5, 1993, pages 81-91.

17
G. C. Vanderheiden. Nonvisual Alternative Display Techniques for Output from Graphics-Based Computers. J. Visual Impairment and Blindness, 83(8):383-390, October 1989.

18
G. C. Vanderheiden, W. Boyd, J. H. Mendenhall, and K. Ford. Development of a Multisensory, Nonvisual Interface to Computers for Blind Users. In Proc. 35th Annual Meeting of the Human Factors Society, pages 315-318, 1991.

19
A. R. Vener and E. P. Glinert. MAGNEX: A Text Editor for the Visually Impaired. In Proc. 16th Annual ACM Computer Science Conference, Atlanta, February 23-25, 1988, pages 402-407.

FOOTNOTES:

1
The effect is quite similar to that obtained by pointing a video camera at a monitor which is displaying the output of that same camera.

2
When performing name comparisons, we do not check for exact matches. Rather, a window name is considered to match a name in the association list as long as its first characters exactly match an entry in the list. This allows the desired matching to occur even for applications that change their window titles.


