Attentional Limitations with Head-Up Displays

Recent models of visual information processing suggest that visual attention can be focused either on Head-Up Displays (HUDs) or on the world beyond them, but not on both simultaneously. This hypothesis was tested in a part-task simulation in which subjects viewed a simulated approach to a runway with a HUD superimposed. An alphanumeric cue ('IFR' or 'VFR') appeared on either the HUD or the runway and was followed by two sets of three geometric forms: one set on the HUD and one set on the runway. Each set contained one potential target, either a stop sign or a diamond. If the cue spelled 'IFR', subjects made a speeded response based on the identity of the HUD target; if the cue spelled 'VFR', subjects made a speeded response based on the identity of the runway target. Regardless of cue location (HUD or runway), responses were faster when the cue and the relevant target were part of the same perceptual group (i.e., both on the HUD or both on the runway) than when they were part of different perceptual groups. These results, as well as others, suggest that attentional constraints place severe limits on the ability of pilots to process HUD-referenced information and world-referenced information simultaneously. In addition, they provide direct evidence that transitioning from processing HUD information to processing world information requires an attention shift. Implications for HUD design are considered.
McCann, Robert S. (Sterling Software, Palo Alto, CA, United States)
Foyle, David C. (NASA Ames Research Center, Moffett Field, CA, United States)
Johnston, James C. (NASA Ames Research Center, Moffett Field, CA, United States)