Abstract
Virtual environments for gaming and simulation provide dynamic and adaptive experiences, but, despite advances in multisensory interfaces, these experiences remain primarily visual. To support real-time dynamic adaptation, interactive virtual environments could implement techniques to predict and manipulate human visual attention. One promising way to develop such techniques is to base them on psychophysical observations, an approach that requires a sound understanding of how visual attention is allocated. Understanding how this allocation changes with a user’s task offers clear benefits for developing these techniques and for improving virtual environment design. With this aim, we investigated the effect of task on visual attention in interactive virtual environments. We recorded fixation data from participants completing freeview, search, and navigation tasks in three different virtual environments. We quantified differences in visual attention between conditions by measuring the predictiveness of a low-level saliency model and its corresponding color, intensity, and orientation feature-conspicuity maps, as well as fixation center bias, fixation depth, fixation duration, and saccade amplitude. Our results show that task does affect visual attention in virtual environments. Navigation relies more heavily than search or freeview on intensity conspicuity when allocating visual attention; it also produces fixations that are more central, longer, and deeper into the scene. Further, our results suggest that freeview and search tasks are difficult to distinguish. These results provide important guidance for designing virtual environments for human interaction and identify future avenues of research toward “attention-aware” virtual worlds.
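The abstract does not state how the “predictiveness” of the saliency model and its feature-conspicuity maps was scored. A common choice in the eye-tracking literature is normalized scanpath saliency (NSS): z-score the map, then average its values at fixated locations. The Python sketch below illustrates that general idea; the function name and toy data are illustrative assumptions, not the authors’ code or data.

import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """Score how well a saliency map predicts fixation locations (NSS).

    saliency_map : 2D array of saliency (or conspicuity) values.
    fixations    : iterable of (row, col) fixation coordinates.

    The map is standardized to zero mean and unit variance, and the
    mean standardized value at fixated pixels is returned. Higher
    values mean the map is more predictive of where observers looked.
    """
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / (s.std() + 1e-12)  # z-score the whole map
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())

# Toy example: a 100x100 map with one salient blob, fixations near it.
yy, xx = np.mgrid[0:100, 0:100]
toy_map = np.exp(-((yy - 40) ** 2 + (xx - 60) ** 2) / (2 * 10.0 ** 2))
fixations_on_blob = [(38, 58), (42, 61), (40, 63)]
print(normalized_scanpath_saliency(toy_map, fixations_on_blob))  # well above 0

The same scoring can be applied separately to the color, intensity, and orientation conspicuity maps, allowing their predictive power to be compared across the freeview, search, and navigation conditions.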
Reference
Jacob Hadnett-Hunter, George Nicolaou, Eamonn O’Neill, and Michael J. Proulx. 2019. The Effect of Task on Visual Attention in Interactive Virtual Environments. ACM Transactions on Applied Perception 16, 3 (2019).