Abstract:
To improve input efficiency in cross-device interaction environments and to reduce the cognitive workload incurred when switching visual attention among devices, an eye-tracking-based attentive user interface for cross-device interaction was proposed. First, the edges of the device screens were extracted, and the geometric features and color histograms of the screens were combined to detect the different devices. The pupil center corneal reflection (PCCR) algorithm was used to compute gaze fixation coordinates while supporting head movement. Areas of interest were detected based on gaze dwell time and a collaborative recognition scheme across devices. Furthermore, a task management model for the distributed attentive user interface was proposed and applied to control task allocation, pausing, resumption, and evaluation. In addition, production rules were applied to adapt the user interfaces, and related design guidelines were defined. Finally, a cross-device English reading prototype system was built, including functions such as notification of the last reading position and adaptive annotation of word explanations. The user study showed that visual attention was detected with 94% accuracy, and that the participants' reading comprehension, reading efficiency, and subjective satisfaction also improved.
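As a minimal illustration of the dwell-time step mentioned above, the sketch below implements a generic dispersion-threshold (I-DT style) fixation detector over a stream of gaze samples. The sample format, function names, and threshold values are assumptions for illustration, not the authors' implementation:

```python
def detect_fixations(samples, max_dispersion=30.0, min_dwell_ms=200.0):
    """Generic I-DT style dwell detection (illustrative, not the paper's code).

    samples: list of (t_ms, x, y) gaze points in screen pixels.
    Returns a list of (start_ms, end_ms, cx, cy) fixations whose dwell
    time is at least min_dwell_ms.
    """
    fixations = []
    window = []
    for s in samples:
        window.append(s)
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # Dispersion = extent of the window in x plus extent in y.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # Window spread too far: close any fixation it was holding.
            done = window[:-1]
            if done and done[-1][0] - done[0][0] >= min_dwell_ms:
                fixations.append(_to_fixation(done))
            window = [s]  # restart the window at the outlying sample
    if window and window[-1][0] - window[0][0] >= min_dwell_ms:
        fixations.append(_to_fixation(window))
    return fixations

def _to_fixation(pts):
    # Summarize a stable window as (start, end, centroid x, centroid y).
    cx = sum(p[1] for p in pts) / len(pts)
    cy = sum(p[2] for p in pts) / len(pts)
    return (pts[0][0], pts[-1][0], cx, cy)
```

A fixation reported this way on a given screen region would then mark that region as an area of interest; in the cross-device setting described above, each device would additionally confirm which screen the fixation landed on.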