This paper reports the utility of eye-gaze, voice, and manual response in the design of multimodal user interfaces. A device- and application-independent user interface model (VisualMan) for 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface according to the priority of the modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed, and a virtual environment using the multimodal user interface model is proposed.
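To make the priority-and-context integration idea concrete, the sketch below shows one possible way such a scheme could be organised. It is illustrative only: the names (`MODALITY_PRIORITY`, `InputEvent`, `integrate`) and the specific priority ordering are assumptions for exposition, not the paper's VisualMan implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed priority ordering for illustration; the paper does not
# specify the ranking at this level of detail.
MODALITY_PRIORITY = {"manual": 3, "voice": 2, "gaze": 1}

@dataclass
class InputEvent:
    modality: str           # "gaze", "voice", or "manual"
    target: Optional[str]   # object the event refers to, if any
    command: Optional[str]  # e.g. "select", "rotate", "release"
    timestamp: float

def integrate(events, context):
    """Resolve concurrent multimodal events into a single action.

    Events arriving within one integration window are ranked by
    modality priority; the interaction context (e.g. the current
    gaze target) completes an under-specified event.
    """
    if not events:
        return None
    # Highest-priority modality wins when commands conflict.
    best = max(events, key=lambda e: MODALITY_PRIORITY[e.modality])
    # A command without an explicit target (e.g. a spoken "select")
    # borrows the current gaze target from the context.
    target = best.target or context.get("gaze_target")
    if best.command is None or target is None:
        return None
    return (best.command, target)

if __name__ == "__main__":
    context = {"gaze_target": "cube_3"}
    events = [
        InputEvent("gaze", "cube_3", None, 0.01),
        InputEvent("voice", None, "select", 0.03),
    ]
    print(integrate(events, context))  # ('select', 'cube_3')
```

In this sketch a spoken command is disambiguated by the concurrent gaze target, while a manual response, if present in the same window, would take precedence, which is one plausible reading of integration "based on the priority of modalities and interaction context".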