SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR
Hyunsung Cho,
Naveen Sendhilnathan,
Michael Nebeling,
Tianyi Wang,
Purnima Padmanabhan,
Jonathan Browder,
David Lindlbauer,
Tanya R. Jonker,
Kashyap Todi.
Published at ACM UIST 2024
Abstract
We introduce SonoHaptics, an audio-haptic cursor for gaze-based 3D object selection. SonoHaptics addresses challenges with providing accurate visual feedback during gaze-based selection in Extended Reality (XR), such as the lack of world-locked displays in no- or limited-display smart glasses, and visual inconsistencies. To enable users to distinguish objects without visual feedback, SonoHaptics employs the concept of cross-modal correspondence in human perception to map visual features of objects (color, size, position, material) to audio-haptic properties (pitch, amplitude, direction, timbre). We contribute data-driven models for determining cross-modal mappings of visual features to audio and haptic features, and a computational approach to automatically generate audio-haptic feedback for objects in the user's environment. SonoHaptics provides global feedback that is unique to each object in the scene, and local feedback to amplify differences between nearby objects. Our comparative evaluation shows that SonoHaptics enables accurate object identification and selection in a cluttered scene without visual feedback.
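The paper derives its cross-modal mappings from data-driven models, which are not reproduced here. As a rough illustration of the underlying idea only, the Python sketch below maps color to pitch, size to amplitude, position to direction, and material to timbre; all feature names, value ranges, and types are hypothetical and chosen for readability, not taken from the paper.

# Illustrative sketch only: the actual mappings in SonoHaptics are data-driven;
# the linear ranges, feature names, and timbre table below are invented.
from dataclasses import dataclass

@dataclass
class VisualFeatures:
    hue: float        # normalized color hue, 0..1
    size: float       # normalized object size, 0..1
    azimuth: float    # horizontal position relative to the user, -1..1
    material: str     # e.g., "metal", "wood", "glass"

@dataclass
class AudioHapticCue:
    pitch_hz: float   # audio pitch
    amplitude: float  # audio/haptic intensity, 0..1
    pan: float        # spatial direction of the cue, -1..1
    timbre: str       # waveform / haptic texture label

def lerp(lo: float, hi: float, t: float) -> float:
    """Linear interpolation between lo and hi for t in [0, 1]."""
    return lo + (hi - lo) * t

# Hypothetical cross-modal mapping: color -> pitch, size -> amplitude,
# position -> direction, material -> timbre.
TIMBRES = {"metal": "square", "wood": "triangle", "glass": "sine"}

def cross_modal_cue(v: VisualFeatures) -> AudioHapticCue:
    return AudioHapticCue(
        pitch_hz=lerp(220.0, 880.0, v.hue),
        amplitude=lerp(0.2, 1.0, v.size),
        pan=v.azimuth,
        timbre=TIMBRES.get(v.material, "sine"),
    )

In this illustration, each object in the scene would receive its own cue (the paper's "global" feedback); the paper additionally describes "local" feedback that amplifies differences between nearby objects, which is not shown here.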
Materials
Bibtex
@inproceedings{Cho2024SonoHaptics,
  author    = {Cho, Hyunsung and Sendhilnathan, Naveen and Nebeling, Michael and Wang, Tianyi and Padmanabhan, Purnima and Browder, Jonathan and Lindlbauer, David and Jonker, Tanya R. and Todi, Kashyap},
  title     = {SonoHaptics: An Audio-Haptic Cursor for Gaze-Based Object Selection in XR},
  year      = {2024},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  doi       = {10.1145/3654777.3676384},
  keywords  = {Extended Reality, Sonification, Haptics, Multimodal Feedback, Computational Interaction, Gaze-based Selection},
  location  = {Pittsburgh, PA, USA},
  series    = {UIST '24}
}