HandAvatar: Embodying Non-Humanoid Virtual Avatars through Hands

Yu Jiang*, Zhipeng Li*, Mufei He, David Lindlbauer, Yukang Yan.
Published at ACM CHI 2023
[Teaser image]

Abstract

We propose HandAvatar, which enables users to embody non-humanoid avatars with their hands. HandAvatar leverages the high dexterity and coordination of users' hands to control virtual avatars, enabled through our novel approach for automatically generated joint-to-joint mappings. We contribute an observation study to understand users' preferences for hand-to-avatar mappings across eight avatars. Leveraging insights from the study, we present an automated approach that generates mappings between users' hands and arbitrary virtual avatars by jointly optimizing control precision, structural similarity, and comfort. We evaluated HandAvatar on static posing, dynamic animation, and creative exploration tasks. Results indicate that HandAvatar enables more precise control and requires less physical effort than a state-of-the-art body-to-avatar control method, while achieving a comparable sense of embodiment. We demonstrate HandAvatar's potential with applications including non-humanoid-avatar-based social interaction in VR, 3D animation composition, and VR scene design with physical proxies. We believe that HandAvatar unlocks new interaction opportunities, especially in Virtual Reality, by letting users become the avatar in applications such as virtual social interaction, animation, gaming, and education.
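To make the joint optimization concrete, below is a minimal sketch of how such a hand-to-avatar mapping could be posed as an assignment problem, assuming the per-pair cost decomposes into the three terms named above. The field names (rom, depth, effort), the specific cost terms, and the linear-assignment formulation are illustrative assumptions for this sketch, not HandAvatar's actual method.

import numpy as np
from scipy.optimize import linear_sum_assignment

def mapping_cost(hand_joints, avatar_joints, weights=(1.0, 1.0, 1.0)):
    # Build a |hand| x |avatar| cost matrix from three hypothetical terms
    # (stand-ins for control precision, structural similarity, and comfort).
    w_precision, w_structure, w_comfort = weights
    cost = np.zeros((len(hand_joints), len(avatar_joints)))
    for i, h in enumerate(hand_joints):
        for j, a in enumerate(avatar_joints):
            precision = abs(h["rom"] - a["rom"])      # range-of-motion mismatch
            structure = abs(h["depth"] - a["depth"])  # kinematic-chain depth mismatch
            comfort = h["effort"]                     # effort to articulate this hand joint
            cost[i, j] = (w_precision * precision
                          + w_structure * structure
                          + w_comfort * comfort)
    return cost

def optimal_mapping(hand_joints, avatar_joints, weights=(1.0, 1.0, 1.0)):
    # Minimize the total cost over one-to-one joint assignments (Hungarian algorithm).
    rows, cols = linear_sum_assignment(mapping_cost(hand_joints, avatar_joints, weights))
    return list(zip(rows.tolist(), cols.tolist()))

# Toy data: three hand joints mapped onto a three-joint avatar rig.
hand = [{"rom": 90, "depth": 1, "effort": 0.2},
        {"rom": 45, "depth": 2, "effort": 0.5},
        {"rom": 60, "depth": 3, "effort": 0.3}]
avatar = [{"rom": 85, "depth": 1},
          {"rom": 50, "depth": 3},
          {"rom": 70, "depth": 2}]
print(optimal_mapping(hand, avatar))  # [(0, 0), (1, 1), (2, 2)] with these weights

Decomposing the cost per hand-avatar joint pair is what makes a linear-assignment solver applicable: it returns the globally optimal one-to-one mapping in polynomial time. Cost terms that couple multiple pairs (for example, preserving adjacency between mapped joints) would require a different solver, such as graph matching.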

Materials

BibTeX

@inproceedings{Jiang23,
 author = {Jiang, Yu and Li, Zhipeng and He, Mufei and Lindlbauer, David and Yan, Yukang},
 title = {HandAvatar: Embodying Non-Humanoid Virtual Avatars through Hands},
 booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
 year = {2023},
 publisher = {Association for Computing Machinery},
 address = {New York, NY, USA},
 url = {https://doi.org/10.1145/3544548.3581027},
 doi = {10.1145/3544548.3581027},
 keywords = {virtual avatar, embodiment, Mixed Reality, gestural interaction},
 location = {Hamburg, Germany},
 series = {CHI '23}
}