Persistent Assistant: Seamless Everyday AI Interactions via Intent Grounding and Multimodal Feedback
Hyunsung Cho,
Jacqui Fashimpaur,
Naveen Sendhilnathan,
Jonathan Browder,
David Lindlbauer,
Tanya R. Jonker,
Kashyap Todi.
Published at ACM CHI 2025

Abstract
Current AI assistants predominantly use natural language interactions, which can be time-consuming and cognitively demanding, especially for frequent, repetitive tasks in daily life. We propose Persistent Assistant, a framework for seamless and unobtrusive interactions with AI assistants. The framework has three key functionalities: (1) efficient intent specification through grounded interactions, (2) seamless target referencing through embodied input, and (3) intuitive response comprehension through multimodal perceptible feedback. We developed a proof-of-concept system for everyday decision-making tasks, in which users can easily repeat queries over multiple objects using eye gaze and pinch gestures, and receive multimodal haptic and speech feedback. Our study shows that multimodal feedback enhances user experience and preference by reducing physical demand, increasing perceived speed, and enabling intuitive and instinctive human-AI assistant interaction. We discuss how our framework can be applied to build seamless and unobtrusive AI assistants for everyday persistent tasks.
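To illustrate the interaction loop the abstract describes, the following is a minimal, hypothetical sketch: the user's intent is grounded once as a persistent query, then re-applied to whatever object the user gazes at and confirms with a pinch, with haptic and speech feedback on each answer. All names, classes, and attributes below are illustrative assumptions, not taken from the paper's actual implementation.

from dataclasses import dataclass

@dataclass
class Target:
    """An object the user is currently gazing at (toy attributes for the example)."""
    name: str
    sugar_g: float

class PersistentAssistantSketch:
    def __init__(self):
        self.active_query = None  # the grounded, persistent intent

    def set_query(self, predicate, description: str):
        """Ground the intent once, e.g. 'is this snack low in sugar?'."""
        self.active_query = (predicate, description)

    def on_pinch(self, gazed_target: Target):
        """Repeat the persistent query on the currently gazed-at object."""
        if self.active_query is None:
            return
        predicate, description = self.active_query
        positive = predicate(gazed_target)
        # Multimodal feedback: a quick haptic cue plus a short spoken response.
        self.render_haptic(positive)
        self.render_speech(f"{gazed_target.name}: "
                           f"{'yes' if positive else 'no'} ({description})")

    def render_haptic(self, positive: bool):
        print("haptic:", "strong pulse" if positive else "soft pulse")

    def render_speech(self, utterance: str):
        print("speech:", utterance)

# Usage: specify the intent once, then sweep gaze + pinch across several objects.
assistant = PersistentAssistantSketch()
assistant.set_query(lambda t: t.sugar_g < 5.0, "checked against a 5 g sugar limit")
for snack in [Target("granola bar", 12.0), Target("almonds", 1.2)]:
    assistant.on_pinch(snack)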
Materials
Bibtex
@inproceedings{Cho25PersistentAssistant,
  author    = {Cho, Hyunsung and Fashimpaur, Jacqui and Sendhilnathan, Naveen and Browder, Jonathan and Lindlbauer, David and Jonker, Tanya R. and Todi, Kashyap},
  title     = {Persistent Assistant: Seamless Everyday AI Interactions via Intent Grounding and Multimodal Feedback},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  keywords  = {Wearable AI assistants, grounding, multimodal interaction, gaze and gesture input, haptic and speech feedback},
  location  = {Yokohama, Japan},
  series    = {CHI '25}
}