Can Foundation Models Watch, Talk and Guide You Step by Step to Make a Cake?

Abstract

Despite tremendous advances in AI, it remains a significant challenge to develop interactive task guidance systems that can offer situated, personalized guidance and assist humans in various tasks. These systems need a sophisticated understanding of both the user and the environment, and must make timely, accurate decisions on when and what to say. To address this issue, we created a new multimodal benchmark dataset, Watch, Talk and Guide (WTaG), based on natural interaction between a human user and a human instructor. We further proposed two tasks: User and Environment Understanding, and Instructor Decision Making. We leveraged several foundation models to study the extent to which these models can be quickly adapted to perceptually enabled task guidance. Our quantitative, qualitative, and human evaluation results show that these models can achieve fair performance in some cases with no task-specific training, but fast and reliable adaptation remains a significant challenge. Our benchmark and baselines will provide a stepping stone for future work on situated task guidance.

Publication
Findings of EMNLP 2023
Yuwei Bao
Ph.D. Candidate
Yichi Zhang
Ph.D. Candidate
Shane Storks
Ph.D. Candidate
Alexander De La Iglesia
Undergraduate Research Assistant
Megan Su
Undergraduate Research Assistant