Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties

Abstract

A major reason behind the recent success of large language models (LLMs) is their in-context learning capability, which makes it possible to rapidly adapt them to downstream text-based tasks by prompting them with a small number of relevant demonstrations. While large vision-language models (VLMs) have recently been developed for tasks requiring both text and images, they largely lack in-context learning over visual information, especially in understanding and generating text about videos. In this work, we implement Emergent In-context Learning on Videos (EILeV), a novel training paradigm that induces in-context learning over video and text by capturing key properties of pre-training data found by prior work to be essential for in-context learning in transformers. In our experiments, we show that EILeV-trained models outperform other off-the-shelf VLMs in few-shot video narration for novel, rare actions. Furthermore, we demonstrate that these key properties of bursty distributions, skewed marginal distributions, and dynamic meaning each contribute to varying degrees to VLMs' in-context learning capability in narrating procedural videos. Our results, analysis, and EILeV-trained models yield numerous insights about the emergence of in-context learning over video and text, creating a foundation for future work to optimize and scale VLMs for open-domain video understanding and reasoning.
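For intuition, the sketch below illustrates how one might assemble a single "bursty" training sequence with a skewed marginal distribution over action classes, in the spirit of the data properties described above. This is a hypothetical illustration, not the paper's released code: the `examples_by_class` structure, function names, and sampling parameters are all assumptions made for this sketch.

```python
import random

# Hypothetical sketch: building one bursty context + query sequence of
# (video clip, narration) examples. `examples_by_class` maps an action
# class to a list of its examples; this structure is an assumption.

def zipfian_weights(n, alpha=1.0):
    """Skewed marginal distribution over action classes (rank 0 most frequent)."""
    weights = [1.0 / (rank + 1) ** alpha for rank in range(n)]
    total = sum(weights)
    return [w / total for w in weights]

def build_bursty_sequence(examples_by_class, num_context=8, burst_size=3):
    """Sample a context + query where the query's action class re-appears
    ("bursts") several times in the context, and the remaining slots are
    filled from a skewed (Zipfian) class distribution."""
    classes = list(examples_by_class.keys())
    weights = zipfian_weights(len(classes))

    # Draw the query's action class from the skewed marginal, then the query itself.
    query_class = random.choices(classes, weights=weights, k=1)[0]
    query = random.choice(examples_by_class[query_class])

    # Burst: several distinct examples of the query's class appear in the context.
    pool = examples_by_class[query_class]
    context = random.sample(pool, k=min(burst_size, len(pool)))

    # Fill the remaining context slots from the skewed marginal distribution.
    while len(context) < num_context:
        cls = random.choices(classes, weights=weights, k=1)[0]
        context.append(random.choice(examples_by_class[cls]))

    random.shuffle(context)
    return context, query
```

The burstiness (repeated in-context examples of the query's class) and the skewed class marginal are the two distributional properties prior work on transformers associates with emergent in-context learning; how "dynamic meaning" is realized depends on the dataset and is not shown in this sketch.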

Publication
EMNLP
Zheyuan "Brian" Zhang
Graduate Research Assistant
Fengyuan Hu
Undergraduate Research Assistant
Shane Storks
Ph.D. Candidate