Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties

A major reason behind the recent success of large language models (LLMs) is their in-context learning capability, which makes it possible to rapidly adapt them to downstream text-based tasks by prompting them with a small number of relevant …
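As a generic illustration of the capability this abstract refers to, the sketch below shows what few-shot in-context prompting of a vision-language model typically looks like: demonstration image/answer pairs are interleaved before a query image. All names here (build_icl_prompt, the segment format) are hypothetical, not this paper's interface.

```python
# Minimal, hypothetical sketch of few-shot in-context prompting for a
# vision-language model: demonstrations are interleaved image/text pairs,
# followed by a query image the model must answer about.

def build_icl_prompt(demos, query_image, task_instruction):
    """demos: list of (image_path, answer_text) demonstration pairs.
    Returns a list of segments a multimodal tokenizer could consume."""
    segments = [task_instruction]
    for image_path, answer in demos:
        segments.append({"image": image_path})  # demonstration image
        segments.append(f"Answer: {answer}")    # paired demonstration label
    segments.append({"image": query_image})     # query image, no answer given
    segments.append("Answer:")                  # the model completes this
    return segments

prompt = build_icl_prompt(
    demos=[("cat.jpg", "a cat on a sofa"), ("dog.jpg", "a dog in a park")],
    query_image="bird.jpg",
    task_instruction="Describe the main object in each image.",
)
```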

Multi-Object Hallucination in Vision-Language Models

Large vision-language models (LVLMs) often suffer from object hallucination, producing objects not present in the given images. While current benchmarks for object hallucination primarily concentrate on the presence of a single object class rather …
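To make the single-object vs. multi-object distinction the abstract draws concrete, here is a hypothetical sketch of the two probing styles; the exact prompt wording is an illustrative assumption, not this paper's benchmark.

```python
# Hypothetical probes contrasting single-object and multi-object
# hallucination evaluation: the first queries one class at a time,
# the second asks the model to attend to several classes jointly.

def single_object_probe(obj):
    return f"Is there a {obj} in the image? Answer yes or no."

def multi_object_probe(objs):
    listed = ", ".join(objs)
    return (f"Which of the following objects are present in the image: "
            f"{listed}? List only the ones that appear.")

print(single_object_probe("dog"))
print(multi_object_probe(["dog", "bicycle", "traffic light"]))
```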

LinkGPT: Teaching Large Language Models To Predict Missing Links

Large Language Models (LLMs) have shown promising results on various language and vision tasks. Recently, there has been growing interest in applying LLMs to graph-based tasks, particularly on Text-Attributed Graphs (TAGs). However, most studies have …
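As background for the task setting this abstract names, the sketch below shows one common way link prediction on a text-attributed graph is verbalized for an LLM: the source node's text and its neighbors' texts are serialized into a prompt, and the model is asked about a candidate edge. This illustrates the general setup only, not LinkGPT's method.

```python
# Hypothetical sketch: posing link prediction on a text-attributed graph
# (TAG) as a natural-language query. Node texts stand in for node features.

def link_prediction_prompt(source, candidate, neighbors):
    neighbor_block = "\n".join(f"- {text}" for text in neighbors)
    return (
        f"Source node: {source}\n"
        f"Known neighbors of the source node:\n{neighbor_block}\n"
        f"Candidate node: {candidate}\n"
        "Question: Is the candidate node likely linked to the source node? "
        "Answer yes or no."
    )

print(link_prediction_prompt(
    source="Paper: 'Attention Is All You Need'",
    candidate="Paper: 'BERT: Pre-training of Deep Bidirectional Transformers'",
    neighbors=["Paper: 'Neural Machine Translation by Jointly Learning "
               "to Align and Translate'"],
))
```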

GROUNDHOG: Grounding Large Language Models to Holistic Segmentation

Most multimodal large language models (MLLMs) learn language-to-object grounding through causal language modeling where grounded objects are captured by bounding boxes as sequences of location tokens. This paradigm lacks pixel-level representations …
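The location-token paradigm this abstract critiques can be illustrated with a short sketch: a bounding box is quantized into discrete bins and emitted as tokens inside the language stream. The bin count and token format below are illustrative assumptions, not GROUNDHOG's (which replaces this scheme with pixel-level masks).

```python
# Sketch of bounding-box grounding via location tokens: coordinates are
# normalized, quantized into NUM_BINS bins, and written as discrete tokens
# that a causal language model can generate.

NUM_BINS = 1000  # assumed quantization resolution

def box_to_location_tokens(box, image_w, image_h):
    """Quantize (x1, y1, x2, y2) pixel coordinates into location tokens."""
    x1, y1, x2, y2 = box
    coords = [x1 / image_w, y1 / image_h, x2 / image_w, y2 / image_h]
    bins = [min(int(c * NUM_BINS), NUM_BINS - 1) for c in coords]
    return "".join(f"<loc_{b}>" for b in bins)

# e.g. the phrase "a <loc_250><loc_166><loc_750><loc_833> dog" in the stream
print(box_to_location_tokens((160, 100, 480, 500), image_w=640, image_h=600))
```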

Inversion-Free Image Editing with Language-Guided Diffusion Models

Despite recent advances in inversion-based editing, text-guided image manipulation remains challenging for diffusion models. The primary bottlenecks include 1) the time-consuming nature of the inversion process; 2) the struggle to balance consistency …
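For context on why the inversion step is the first bottleneck named here: standard DDIM inversion (general background, not this paper's formulation) reconstructs the noise trajectory one timestep at a time,

$$x_{t+1} = \sqrt{\bar\alpha_{t+1}}\,\frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}} + \sqrt{1-\bar\alpha_{t+1}}\,\epsilon_\theta(x_t, t),$$

so each edit requires a full pass of network evaluations over all timesteps before any denoising can begin, which is what an inversion-free approach avoids.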

DriVLMe: Enhancing LLM-based Autonomous Driving Agents with Embodied and Social Experiences

Recent advancements in foundation models (FMs) have unlocked new prospects in autonomous driving, yet the experimental settings of these studies are preliminary, over-simplified, and fail to capture the complexity of real-world driving scenarios in …

LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent

3D visual grounding is a critical skill for household robots, enabling them to navigate, manipulate objects, and answer questions based on their environment. While existing approaches often rely on extensive labeled data or exhibit limitations in …

Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation

Zero-Shot Object Navigation (ZSON) enables agents to navigate towards open-vocabulary objects in unknown environments. Existing ZSON works mainly focus on following individual instructions to find generic object classes, neglecting the …

User Modeling Challenges in Interactive AI Assistant Systems

GIPCOL: Graph-Injected Soft Prompting for Compositional Zero-Shot Learning

Pre-trained vision-language models (VLMs) have achieved promising success in many fields, especially with the prompt-learning paradigm. In this work, we propose GIPCOL (Graph-Injected Soft Prompting for COmpositional Learning) to better explore the …