Communication and Verification in LLM Agents towards Collaboration under Information Asymmetry

Abstract

While Large Language Model (LLM) agents are often studied from the angle of action planning and generation to accomplish a goal (e.g., one given by a language description), their ability to collaborate with one another toward a joint goal is not well explored. To address this limitation, this paper studies LLM agents in task collaboration, particularly under information asymmetry, where agents differ in their knowledge and skills and must work together to complete a shared task. We extend Einstein Puzzles, a classical symbolic puzzle, into a table-top game in which two LLM agents must reason, communicate, and act to satisfy the spatial and relational constraints required to solve the puzzle. We apply a fine-tuning-plus-verifier framework in which LLM agents are equipped with various communication strategies and with verification signals from the environment. Our empirical results show the critical role of communication, especially of information-seeking and information-providing abilities, in the success of collaboration. We also observe a performance drop between agents with mismatched communicative abilities, highlighting the importance of aligned communication protocols in multi-agent LLM systems. We further demonstrate the effectiveness and generalizability of the environment-based verifier, which can guide agents' decision making at inference time without additional training.

Publication
preprint (MAS @ ICML 2025)
Sikai Li
Undergraduate Research Assistant
Yingzhuo Yu
Undergraduate Research Assistant