Collaborative Models in Grounded Language Processing in Situated Dialogue

Motivations and Objectives

Our perception of the environment often leads to imprecise language, e.g., a tall building, a small cup, a car to the left. While humans process this kind of imprecise language with little difficulty, interpreting it is challenging for automated agents, especially in situated interaction. Although an artificial agent (e.g., a robot) and its human partner are co-present in a shared environment, they have significantly mismatched perceptual capabilities (e.g., in recognizing objects in the surroundings), so their knowledge and representations of the shared world differ substantially. When a shared perceptual basis is missing, grounding references, especially vague language descriptions, to the environment becomes difficult. A foremost question, therefore, is how partners with mismatched perceptual capabilities collaborate to achieve referential grounding. This project has developed a simulated environment to address this question.
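
To make the mismatch concrete, the following is a minimal, hypothetical Python sketch, not the project's actual simulator: the scene, object names, and bias values are invented for illustration. It shows how the same referring expression can resolve to different objects when two partners perceive the same scene with different reliability.

```python
# Hypothetical sketch of mismatched perception breaking referential
# grounding: both partners observe the same scene, but the robot's size
# estimates are biased, so "the small cup" resolves differently for each.

SCENE = [
    {"id": "cup_a", "type": "cup", "size": 0.3},
    {"id": "cup_b", "type": "cup", "size": 0.7},
    {"id": "bottle_a", "type": "bottle", "size": 0.5},
]

# Per-object size estimation error, standing in for the robot's weaker
# perception (e.g., poor depth estimation makes cup_a look large).
ROBOT_SIZE_BIAS = {"cup_a": +0.5, "cup_b": 0.0, "bottle_a": 0.0}

def perceive(scene, bias=None):
    """Return one partner's private view of the shared scene."""
    bias = bias or {}
    return [{**o, "size": o["size"] + bias.get(o["id"], 0.0)} for o in scene]

def ground(view, obj_type):
    """Resolve 'the small <obj_type>' against one partner's private view."""
    candidates = [o for o in view if o["type"] == obj_type]
    return min(candidates, key=lambda o: o["size"])["id"]

human_view = perceive(SCENE)                        # reliable perception
robot_view = perceive(SCENE, bias=ROBOT_SIZE_BIAS)  # biased perception

print("human grounds 'the small cup' to:", ground(human_view, "cup"))  # cup_a
print("robot grounds 'the small cup' to:", ground(robot_view, "cup"))  # cup_b
# The resolutions diverge: the partners lack a shared perceptual basis and
# must collaborate (e.g., describe other attributes, ask clarification
# questions) to repair the mismatch.
```

Because the two private views disagree, no amount of unilateral interpretation resolves the reference; this is the kind of breakdown the collaborative models studied in this project are meant to repair.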
