Document Type

Conference Paper

Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Disciplines

1.2 COMPUTER AND INFORMATION SCIENCE

Publication Details

In Proceedings of the AAAI Symposium on Dialog with Robots, Arlington, Virginia, USA, 11-13 November 2010.

Abstract

Dialogues between humans and robots are necessarily situated, and so a shared visual context is often present. Exophoric references are very frequent in situated dialogues and are particularly important when a shared visual context is available, for example when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and a range of salience metrics using data from the SCARE corpus, which we have augmented with visual information. The results of our evaluation show that our computationally lightweight approach is successful, and therefore promising for use in human-robot dialogue systems.
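The general idea can be illustrated with a minimal Python sketch: score every object currently visible in the shared view with a salience metric and resolve an underspecified reference to the top-scoring candidate. The features (screen area, centrality), weights, and names below are illustrative assumptions for exposition, not the metrics evaluated in the paper.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str          # object label, e.g. "cabinet" (hypothetical)
        area: float        # fraction of the frame the object occupies, in [0, 1]
        centrality: float  # 1 - normalized distance from the frame centre, in [0, 1]

    def salience(c: Candidate, w_area: float = 0.5, w_centre: float = 0.5) -> float:
        # Toy visual-salience score: weighted sum of two visual features.
        # The features and weights are assumptions, not the paper's metrics.
        return w_area * c.area + w_centre * c.centrality

    def resolve_exophor(candidates: list[Candidate]) -> Candidate:
        # Resolve an exophoric reference ("that one") to the most
        # visually salient object currently in the shared view.
        return max(candidates, key=salience)

    if __name__ == "__main__":
        scene = [
            Candidate("cabinet", area=0.30, centrality=0.9),
            Candidate("button", area=0.02, centrality=0.6),
        ]
        print(resolve_exophor(scene).name)  # -> "cabinet"

Such a ranking step is computationally lightweight, which is what makes the approach attractive for real-time human-robot dialogue systems.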

DOI

https://doi.org/10.21427/D7189P

