Document Type

Conference Paper


This item is available under a Creative Commons License for non-commercial use only


Computer Sciences

Publication Details

In Proceedings of the 1st Annual Conference on Human-Robot Interaction (HRI '06), Salt Lake City, UT, USA.


The paper presents an approach to using structural descriptions, obtained through a human-robot tutoring dialogue, as labels for the visual object models a robot learns. It shows how structural descriptions can relate models for different aspects of the same object, and how relating descriptions for visual models and discourse referents enables incremental updating of model descriptions through dialogue (either robot- or human-initiated). The approach has been implemented in an integrated architecture for human-assisted robot visual learning.
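
A minimal sketch of the idea described in the abstract, not taken from the paper itself: structural descriptions serve as labels attached to visual object models, discourse referents link the models that describe different aspects of the same object, and attributes contributed by dialogue moves are merged incrementally into the linked descriptions. All class and attribute names here are hypothetical illustrations.

    # Hypothetical illustration (assumed names, not the paper's implementation):
    # structural descriptions label visual models; a discourse referent links
    # models for the same object and propagates dialogue updates to them.
    from dataclasses import dataclass, field

    @dataclass
    class StructuralDescription:
        attributes: dict = field(default_factory=dict)  # e.g. {"colour": "red"}

        def update(self, new_attributes: dict) -> None:
            # Incrementally merge attributes contributed by a dialogue move.
            self.attributes.update(new_attributes)

    @dataclass
    class VisualModel:
        model_id: str                       # one learned model, covering one aspect of an object
        description: StructuralDescription  # its structural-description label

    @dataclass
    class DiscourseReferent:
        referent_id: str
        linked_models: list = field(default_factory=list)  # models describing the same object

        def incorporate_utterance(self, new_attributes: dict) -> None:
            # Robot- or human-initiated dialogue updates reach all linked descriptions.
            for model in self.linked_models:
                model.description.update(new_attributes)

    # Usage: two models for different aspects of one object share a referent.
    colour_model = VisualModel("m1", StructuralDescription({"colour": "red"}))
    shape_model = VisualModel("m2", StructuralDescription({"shape": "cube"}))
    ref = DiscourseReferent("r1", [colour_model, shape_model])
    ref.incorporate_utterance({"size": "small"})  # e.g. tutoring utterance "It is a small one."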