Document Type

Theses, Ph.D.


This item is available under a Creative Commons License for non-commercial use only



Publication Details

Thesis submitted for the degree of Doctor of Philosophy to Dublin Institute of Technology, School of Computing, January 2016.


We investigate the effect of sensor errors on situated human-computer dialogues. If a human user instructs a robot to perform a task in a spatial environment, errors in the robot's sensor-based perception of the environment may result in divergences between the user's and the robot's understanding of the environment. If the user and the robot communicate through a language-based interface, these problems may result in complex misunderstandings. In this work we investigate such situations. We set up a simulation-based scenario in which a human user instructs a robot to perform a series of manipulation tasks, such as lifting, moving and re-arranging simple objects. We induce errors into the robot's perception, such as misclassification of shapes and colours, and record and analyse the user's attempts to resolve the problems. We evaluate a set of methods to alleviate the problems by allowing the operator to access the robot's understanding of the scene. We investigate a uni-directional language-based option, which is based on automatically generated scene descriptions; a visually based option, in which the system highlights objects and provides known properties; and a dialogue-based assistance option, in which the participant can ask simple questions about the robot's perception of the scene. As a baseline condition we perform the experiment without introducing any errors. We evaluate and compare the successes and problems in all four conditions, and identify and compare the strategies the participants used in each condition. We find that the participants appreciate and use the information request options successfully, and that all options provide an improvement over the condition without information.

We conclude that allowing the participants to access information about the robot's perception state is an effective way to resolve problems in the dialogue.