Document Type

Conference Paper

Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Disciplines

Computer Sciences

Publication Details

Published in the Proceedings of the 2012 ASE/IEEE International Conference on Social Computing, Amsterdam, The Netherlands, 3-6 September 2012. Presented at the doctoral consortium and also as a poster.

Abstract

Many application areas that use supervised machine learning rely on multiple raters to collect target ratings for training data. Using multiple raters, however, inevitably introduces the risk that a proportion of them will be unreliable. The presence of unreliable raters can prolong the rating process, make it more expensive and lead to inaccurate ratings. The dominant, "static" approach to this problem in state-of-the-art research is to estimate rater reliability and calculate the target ratings only after all ratings have been gathered. However, doing this dynamically, while raters are still rating the training data, can make the acquisition of ratings faster and cheaper than static techniques allow. We propose to cast the dynamic estimation of rater reliability as a multi-armed bandit problem. Experiments show that using multi-armed bandits for this problem is worthwhile, provided that each rater can rate any asset when asked. The purpose of this paper is to outline directions for future research in this area.
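To make the bandit framing concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) in which each rater is treated as a bandit arm and an epsilon-greedy policy directs more rating requests to raters whose past ratings agreed with a reference label. The reference labels, rater accuracies and parameter names used here are illustrative assumptions; in practice the reward signal would typically come from agreement with consensus rather than known ground truth.

import random

random.seed(0)

NUM_RATERS = 5
NUM_ASSETS = 200
EPSILON = 0.1  # probability of exploring a randomly chosen rater

# Simulated (hidden from the algorithm) probability that each rater labels correctly.
true_accuracy = [0.95, 0.9, 0.7, 0.55, 0.5]

# Per-rater statistics: how often the rater was asked, and how often it agreed
# with the reference label.
asked = [0] * NUM_RATERS
agreed = [0] * NUM_RATERS

def estimated_reliability(r):
    """Empirical agreement rate, with an optimistic value for unseen raters."""
    return agreed[r] / asked[r] if asked[r] else 1.0

for _ in range(NUM_ASSETS):
    gold_label = random.randint(0, 1)  # stand-in for consensus / ground truth

    # Epsilon-greedy arm selection: occasionally explore a random rater,
    # otherwise exploit the rater with the highest estimated reliability.
    if random.random() < EPSILON:
        rater = random.randrange(NUM_RATERS)
    else:
        rater = max(range(NUM_RATERS), key=estimated_reliability)

    # The chosen rater labels the asset; reward = 1 if it matches the reference.
    rating = gold_label if random.random() < true_accuracy[rater] else 1 - gold_label
    asked[rater] += 1
    agreed[rater] += int(rating == gold_label)

for r in range(NUM_RATERS):
    print(f"rater {r}: asked {asked[r]:3d} times, "
          f"estimated reliability {estimated_reliability(r):.2f}")

Under this sketch, unreliable raters receive progressively fewer requests as their estimated reliability falls, which is the intuition behind making the acquisition of ratings faster and cheaper than static post-hoc estimation.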

DOI

https://doi.org/10.1109/SocialCom-PASSAT.2012.50

Funder

Science Foundation Ireland

