Computer Vision, Pattern Recognition, Image Processing and Graphics, National Conference on

Abstract

With the advent of crowdsourcing services it has become quite cheap and reasonably effective to have a dataset labeled by multiple annotators in a short amount of time. Various methods have been proposed to estimate the consensus labels by correcting for the biases of annotators with different kinds of expertise. Often, however, we have low-quality annotators or spammers: annotators who assign labels randomly (e.g., without actually looking at the instance). Spammers can make the cost of acquiring labels very expensive and can potentially degrade the quality of the consensus labels. In this paper we propose a score (based on the reduction in entropy) that can be used to rank the annotators, with spammers having a score close to zero and good annotators having a score close to one.
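The abstract describes a score based on the reduction in entropy, with spammers near zero and good annotators near one. A minimal sketch of one score with these properties is the normalized mutual information between an annotator's labels and the (estimated) true labels; note this is an illustrative assumption, not the paper's exact estimator, and the function name `annotator_score` is hypothetical:

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def annotator_score(true_labels, annot_labels):
    """Reduction in entropy of the true label given the annotator's label,
    normalized to [0, 1]: ~0 for a spammer who labels randomly (the
    annotation carries no information), ~1 for a perfect annotator."""
    n = len(true_labels)
    # Prior entropy of the true labels, H(y).
    p_true = [c / n for c in Counter(true_labels).values()]
    h_true = entropy(p_true)
    # Conditional entropy H(y | annotation), averaged over annotations.
    h_cond = 0.0
    for a, cnt in Counter(annot_labels).items():
        subset = [t for t, al in zip(true_labels, annot_labels) if al == a]
        p_cond = [c / len(subset) for c in Counter(subset).values()]
        h_cond += (cnt / n) * entropy(p_cond)
    return (h_true - h_cond) / h_true  # normalized entropy reduction
```

For example, an annotator who reproduces the true labels exactly scores 1.0, while one who assigns the same label to every instance (a spammer) scores 0.0, since the conditional entropy equals the prior entropy.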
