Abstract
When leveraging the crowd to perform complex tasks, it is imperative to identify the most effective worker for a particular job. Demographic profiles and skill self-assessments provided by workers, together with past performance captured by employers, all represent viable data points available within labor markets. Employers often question the validity of a worker's self-assessment of skills and expertise level when selecting workers in the context of other available information. More specifically, employers would like to answer the question, "Is worker confidence a predictor of quality?" In this paper, we discuss the state of the art in recommending crowd workers based on assessment information. A major contribution of our work is an architecture, platform, and push/pull process for categorizing and recommending workers based on available self-assessment information. We present a study exploring the validity of skills reported by workers in light of their actual performance and other metrics captured by employers. A further contribution of this approach is the ability to extrapolate from a body of workers to describe the nature of the community more broadly. Through experimentation within the language-processing domain, we demonstrate a new capability of deriving trends that might help future employers select appropriate workers.