Abstract
In this paper, we propose a method for selecting adequate additional instructions for workers in microtask crowdsourcing. In this context, “adequate instructions” means instructions that lead workers to perform the task appropriately, provided they read and understand them. Instructions for workers should be easy to read and understand, but it is hard for requesters to recognize whether their instructions are adequate. Therefore, we propose a method for measuring the adequacy of instructions using machine learning. We consider a tweet labeling task. In our method, we construct a machine-learning model for each worker that labels tweets similarly to that worker, which we call a “Virtual Worker.” The Virtual Workers then label all tweets in the task. Finally, we calculate the kappa values of this labeling. We assume that if an instruction is inadequate, the kappa values should be high. In our experiments, we found that our method can select adequate additional instructions.
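To make the pipeline concrete, the following is a minimal Python sketch of the Virtual Worker agreement computation, assuming scikit-learn. The TF-IDF features, logistic-regression classifier, and mean pairwise Cohen's kappa are illustrative assumptions; the abstract does not fix these particular choices.

```python
# Hypothetical sketch: per-worker "Virtual Worker" models and their
# agreement on the full tweet set. Feature extraction, classifier, and
# the pairwise-kappa aggregation are assumptions, not the paper's spec.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline


def mean_pairwise_kappa(worker_data, all_tweets):
    """worker_data: {worker_id: (tweets, labels)} collected under one
    candidate instruction; all_tweets: every tweet in the task.
    Requires labels from at least two workers."""
    predictions = {}
    for worker_id, (tweets, labels) in worker_data.items():
        # "Virtual Worker": a model trained to imitate this worker's labels.
        virtual_worker = make_pipeline(TfidfVectorizer(), LogisticRegression())
        virtual_worker.fit(tweets, labels)
        # The Virtual Worker labels *all* tweets, not only the ones
        # its human counterpart actually saw.
        predictions[worker_id] = virtual_worker.predict(all_tweets)
    # Agreement among Virtual Workers as mean pairwise Cohen's kappa.
    kappas = [cohen_kappa_score(predictions[a], predictions[b])
              for a, b in combinations(predictions, 2)]
    return sum(kappas) / len(kappas)
```

Under this sketch, each candidate additional instruction would be scored by training Virtual Workers on the labels collected with that instruction and comparing the resulting kappa values across candidates.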