2017 Second International Conference on Information Systems Engineering (ICISE)

Abstract

Crowdsourcing is a novel method for requirements elicitation, development, testing, and evaluation of software in a dynamic environment. Crowdsourced evaluation is a new technique that overcomes the limitation of traditional methods, which usually require the co-presence of stakeholders during software quality evaluation. Learning using mobile devices takes place in a dynamic and heterogeneous environment where learners have their own learning styles and needs. The distinguishing characteristic of Mobile Learning (M-Learning) is accessibility anyplace, anytime, and by anyone. Therefore, evaluating M-Learning application quality using traditional techniques such as interviews is tedious, costly, and time-consuming. Hence, in this article a crowdsourced evaluation process for M-Learning application quality is proposed. The process consists of four important steps: analysis and classification of the M-Learning application, definition of a customized quality standard based on the M-Learning application category, categorization of the crowd (e.g., users and the application development team), and finally the use of crowdsourcing platforms. The main idea behind the process is the involvement of end users in the assessment of M-Learning application quality, rather than merely inviting experts to evaluate the application. The proposed approach is theoretical in nature and is based on findings from the existing literature on crowdsourcing, M-Learning, evaluation of M-Learning applications, and software quality. The proposed approach will be implemented in future work to test its feasibility in a real-world setting.