dc.contributor.author: Mo, Jiahui
dc.contributor.author: Sarkar, Sumit
dc.contributor.author: Menon, Syam
dc.date.accessioned: 2018-09-26T07:01:10Z
dc.date.available: 2018-09-26T07:01:10Z
dc.date.issued: 2018
dc.identifier.citation: Mo, J., Sarkar, S., & Menon, S. (2018). Know when to run: Recommendations in crowdsourcing contests. MIS Quarterly, 42(3), 919-944. doi:10.25300/MISQ/2018/14103
dc.identifier.issn: 0276-7783
dc.identifier.uri: http://hdl.handle.net/10220/46105
dc.description.abstract: Crowdsourcing contests have emerged as an innovative way for firms to solve business problems by acquiring ideas from participants external to the firm. As the number of participants on crowdsourcing contest platforms has increased, so has the number of tasks that are open at any time. This has made it difficult for solvers to identify tasks in which to participate. We present a framework to recommend tasks to solvers who wish to participate in crowdsourcing contests. The existence of competition among solvers is an important and unique aspect of this environment, and our framework considers the competition a solver would face in each open task. As winning a task depends on performance, we identify a theory of performance and reinforce it with theories from learning, motivation, and tournaments. This augmented theory of performance guides us to variables specific to crowdsourcing contests that could impact a solver’s winning probability. We use these variables as input into various probability prediction models adapted to our context, and make recommendations based on the probability or the expected payoff of the solver winning an open task. We validate our framework using data from a real crowdsourcing platform. The recommender system is shown to have the potential of improving the success rates of solvers across all abilities. Recommendations have to be made for open tasks, and we find that the relative rankings of tasks at similar stages of their time lines remain remarkably consistent when the tasks close. Further, we show that deploying such a system should benefit not only the solvers, but also the seekers and the platform itself.
dc.format.extent: 26 p.
dc.language.iso: en
dc.relation.ispartofseries: MIS Quarterly
dc.rights: © 2018 Management Information Systems Research Center. This paper was published in MIS Quarterly and is made available as an electronic reprint (preprint) with permission of Management Information Systems Research Center. The published version is available at: [http://dx.doi.org/10.25300/MISQ/2018/14103]. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper is prohibited and is subject to penalties under law.
dc.subject: Competition
dc.subject: Performance
dc.subject: DRNTU::Business::General
dc.title: Know when to run : recommendations in crowdsourcing contests
dc.type: Journal Article
dc.contributor.school: College of Business (Nanyang Business School)
dc.identifier.doi: http://dx.doi.org/10.25300/MISQ/2018/14103
dc.description.version: Published version