
Towards AI-Empowered Crowdsourcing (2212.14676v2)

Published 28 Dec 2022 in cs.CY and cs.AI

Abstract: Crowdsourcing, in which human intelligence and productivity are dynamically mobilized to tackle tasks too complex for automation alone to handle, has grown into an important research topic and inspired new businesses (e.g., Uber, Airbnb). Over the years, crowdsourcing has morphed from a platform where workers and tasks are matched up manually into one that leverages data-driven algorithmic management approaches powered by AI to achieve increasingly sophisticated optimization objectives. In this paper, we provide a survey presenting a systematic overview of how AI can empower crowdsourcing to improve its efficiency, which we refer to as AI-Empowered Crowdsourcing (AIEC). We propose a taxonomy that divides AIEC into three major areas: 1) task delegation, 2) motivating workers, and 3) quality control, focusing on the major objectives that need to be accomplished. We discuss the limitations and insights of existing work, and curate the challenges of doing research in each of these areas to highlight promising future research directions.

Authors (9)
  1. Shipeng Wang (9 papers)
  2. Qingzhong Li (2 papers)
  3. Lizhen Cui (66 papers)
  4. Zhongmin Yan (3 papers)
  5. Yonghui Xu (13 papers)
  6. Zhuan Shi (8 papers)
  7. Xinping Min (1 paper)
  8. Zhiqi Shen (62 papers)
  9. Han Yu (218 papers)
