
The State of Pilot Study Reporting in Crowdsourcing: A Reflection on Best Practices and Guidelines (2312.08090v1)

Published 13 Dec 2023 in cs.HC and cs.CY

Abstract: Pilot studies are an essential cornerstone of the design of crowdsourcing campaigns, yet they are often only mentioned in passing in the scholarly literature. A lack of details surrounding pilot studies in crowdsourcing research hinders the replication of studies and the reproduction of findings, stalling potential scientific advances. We conducted a systematic literature review on the current state of pilot study reporting at the intersection of crowdsourcing and HCI research. Our review of ten years of literature included 171 articles published in the proceedings of the Conference on Human Computation and Crowdsourcing (AAAI HCOMP) and the ACM Digital Library. We found that pilot studies in crowdsourcing research (i.e., crowd pilot studies) are often under-reported in the literature. Important details, such as the number of workers and rewards to workers, are often not reported. On the basis of our findings, we reflect on the current state of practice and formulate a set of best practice guidelines for reporting crowd pilot studies in crowdsourcing research. We also provide implications for the design of crowdsourcing platforms and make practical suggestions for supporting crowd pilot study reporting.
