
Does Confidence Reporting from the Crowd Benefit Crowdsourcing Performance? (1704.00768v1)

Published 3 Apr 2017 in cs.SI and cs.HC

Abstract: We explore the design of an effective crowdsourcing system for an $M$-ary classification task. Crowd workers complete simple binary microtasks whose results are aggregated to give the final classification decision. We consider the scenario where the workers have a reject option, so that they are allowed to skip microtasks when they are unable or unwilling to respond. Additionally, the workers report quantized confidence levels when they are able to submit definitive answers. We present an aggregation approach using a weighted majority voting rule, where each worker's response is assigned an optimized weight to maximize the crowd's classification performance. We obtain the counterintuitive result that the classification performance does not benefit from workers reporting quantized confidence. Therefore, the crowdsourcing system designer should employ the reject option without requiring confidence reporting.
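
For concreteness, the following is a minimal Python sketch of a weighted majority vote with a reject option, in the spirit of the aggregation rule the abstract describes. The per-worker weights here are assumed to be precomputed reliability scores, a hypothetical stand-in for the optimized weights the paper derives analytically; consistent with the paper's conclusion, no confidence levels are collected.

```python
from typing import Optional

def weighted_majority_vote(
    responses: list[Optional[int]],  # each worker's answer: 1, 0, or None (skipped)
    weights: list[float],            # one weight per worker, assumed precomputed
) -> int:
    """Aggregate binary microtask answers; skipped tasks contribute nothing."""
    score = 0.0
    for answer, w in zip(responses, weights):
        if answer is None:           # reject option: worker skipped this microtask
            continue
        score += w if answer == 1 else -w
    return 1 if score >= 0 else 0

# Example: three workers, the second skips; weights reflect assumed reliabilities.
print(weighted_majority_vote([1, None, 0], [0.9, 0.5, 0.4]))  # -> 1
```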

Authors (2)
  1. Qunwei Li (23 papers)
  2. Pramod K. Varshney (135 papers)
Citations (11)
