Crowd Memory: Learning in the Collective (1204.3678v2)

Published 17 Apr 2012 in cs.SI, cs.HC, and physics.soc-ph

Abstract: Crowd algorithms often assume workers are inexperienced and thus fail to adapt as workers in the crowd learn a task. These assumptions fundamentally limit the types of tasks that systems based on such algorithms can handle. This paper explores how the crowd learns and remembers over time in the context of human computation, and how more realistic assumptions of worker experience may be used when designing new systems. We first demonstrate that the crowd can recall information over time and discuss possible implications of crowd memory in the design of crowd algorithms. We then explore crowd learning during a continuous control task. Recent systems are able to disguise dynamic groups of workers as crowd agents to support continuous tasks, but have not yet considered how such agents are able to learn over time. We show, using a real-time gaming setting, that crowd agents can learn over time, and 'remember' by passing strategies from one generation of workers to the next, despite high turnover rates in the workers comprising them. We conclude with a discussion of future research directions for crowd memory and learning.
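To make the crowd-memory idea concrete, here is a minimal toy sketch, not the paper's actual system: a crowd agent aggregates worker votes on a continuous control task, and departing workers leave strategy notes that newcomers inherit, so learned behavior can persist across generations despite turnover. The task, reward scheme, and all names here are hypothetical illustrations.

```python
import random

ACTIONS = ["left", "right", "forward"]

class Worker:
    def __init__(self, notes=None):
        # A newcomer inherits strategy notes left by earlier generations.
        self.notes = dict(notes or {})

    def vote(self, state):
        # Prefer a remembered action for this state; otherwise guess.
        return self.notes.get(state, random.choice(ACTIONS))

    def learn(self, state, action, reward):
        # Remember actions that worked.
        if reward > 0:
            self.notes[state] = action

def crowd_step(workers, state):
    # The crowd agent's output is the majority vote of current workers.
    votes = [w.vote(state) for w in workers]
    return max(set(votes), key=votes.count)

def simulate(steps=200, crowd_size=5, turnover=0.3):
    shared_notes = {}  # the crowd's collective "memory"
    workers = [Worker(shared_notes) for _ in range(crowd_size)]
    for t in range(steps):
        state = t % 4                                  # toy task state
        action = crowd_step(workers, state)
        reward = 1 if action == ACTIONS[state % len(ACTIONS)] else -1
        for w in workers:
            w.learn(state, action, reward)
        # High turnover: departing workers deposit notes; replacements inherit.
        for i, w in enumerate(workers):
            if random.random() < turnover:
                shared_notes.update(w.notes)
                workers[i] = Worker(shared_notes)
    return shared_notes

if __name__ == "__main__":
    print(simulate())  # notes survive even though no original worker remains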

Authors (4)
  1. Samuel C. White (1 paper)
  2. Kyle I. Murray (1 paper)
  3. Jeffrey P. Bigham (48 papers)
  4. Walter S. Lasecki (11 papers)
Citations (28)
