On Measuring Cognition and Cognitive Augmentation (2211.06477v1)

Published 11 Nov 2022 in cs.HC

Abstract: We are at the beginning of a new age in which artificial entities will perform significant amounts of high-level cognitive processing, rivaling and even surpassing human thinking. The future belongs to those who can best collaborate with artificial cognitive entities, achieving a high degree of cognitive augmentation. However, we currently lack theoretically grounded fundamental metrics able to describe human or artificial cognition, much less augmented and combined cognition. How do we measure thinking, cognition, information, and knowledge in an implementation-independent way? How can we tell how much thinking an artificial entity does and how much is done by a human? How can we measure the combined and possibly even emergent effect of humans working together with intelligent artificial entities? These are some of the challenges for researchers in this field. We first define a cognitive process as the transformation of data, information, knowledge, and wisdom. We then review several existing and emerging information metrics based on entropy, processing effort, quantum physics, emergent capacity, and human concept learning. We then discuss how these fail to answer the above questions and provide guidelines for future research.
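The entropy-based metrics the abstract mentions build on Shannon's measure of information. As a minimal sketch (not the paper's own formulation), the entropy of a discrete probability distribution can be computed as:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) = -sum(p * log2(p)), measured in bits.

    `probs` is a sequence of probabilities summing to 1; terms with
    p == 0 are skipped, following the convention 0 * log2(0) = 0.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip carries one bit of information.
print(shannon_entropy([0.5, 0.5]))        # → 1.0
# A uniform choice among four outcomes carries two bits.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # → 2.0
```

Such a metric quantifies information content but, as the paper argues, says nothing by itself about how much *cognition* was performed to produce or transform that information.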

Authors (1)
  1. Ron Fulbright (8 papers)
Citations (5)