Training attention mechanisms for global workspace selection and sequencing

Develop training procedures for the state-dependent attention mechanism in global workspace architectures, so that it can both select among candidate module inputs to the workspace and reliably implement the sequences of attentional operations needed to control extended, functional interactions among modules.
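
To make the mechanism concrete, below is a minimal sketch, assuming a PyTorch implementation with hypothetical names (WorkspaceSelection, workspace_dim, module_outputs): the current workspace state generates a query, key-query attention over module outputs decides what passes the bottleneck, and the selected content updates the workspace state that would then be broadcast back to the modules. How to train this selection, and how to chain it into reliable multi-step operations, is the open problem stated above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WorkspaceSelection(nn.Module):
    """State-dependent attention over module outputs feeding a workspace bottleneck (illustrative only)."""
    def __init__(self, module_dim: int, workspace_dim: int):
        super().__init__()
        self.query = nn.Linear(workspace_dim, workspace_dim)    # query from current workspace state
        self.key = nn.Linear(module_dim, workspace_dim)         # keys from candidate module outputs
        self.value = nn.Linear(module_dim, workspace_dim)       # values admitted into the workspace
        self.update = nn.GRUCell(workspace_dim, workspace_dim)  # integrate selected content into the state

    def forward(self, workspace: torch.Tensor, module_outputs: torch.Tensor) -> torch.Tensor:
        # workspace: (batch, workspace_dim); module_outputs: (batch, n_modules, module_dim)
        q = self.query(workspace).unsqueeze(1)                  # (batch, 1, d)
        k = self.key(module_outputs)                            # (batch, n_modules, d)
        v = self.value(module_outputs)                          # (batch, n_modules, d)
        scores = (q * k).sum(-1) / k.shape[-1] ** 0.5           # (batch, n_modules)
        weights = F.softmax(scores, dim=-1)                     # soft selection over modules
        selected = (weights.unsqueeze(-1) * v).sum(1)           # content passing the bottleneck
        return self.update(selected, workspace)                 # new workspace state for broadcast
```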

Background

Proposals for artificial implementations of a global workspace (e.g., VanRullen and Kanai; Goyal et al.) outline architectures with workspace bottlenecks, global broadcast, and attention, but they lack concrete methods for training the attention mechanism to select workspace inputs and to orchestrate multi-step interactions among modules.
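
As an illustration of what such a method would have to deliver, here is a deliberately naive training sketch (an assumption for illustration, not a procedure from the cited proposals): toy specialist modules read the broadcast workspace state, the WorkspaceSelection step sketched above is unrolled for a fixed number of cycles, and a task loss is backpropagated end to end. The names Specialist, training_step, and readout are hypothetical. Whether end-to-end training of this kind can yield the reliable, extended sequences of attentional operations the proposals require is precisely the open question.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Specialist(nn.Module):
    """Toy specialist module: reads its own input plus the broadcast workspace state."""
    def __init__(self, input_dim: int, workspace_dim: int, module_dim: int):
        super().__init__()
        self.net = nn.Linear(input_dim + workspace_dim, module_dim)

    def forward(self, x: torch.Tensor, workspace: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.net(torch.cat([x, workspace], dim=-1)))

def training_step(selector, specialists, readout, optimizer, x, target, n_steps=4):
    """One naive end-to-end update: unroll selection for n_steps, then backpropagate a task loss."""
    workspace = x.new_zeros(x.shape[0], readout.in_features)               # empty initial workspace state
    for _ in range(n_steps):
        outs = torch.stack([m(x, workspace) for m in specialists], dim=1)  # (batch, n_modules, module_dim)
        workspace = selector(workspace, outs)    # attention admits content through the bottleneck
    loss = F.mse_loss(readout(workspace), target)                          # task loss on the final readout
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```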

Addressing this open question is critical for realizing practical systems that exhibit the dynamic, task-dependent module coordination central to global workspace theory's (GWT) account of conscious processing.

References

However, this work is a "roadmap" to a possible implementation, rather than a working system. It faces a substantial open question about how the attention mechanism could be trained to select among the potential inputs to the workspace, and especially how this could achieve the sequences of operations of attention needed to control extended, functional sequences of operations by relevant modules.

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness (Butlin et al., 2023, arXiv:2308.08708), Section 3.1.2, Implementing GWT.