Boolean Operations on Conceptors
- Boolean operations on conceptors are logical functions (AND, OR, NOT) applied to soft projection matrices that capture characteristic neural subspaces.
- These operations maintain positive semidefiniteness and eigenvalue constraints while extending classical Boolean algebra into nonlinear, dynamic neural representations.
- Applications include continual learning, neural memory modeling, and activation steering in large language models, enhancing model stability and performance.
A conceptor is a symmetric positive semidefinite matrix with eigenvalues in $[0, 1]$, functioning as a soft projection operator onto the characteristic linear subspace of a dataset or neural activation pattern. Boolean operations on conceptors—negation (NOT), conjunction (AND), and disjunction (OR)—systematically extend logical calculus into the nonlinear, dynamical setting of neural representations and abstract vector spaces. These operations have been foundational in recurrent neural network dynamics (Jaeger, 2014), continual representation learning (Liu et al., 2019), neural memory modeling (Strock et al., 2020), and activation engineering in LLMs (Postmus et al., 9 Oct 2024).
1. Definition of Conceptor Matrices
Given a collection of vectors $x_1, \dots, x_m \in \mathbb{R}^n$, the corresponding conceptor $C$ minimizes the regularized reconstruction loss:

$$C = \arg\min_{C}\; \mathbb{E}_x\!\left[\|x - Cx\|^2\right] + \alpha^{-2}\,\|C\|_{\mathrm{fro}}^2,$$

where $\alpha > 0$ is the aperture parameter controlling the subspace's width. The closed-form solution is:

$$C = R\left(R + \alpha^{-2} I\right)^{-1},$$

with $R = \mathbb{E}\!\left[x x^\top\right]$ the data correlation matrix. The eigenvalues of $C$ satisfy $\lambda_i = \sigma_i / (\sigma_i + \alpha^{-2}) \in [0, 1)$, where $\sigma_i$ are the eigenvalues of $R$, ensuring that $C$ is a soft, rather than hard, projector. In spectral terms, $C$ shrinks each principal direction according to the data variance and aperture.
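The following is a minimal numpy sketch of this computation (the helper name `conceptor`, the synthetic data, and the aperture value are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def conceptor(X: np.ndarray, aperture: float = 10.0) -> np.ndarray:
    """Compute C = R (R + aperture^-2 I)^-1 from data X of shape (samples, dim)."""
    n = X.shape[1]
    R = (X.T @ X) / X.shape[0]                      # correlation matrix E[x x^T]
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(n))

# Example: rank-1 data in R^3 yields one eigenvalue near 1, the rest near 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1)) @ np.array([[2.0, 0.5, 0.0]])
C = conceptor(X)
print(np.round(np.linalg.eigvalsh(C), 3))
```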
2. Formalism of Boolean Operations
Boolean operations on conceptor matrices closely parallel standard set algebra, mapping geometric and logical relationships in activation space to closed algebraic forms.
Negation (NOT)
$$\neg C = I - C$$

This operator captures the orthogonal complement to the subspace encoded by $C$, softly projecting onto directions not present in the original manifold (Jaeger, 2014, Liu et al., 2019, Postmus et al., 9 Oct 2024).
Conjunction (AND)
$$C \wedge B = \left(C^{-1} + B^{-1} - I\right)^{-1}$$

$C \wedge B$ projects onto components simultaneously present in $C$ and $B$, corresponding to the intersection of their ellipsoidal regions (Jaeger, 2014). The Moore–Penrose pseudoinverse is applied when $C$ or $B$ is singular.
Disjunction (OR)
Multiple mathematically equivalent definitions are recognized:

$$C \vee B = \neg(\neg C \wedge \neg B) = \left(I + \left(C(I - C)^{-1} + B(I - B)^{-1}\right)^{-1}\right)^{-1}$$

Alternately, for conceptors computed from data correlation matrices $R_C$ and $R_B$ at a common aperture $\alpha$, the union formula becomes:

$$C \vee B = (R_C + R_B)\left(R_C + R_B + \alpha^{-2} I\right)^{-1}$$

OR spans all directions captured by either conceptor, yielding the union ellipsoid (Jaeger, 2014, Liu et al., 2019). These operations preserve positive semidefiniteness and ensure eigenvalues remain in $[0, 1]$.
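Putting the three operations together, a sketch in numpy, using the pseudoinverse shortcut for singular cases (function names are illustrative; Jaeger's full treatment of rank-deficient conceptors is more careful than plain `pinv`):

```python
import numpy as np

def NOT(C: np.ndarray) -> np.ndarray:
    """Soft complement: passes exactly the directions C suppresses."""
    return np.eye(C.shape[0]) - C

def AND(C: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Soft intersection: (C^-1 + B^-1 - I)^-1, with pinv for singular inputs."""
    I = np.eye(C.shape[0])
    return np.linalg.pinv(np.linalg.pinv(C) + np.linalg.pinv(B) - I)

def OR(C: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Soft union via De Morgan: NOT(NOT(C) AND NOT(B))."""
    return NOT(AND(NOT(C), NOT(B)))
```

For conceptors with eigenvalues strictly inside $(0, 1)$, the pseudoinverses coincide with ordinary inverses, and this De Morgan form of OR matches the closed-form expression above.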
3. Algebraic Properties and Geometric Intuition
Boolean operations on conceptors form an algebra satisfying most classical Boolean identities, modulo the failure of distributivity for non-commuting conceptor matrices and the dimensionality constraints inherent in matrix algebra (Jaeger, 2014):
- Commutativity: $C \wedge B = B \wedge C$, $C \vee B = B \vee C$
- Associativity: $(A \wedge B) \wedge C = A \wedge (B \wedge C)$, analogously for $\vee$
- Absorption: $C \vee (C \wedge B) = C$, $C \wedge (C \vee B) = C$
- Idempotence: $C \wedge C = C$, $C \vee C = C$
- De Morgan’s Laws: $\neg(C \wedge B) = \neg C \vee \neg B$, $\neg(C \vee B) = \neg C \wedge \neg B$
Geometrically, negation flips the passage versus suppression of principal directions; conjunction isolates the intersection ellipsoid (shared directions), and disjunction creates the convex hull or union of the patterns' supported directions (Jaeger, 2014). Spectrum-wise, for commuting conceptors with $CB = BC$ (a shared eigenbasis) and eigenvalues $\lambda_i$ (of $C$) and $\mu_i$ (of $B$), AND/OR operate elementwise:

$$(C \wedge B)_i = \frac{\lambda_i \mu_i}{\lambda_i + \mu_i - \lambda_i \mu_i}, \qquad (C \vee B)_i = \frac{\lambda_i + \mu_i - 2\lambda_i \mu_i}{1 - \lambda_i \mu_i}.$$
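A quick numerical check of the elementwise formulas and De Morgan's law on commuting (diagonal) conceptors; the function names and test values here are illustrative:

```python
import numpy as np

def AND(C, B):
    I = np.eye(C.shape[0])
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - I)

def OR(C, B):
    I = np.eye(C.shape[0])
    return I - AND(I - C, I - B)       # De Morgan-based definition

rng = np.random.default_rng(1)
lam, mu = rng.uniform(0.1, 0.9, 4), rng.uniform(0.1, 0.9, 4)
C, B, I = np.diag(lam), np.diag(mu), np.eye(4)

# Elementwise spectral predictions for commuting conceptors.
assert np.allclose(np.diag(AND(C, B)), lam * mu / (lam + mu - lam * mu))
assert np.allclose(np.diag(OR(C, B)), (lam + mu - 2 * lam * mu) / (1 - lam * mu))

# De Morgan: NOT(C AND B) == NOT(C) OR NOT(B).
assert np.allclose(I - AND(C, B), OR(I - C, I - B))
```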
4. Applications in Neural Network Architectures
Continual Representation Learning
In continual sentence representation learning, Boolean disjunction enables accumulation of all "common discourse directions" seen across sequential corpora. At stage $t$, after processing a new corpus and computing its temporary conceptor $C_t^{\text{new}}$, the running conceptor is updated by:

$$C_t = C_{t-1} \vee C_t^{\text{new}}.$$

This strategy preserves encoded knowledge from preceding corpora, in contrast to retraining from scratch, which exhibits catastrophic forgetting. Sentence embeddings are then formed by projecting new data away from the accumulated subspaces:

$$z \mapsto (\neg C_t)\, z = (I - C_t)\, z.$$

Zero-shot mode—setting $t = 0$—utilizes an initial conceptor $C_0$ computed on stop-word vectors alone. Empirically, this algorithm achieves significant stability in semantic textual similarity tasks and robust retention of prior knowledge (Liu et al., 2019).
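A hedged sketch of this accumulation loop (the function names, aperture, and averaging-based sentence embedding are illustrative simplifications of Liu et al.'s pipeline):

```python
import numpy as np

def conceptor(X, aperture=10.0):
    R = (X.T @ X) / X.shape[0]
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(X.shape[1]))

def OR(C, B):
    I = np.eye(C.shape[0])
    return I - np.linalg.pinv(np.linalg.pinv(I - C) + np.linalg.pinv(I - B) - I)

def continual_update(C_prev, X_new, aperture=10.0):
    """One stage: C_t = C_{t-1} OR C_t^new, computed from the new corpus X_new."""
    return OR(C_prev, conceptor(X_new, aperture))

def embed(word_vectors, C):
    """Sentence embedding: average word vectors, then project away C's subspace."""
    z = word_vectors.mean(axis=0)
    return (np.eye(z.size) - C) @ z     # apply NOT(C) to the raw embedding
```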
Activation Steering in LLMs
Boolean operations on conceptors facilitate advanced activation engineering in transformer-based LLMs. Given conceptor matrices for distinct behaviors (e.g., "antonym" and "capitalize"), AND-combined conceptors ($C_{\text{antonym}} \wedge C_{\text{capitalize}}$) more robustly encode multiple steering constraints than additive combinations of steering vectors:

$$h \mapsto (C_{\text{antonym}} \wedge C_{\text{capitalize}})\, h \quad \text{versus} \quad h \mapsto h + v_{\text{antonym}} + v_{\text{capitalize}}.$$

Boolean operations allow for precise geometric control in activation space—intersections enforce simultaneous constraints, while unions aggregate features (Postmus et al., 9 Oct 2024). Empirically, AND-combined conceptors consistently outperform additive baselines on composite function tasks, even surpassing conceptors trained directly on the function's union in certain cases.
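A minimal sketch of the AND-combination (the cached hidden states here are random stand-ins, and hooking into an actual transformer layer is omitted; this is not the exact pipeline of Postmus et al.):

```python
import numpy as np

def conceptor(X, aperture=10.0):
    R = (X.T @ X) / X.shape[0]
    return R @ np.linalg.inv(R + aperture**-2 * np.eye(X.shape[1]))

def AND(C, B):
    I = np.eye(C.shape[0])
    return np.linalg.pinv(np.linalg.pinv(C) + np.linalg.pinv(B) - I)

# Stand-ins for hidden states cached while the model performs each behavior.
rng = np.random.default_rng(2)
H_antonym = rng.normal(size=(200, 64))
H_capitalize = rng.normal(size=(200, 64))

# Enforce both constraints at once with an AND-combined conceptor.
C_both = AND(conceptor(H_antonym), conceptor(H_capitalize))

# Multiplicative steering of a hidden activation, versus additive h + v.
h = rng.normal(size=64)
h_steered = C_both @ h
```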
Neural Memory and Dynamical Pattern Manipulation
Boolean operations enable construction, intersection, and suppression of stored memory patterns within reservoir computing frameworks. For example, OR combines two stored patterns into a memory capable of replaying either, AND focuses on overlapping structure, and NOT supports targeted erasure (e.g., updating a memory conceptor $A$ to $A \wedge \neg B$ forgets pattern $B$) (Strock et al., 2020, Jaeger, 2014).
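A toy illustration of these memory operations on diagonal pattern conceptors (the values are arbitrary; reservoir loading and readout are omitted):

```python
import numpy as np

def NOT(C):
    return np.eye(C.shape[0]) - C

def AND(C, B):
    I = np.eye(C.shape[0])
    return np.linalg.pinv(np.linalg.pinv(C) + np.linalg.pinv(B) - I)

def OR(C, B):
    return NOT(AND(NOT(C), NOT(B)))

C1 = np.diag([0.90, 0.10, 0.05])        # pattern 1: strong along direction 0
C2 = np.diag([0.05, 0.80, 0.70])        # pattern 2: strong along directions 1, 2

A = OR(C1, C2)                          # memory replaying either pattern
A_minus_2 = AND(A, NOT(C2))             # targeted erasure of pattern 2
print(np.round(np.diag(A_minus_2), 2))  # direction 0 retained, 1 and 2 damped
```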
5. Examples and Implementation Notes
For diagonal conceptors $C = \mathrm{diag}(c_1, \dots, c_n)$, $B = \mathrm{diag}(b_1, \dots, b_n)$ with $c_i, b_i \in (0, 1)$, the Boolean operators behave as follows (Jaeger, 2014):

| Operation | Formula | Result |
|---|---|---|
| Negation | $(\neg C)_{ii} = 1 - c_i$ | Passes exactly the directions $C$ suppresses |
| Conjunction | $(C \wedge B)_{ii} = \dfrac{c_i b_i}{c_i + b_i - c_i b_i}$ | Retains only directions strong in both (at most $\min(c_i, b_i)$) |
| Disjunction | $(C \vee B)_{ii} = \dfrac{c_i + b_i - 2 c_i b_i}{1 - c_i b_i}$ | Retains directions strong in either (at least $\max(c_i, b_i)$) |
Efficient implementation requires careful treatment of pseudoinverses and low-rank cases. For large $n$, SVD or eigendecomposition is recommended to manage numerical instabilities and regularization (Jaeger, 2014, Postmus et al., 9 Oct 2024). In high dimensions, Boolean operations are most stable when performed in a basis where all relevant conceptors are diagonal.
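One way to realize this, sketched under the assumption that the conceptors share an eigenbasis (the helper name and the clipping tolerance `eps` are arbitrary choices):

```python
import numpy as np

def and_shared_basis(C, B, eps=1e-10):
    """AND for conceptors with a common eigenbasis: diagonalize once, combine
    eigenvalues elementwise, reconstruct. Avoids repeated matrix inversions."""
    w, V = np.linalg.eigh(C)                      # eigenbasis of C (shared by B)
    lam = np.clip(w, 0.0, 1.0)
    mu = np.clip(np.diag(V.T @ B @ V), 0.0, 1.0)  # B's eigenvalues in that basis
    denom = lam + mu - lam * mu
    combined = np.where(denom > eps, lam * mu / np.maximum(denom, eps), 0.0)
    return V @ np.diag(combined) @ V.T
```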
6. Empirical Impact and Limitations
Empirical studies demonstrate that Boolean conceptor operations:
- Improve stability and retention in continual learning for sentence representations, outperforming retrain-from-scratch methods susceptible to forgetting (Liu et al., 2019).
- Enable reliable compositional control in LLMs, with AND-combined conceptors achieving superior performance on composite activation steering tasks compared to additive or union-trained baselines (Postmus et al., 9 Oct 2024).
- Support memory retrieval, pattern blending, abstraction, and selective deletion within recurrent neural systems, providing a unified algebraic structure for cognitive manipulation (Strock et al., 2020, Jaeger, 2014).
Limitations include computational costs ($O(n^3)$ for matrix inversion), the need for adequate data samples to avoid rank-deficient covariance, and aperture hyperparameter tuning. Boolean operations assume compatible apertures and sufficient subspace overlap for invertibility.
7. Significance in Neural Representation and Abstraction
The Boolean algebra of conceptors extends classical logical calculus to high-dimensional, distributed neural representations. This framework enables rich programmatic manipulation of neural activations—merging, intersecting, abstracting, and suppressing patterns—while maintaining mathematical tractability. The lattice structure introduced by AND/OR supports compositionality and abstraction, with direct application to continual learning, memory management, and controllable generation in deep neural models (Jaeger, 2014, Postmus et al., 9 Oct 2024, Liu et al., 2019, Strock et al., 2020).