
Boolean Operations on Conceptors

  • Boolean operations on conceptors are logical functions (AND, OR, NOT) applied to soft projection matrices that capture characteristic neural subspaces.
  • These operations maintain positive semidefiniteness and eigenvalue constraints while extending classical Boolean algebra into nonlinear, dynamic neural representations.
  • Applications include continual learning, neural memory modeling, and activation steering in large language models, enhancing model stability and performance.

A conceptor is a symmetric positive semidefinite matrix with eigenvalues in $[0,1]$, functioning as a soft projection operator onto the characteristic linear subspace of a dataset or neural activation pattern. Boolean operations on conceptors, namely negation (NOT), conjunction (AND), and disjunction (OR), systematically extend logical calculus to the nonlinear, dynamical setting of neural representations and abstract vector spaces. These operations have been foundational in recurrent neural network dynamics (Jaeger, 2014), continual representation learning (Liu et al., 2019), neural memory modeling (Strock et al., 2020), and activation engineering in LLMs (Postmus et al., 2024).

1. Definition of Conceptor Matrices

Given a collection of vectors $\{x_i\} \subset \mathbb{R}^N$, the corresponding conceptor $C \in \mathbb{R}^{N \times N}$ minimizes the regularized reconstruction loss

$$\frac{1}{n}\sum_{i=1}^n \|x_i - C x_i\|_2^2 + \alpha^{-2}\|C\|_F^2,$$

where $\alpha > 0$ is the aperture parameter controlling the subspace's width. The closed-form solution is

$$C = R (R + \alpha^{-2} I)^{-1}, \quad \text{where } R = \frac{1}{n} X X^\top,$$

with $X = [x_1, \dots, x_n]$. The eigenvalues of $C$ satisfy $0 \leq \mu_i < 1$, ensuring that $C$ is a soft, rather than hard, projector. In spectral terms, $C$ shrinks each principal direction according to the data variance and the aperture.
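A minimal numpy sketch of this closed form (the column-wise data layout and the aperture value are illustrative assumptions):

```python
import numpy as np

def conceptor(X, aperture=10.0):
    """Conceptor of a data matrix X (columns are samples):
    C = R (R + aperture^-2 I)^-1 with R = (1/n) X X^T."""
    N, n = X.shape
    R = (X @ X.T) / n
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

# Data with one high-variance and one low-variance direction:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, 500), rng.normal(0.0, 0.05, 500)])
print(np.linalg.eigvalsh(conceptor(X)))  # ~[0.2, 0.99]: a soft, not hard, projection
```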

2. Formalism of Boolean Operations

Boolean operations on conceptor matrices closely parallel standard set algebra, mapping geometric and logical relationships in activation space to closed algebraic forms.

Negation (NOT)

$$\neg C = I - C$$

This operator captures the orthogonal complement of the subspace encoded by $C$, softly projecting onto directions not present in the original manifold (Jaeger, 2014; Liu et al., 2019; Postmus et al., 2024).

Conjunction (AND)

$$C_1 \wedge C_2 = (C_1^{-1} + C_2^{-1} - I)^{-1}$$

$C_1 \wedge C_2$ projects onto components simultaneously present in $C_1$ and $C_2$, corresponding to the intersection of their ellipsoidal regions (Jaeger, 2014). The Moore–Penrose pseudoinverse is applied when $C_i$ is singular.

Disjunction (OR)

Multiple mathematically equivalent definitions exist. Via De Morgan's law,

$$C_1 \vee C_2 = \neg(\neg C_1 \wedge \neg C_2) = I - \left[(I-C_1)^{-1} + (I-C_2)^{-1} - I\right]^{-1}.$$

Equivalently, in terms of the underlying correlation matrices $R_i = C_i (I - C_i)^{-1}$,

$$C_1 \vee C_2 = (R_1 + R_2)(R_1 + R_2 + I)^{-1}.$$

OR spans all directions captured by either conceptor, yielding the union ellipsoid (Jaeger, 2014; Liu et al., 2019). All three operations preserve symmetry and positive semidefiniteness and keep eigenvalues in $[0,1]$.
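The three operators admit a compact implementation consistent with the formulas above; a sketch, using pseudoinverses as a pragmatic stand-in for the careful rank-deficient treatment in Jaeger (2014):

```python
import numpy as np

def NOT(C):
    """Soft complement: projects onto directions absent from C."""
    return np.eye(len(C)) - C

def AND(C1, C2):
    """Conjunction (C1^-1 + C2^-1 - I)^-1, with pseudoinverses
    covering singular conceptors."""
    I = np.eye(len(C1))
    return np.linalg.pinv(np.linalg.pinv(C1) + np.linalg.pinv(C2) - I)

def OR(C1, C2):
    """Disjunction via De Morgan: NOT(AND(NOT(C1), NOT(C2)))."""
    return NOT(AND(NOT(C1), NOT(C2)))
```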

3. Algebraic Properties and Geometric Intuition

Boolean operations on conceptors satisfy many, though not all, of the classical Boolean identities (Jaeger, 2014):

  • Commutativity: $C_1 \wedge C_2 = C_2 \wedge C_1$, $C_1 \vee C_2 = C_2 \vee C_1$
  • Associativity: $(C_1 \wedge C_2) \wedge C_3 = C_1 \wedge (C_2 \wedge C_3)$, and analogously for $\vee$
  • De Morgan's laws: $\neg(C_1 \wedge C_2) = \neg C_1 \vee \neg C_2$, $\neg(C_1 \vee C_2) = \neg C_1 \wedge \neg C_2$

Idempotence and absorption, by contrast, hold exactly only for hard conceptors (projection matrices with eigenvalues in $\{0,1\}$). For soft conceptors they fail: the De Morgan formula of Section 2 gives $C \vee C$ eigenvalues $2 s_i / (1 + s_i) \neq s_i$, so OR-ing a conceptor with itself amounts to enlarging its aperture by a factor of $\sqrt{2}$. The exact laws can be checked numerically, as in the sketch below.
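A minimal numerical check of these laws, and of the equivalence of the two OR forms from Section 2, reusing conceptor, NOT, AND, and OR from the sketches above:

```python
rng = np.random.default_rng(1)
C1 = conceptor(rng.normal(size=(4, 200)), aperture=1.0)
C2 = conceptor(rng.normal(size=(4, 200)), aperture=2.0)
I = np.eye(4)

# Commutativity of AND (OR is commutative by the same symmetry).
assert np.allclose(AND(C1, C2), AND(C2, C1))

# The De Morgan form of OR agrees with the correlation-matrix form
# (R1 + R2)(R1 + R2 + I)^-1, where Ri = Ci (I - Ci)^-1.
R1 = C1 @ np.linalg.inv(I - C1)
R2 = C2 @ np.linalg.inv(I - C2)
assert np.allclose(OR(C1, C2), (R1 + R2) @ np.linalg.inv(R1 + R2 + I))
```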

Geometrically, negation exchanges which principal directions are passed and which are suppressed; conjunction isolates the intersection ellipsoid (shared directions); disjunction yields the union ellipsoid spanning all directions supported by either pattern (Jaeger, 2014). Spectrally, for commuting conceptors $C = \mathrm{diag}(s_i)$, $D = \mathrm{diag}(t_i)$, AND and OR act elementwise:

$$s_i(C \wedge D) = \frac{s_i t_i}{s_i + t_i - s_i t_i}, \qquad s_i(C \vee D) = \frac{s_i + t_i - 2 s_i t_i}{1 - s_i t_i}$$

4. Applications in Neural Network Architectures

Continual Representation Learning

In continual sentence representation learning, Boolean disjunction accumulates all "common discourse directions" seen across sequential corpora. At stage $i$, after processing a new corpus $D^i$ and computing its temporary conceptor $C^{\mathrm{temp}}$, the running conceptor is updated by

$$C^i = C^{\mathrm{temp}} \vee C^{i-1}.$$

This strategy preserves encoded knowledge from preceding corpora, in contrast to retraining from scratch, which exhibits catastrophic forgetting. Sentence embeddings are then formed by projecting new data away from the accumulated subspaces:

$$f_s = q_s - C^M q_s$$

In the zero-shot mode ($M = 0$), an initial conceptor computed on stop-word vectors alone is used. Empirically, this algorithm achieves significant stability in semantic textual similarity tasks and robust retention of prior knowledge (Liu et al., 2019).
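A schematic sketch of this update loop, reusing conceptor and OR from above (the embedding matrices Q_new, with sentence embeddings as columns, are assumed given):

```python
def continual_update(C_prev, Q_new, aperture=1.0):
    """One stage of continual learning: OR the new corpus's temporary
    conceptor into the running conceptor, C^i = C_temp OR C^{i-1}."""
    return OR(conceptor(Q_new, aperture), C_prev)

def sentence_embedding(q, C_running):
    """Post-process a raw embedding: f_s = q_s - C q_s removes the
    accumulated common discourse directions."""
    return q - C_running @ q
```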

Activation Steering in LLMs

Boolean operations on conceptors facilitate advanced activation engineering in transformer-based LLMs. Given conceptor matrices $C_1, C_2$ for distinct behaviors (e.g., "antonym" and "capitalize"), AND-combined conceptors ($C_1 \wedge C_2$) encode multiple steering constraints more robustly than additive combinations of steering vectors:

$$h'_\ell = \beta_c \left( C^{\mathrm{(combined)}} h_\ell \right)$$

Boolean operations allow for precise geometric control in activation space: intersections enforce simultaneous constraints, while unions aggregate features (Postmus et al., 2024). Empirically, AND-combined conceptors consistently outperform additive baselines on composite function tasks, even surpassing conceptors trained directly on the function's union in certain cases.
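A minimal sketch of the steering step (the behavior conceptors and the value of beta are illustrative assumptions; in practice each conceptor is fit on cached activations for its behavior):

```python
def steer(h, C_combined, beta=3.0):
    """Conceptor steering of one hidden state: h' = beta * C_combined @ h."""
    return beta * (C_combined @ h)

# Combine two behaviors by intersection rather than adding steering vectors:
# C_combined = AND(C_antonym, C_capitalize)   # hypothetical behavior conceptors
```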

Neural Memory and Dynamical Pattern Manipulation

Boolean operations enable construction, intersection, and suppression of stored memory patterns within reservoir computing frameworks. For example, OR combines two stored patterns into a memory capable of replaying either, AND focuses on overlapping structure, and NOT supports targeted erasure (e.g., $C \leftarrow C \wedge \neg C_{v_1}$ forgets pattern $v_1$) (Strock et al., 2020; Jaeger, 2014).
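Sketched with the helpers above (the memory and per-pattern conceptors are hypothetical placeholders):

```python
def store(C_memory, C_pattern):
    """Blend a new pattern into memory: C OR C_pattern replays either."""
    return OR(C_memory, C_pattern)

def forget(C_memory, C_pattern):
    """Targeted erasure: C AND NOT(C_pattern) suppresses one pattern."""
    return AND(C_memory, NOT(C_pattern))
```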

5. Examples and Implementation Notes

For diagonal conceptors $C = \mathrm{diag}(0.8, 0.2)$, $D = \mathrm{diag}(0.5, 0.7)$, the Boolean operators behave as follows (Jaeger, 2014):

| Operation | Formula | Result |
| --- | --- | --- |
| Negation | $I - C$ | $\mathrm{diag}(0.2, 0.8)$ |
| Conjunction | $(C^{-1} + D^{-1} - I)^{-1}$ | $\approx \mathrm{diag}(0.44, 0.18)$ |
| Disjunction | $I - \left((I-C)^{-1} + (I-D)^{-1} - I\right)^{-1}$ | $\approx \mathrm{diag}(0.83, 0.72)$ |
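These values can be reproduced with the NOT, AND, and OR helpers sketched in Section 2:

```python
C, D = np.diag([0.8, 0.2]), np.diag([0.5, 0.7])
print(np.diag(NOT(C)))     # [0.2 0.8]
print(np.diag(AND(C, D)))  # ~[0.4444 0.1842]
print(np.diag(OR(C, D)))   # ~[0.8333 0.7209]
```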

Efficient implementation requires careful treatment of pseudoinverses and low-rank cases. For large $N$, SVD or eigendecomposition is recommended to manage numerical instabilities and regularization (Jaeger, 2014; Postmus et al., 2024). In high dimensions, Boolean operations are most stable when performed in a basis in which all relevant conceptors are diagonal.

6. Empirical Impact and Limitations

Empirical studies demonstrate that Boolean conceptor operations:

  • Improve stability and retention in continual learning for sentence representations, outperforming retrain-from-scratch methods susceptible to forgetting (Liu et al., 2019).
  • Enable reliable compositional control in LLMs, with AND-combined conceptors achieving superior performance on composite activation steering tasks compared to additive or union-trained baselines (Postmus et al., 2024).
  • Support memory retrieval, pattern blending, abstraction, and selective deletion within recurrent neural systems, providing a unified algebraic structure for cognitive manipulation (Strock et al., 2020, Jaeger, 2014).

Limitations include computational costs ($O(N^3)$ for matrix inversion), the need for adequate data samples to avoid rank-deficient covariance estimates, and aperture hyperparameter tuning. Boolean operations assume compatible apertures and sufficient subspace overlap for invertibility.

7. Significance in Neural Representation and Abstraction

The Boolean algebra of conceptors extends classical logical calculus to high-dimensional, distributed neural representations. This framework enables rich programmatic manipulation of neural activations (merging, intersecting, abstracting, and suppressing patterns) while maintaining mathematical tractability. The lattice-like structure introduced by AND/OR supports compositionality and abstraction, with direct application to continual learning, memory management, and controllable generation in deep neural models (Jaeger, 2014; Liu et al., 2019; Strock et al., 2020; Postmus et al., 2024).
