Neural Ternary Semiring: Triadic Reasoning
- Neural Ternary Semiring (NTS) is a differentiable framework that replaces binary operations with a native ternary operator to directly model subject-predicate-object interactions.
- It introduces two neural parameterizations—tensor-based fusion and attention-based aggregation—to effectively capture multi-entity dependencies and enable end-to-end learning.
- Algebraic regularizers enforcing distributivity and associativity improve model consistency and yield superior performance on knowledge graph completion and logical inference tasks.
The Neural Ternary Semiring (NTS) is a learnable, differentiable algebraic architecture for symbolic reasoning, underpinned by the mathematical formalism of ternary Gamma-semirings. Unlike classical semiring frameworks that support only binary composition, NTS allows direct representation of triadic (three-argument) interactions, thus aligning more naturally with tasks involving subject-predicate-object structures, logical rules with multiple premises, and other forms of multi-entity dependency. The core innovation is the replacement of binary products with a native neural ternary operator, learned and regularized so as to enforce approximate distributivity and associativity consistent with ternary semiring theory. This endows neural architectures with a principled mechanism for expressing and manipulating higher-arity relationships, providing a bridge between structured symbolic semantics and gradient-based learning (Gokavarapu et al., 21 Nov 2025).
1. Formal Definition of the Ternary Gamma-Semiring
A ternary $\Gamma$-semiring is defined as follows. $S$ is a set with a commutative monoid structure $(S, +, 0)$, where $+$ is associative, commutative, and admits the neutral element $0$. For each $\gamma \in \Gamma$ (the set of context or relation-type indices), there is a ternary product $[\cdot,\cdot,\cdot]_\gamma : S \times S \times S \to S$. The operations satisfy, for all $a, b, c, d, e \in S$ and $\gamma \in \Gamma$:
- Distributivity in Each Slot:
(i) $[a + b,\, c,\, d]_\gamma = [a, c, d]_\gamma + [b, c, d]_\gamma$; (ii) $[a,\, b + c,\, d]_\gamma = [a, b, d]_\gamma + [a, c, d]_\gamma$; (iii) $[a,\, b,\, c + d]_\gamma = [a, b, c]_\gamma + [a, b, d]_\gamma$
- Ternary Associativity (Left-nested Form): $\big[\,[a, b, c]_\gamma,\, d,\, e\,\big]_\gamma = \big[\,a,\, b,\, [c, d, e]_\gamma\,\big]_\gamma$
These axioms, together with the commutativity and zero laws of the underlying monoid, constitute the formal requirements for the ternary $\Gamma$-semiring structure. The left-nested associativity suffices for well-posed multi-step composition, though further identities are possible (Gokavarapu et al., 21 Nov 2025).
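As a concrete instance, taking $S = \Gamma = \mathbb{R}_{\ge 0}$ with ordinary addition and $[a, b, c]_\gamma = a\gamma bc$ yields a ternary $\Gamma$-semiring. The following minimal Python sketch (all names illustrative, not from the paper) verifies the distributivity and left-nested associativity identities numerically on this toy instance.

```python
# Minimal numeric check of the ternary Gamma-semiring axioms on the toy
# instance S = Gamma = R_{>=0} with [a, b, c]_g = a * g * b * c.
# All names here are illustrative, not taken from the paper.

def tprod(a: float, b: float, c: float, g: float) -> float:
    """Ternary product [a, b, c]_g of the scalar toy instance."""
    return a * g * b * c

a, b, c, d, e, g = 2.0, 3.0, 5.0, 7.0, 11.0, 0.5

# Distributivity in the first slot: [a+b, c, d]_g == [a, c, d]_g + [b, c, d]_g.
assert abs(tprod(a + b, c, d, g) - (tprod(a, c, d, g) + tprod(b, c, d, g))) < 1e-9

# Left-nested ternary associativity: [[a,b,c]_g, d, e]_g == [a, b, [c,d,e]_g]_g.
assert abs(tprod(tprod(a, b, c, g), d, e, g) - tprod(a, b, tprod(c, d, e, g), g)) < 1e-9

print("toy instance satisfies the checked ternary semiring identities")
```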
2. Neural Parameterization of the Ternary Operator
Each element $a \in S$ is embedded as a vector $\mathbf{a} \in \mathbb{R}^d$, and each context index $\gamma \in \Gamma$ is associated with a learnable embedding $\mathbf{g}_\gamma$. Two neural parameterizations are introduced for the ternary operator $f_\gamma(\mathbf{a}, \mathbf{b}, \mathbf{c})$:
- Tensor-based Ternary Fusion:
Compute the tensor outer product $T = \mathbf{a} \otimes \mathbf{b} \otimes \mathbf{c} \in \mathbb{R}^{d \times d \times d}$, then perform the ternary operation via
$$f_\gamma(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \sigma\big(W_\gamma\, \mathrm{vec}(T) + \mathbf{b}_\gamma\big),$$
where $W_\gamma \in \mathbb{R}^{d \times d^3}$, $\mathbf{b}_\gamma \in \mathbb{R}^d$, $\mathrm{vec}(\cdot)$ flattens the tensor, and $\sigma$ is a pointwise nonlinearity such as ReLU or $\tanh$.
- Attention-based Ternary Aggregation:
Assign each argument $\mathbf{x}_i \in \{\mathbf{a}, \mathbf{b}, \mathbf{c}\}$ a value vector $\mathbf{v}_i = V \mathbf{x}_i$. Define attention logits $e_i = \mathbf{g}_\gamma^{\top} \tanh(W \mathbf{x}_i)$, for $i \in \{1, 2, 3\}$, and compute the weights $\alpha_i = \exp(e_i) \big/ \sum_{j=1}^{3} \exp(e_j)$.
Then aggregate $f_\gamma(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \sum_{i=1}^{3} \alpha_i\, \mathbf{v}_i$.
Both parameterizations are fully differentiable and enable end-to-end learning (Gokavarapu et al., 21 Nov 2025).
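A minimal PyTorch sketch of both parameterizations follows, assuming per-context projection matrices for the tensor fusion and shared key/value maps for the attention variant; module and parameter names are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class TensorTernaryFusion(nn.Module):
    """Tensor-based ternary fusion: flatten the outer product a (x) b (x) c
    and map it back to R^d with a context-specific affine layer."""
    def __init__(self, d: int, num_contexts: int):
        super().__init__()
        # One W_gamma in R^{d x d^3} plus bias b_gamma per context gamma.
        self.proj = nn.ModuleList([nn.Linear(d ** 3, d) for _ in range(num_contexts)])

    def forward(self, a, b, c, g: int):
        T = torch.einsum('bi,bj,bk->bijk', a, b, c)   # (batch, d, d, d)
        return torch.relu(self.proj[g](T.flatten(start_dim=1)))

class AttentionTernaryAggregation(nn.Module):
    """Attention-based aggregation: score each argument against the context
    embedding g_gamma and return a convex combination of value vectors."""
    def __init__(self, d: int, num_contexts: int):
        super().__init__()
        self.ctx = nn.Embedding(num_contexts, d)  # context embeddings g_gamma
        self.key = nn.Linear(d, d)                # shared key map W
        self.val = nn.Linear(d, d)                # shared value map V

    def forward(self, a, b, c, g: int):
        x = torch.stack([a, b, c], dim=1)                      # (batch, 3, d)
        logits = torch.tanh(self.key(x)) @ self.ctx.weight[g]  # (batch, 3)
        alpha = torch.softmax(logits, dim=1).unsqueeze(-1)     # (batch, 3, 1)
        return (alpha * self.val(x)).sum(dim=1)                # (batch, d)
```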
3. Algebraic Regularization and Optimization
To encourage the learned ternary operator toward genuine ternary $\Gamma$-semiring behavior, auxiliary regularization losses are applied:
- Associativity Regularizer:
$$\mathcal{L}_{\mathrm{assoc}} = \mathbb{E}\left[\big\| f_\gamma\big(f_\gamma(\mathbf{a}, \mathbf{b}, \mathbf{c}), \mathbf{d}, \mathbf{e}\big) - f_\gamma\big(\mathbf{a}, \mathbf{b}, f_\gamma(\mathbf{c}, \mathbf{d}, \mathbf{e})\big) \big\|_2^2\right]$$
- Distributivity Regularizer (enforced in each argument):
$$\mathcal{L}_{\mathrm{dist}} = \mathbb{E}\left[\big\| f_\gamma(\mathbf{a} + \mathbf{b}, \mathbf{c}, \mathbf{d}) - f_\gamma(\mathbf{a}, \mathbf{c}, \mathbf{d}) - f_\gamma(\mathbf{b}, \mathbf{c}, \mathbf{d}) \big\|_2^2\right] + \text{(analogous terms for the second and third slots)}$$
The overall loss for training combines the task-specific objective with these regularizers:
$$\mathcal{L} = \mathcal{L}_{\mathrm{task}} + \lambda_{\mathrm{assoc}}\, \mathcal{L}_{\mathrm{assoc}} + \lambda_{\mathrm{dist}}\, \mathcal{L}_{\mathrm{dist}}$$
A soundness theorem establishes that, if the associativity and distributivity terms converge to zero, the learned operator exactly satisfies the ternary semiring axioms. This is shown by noting that the regularizers are squared residuals of the defining polynomial identities; their vanishing under a continuous parameterization yields pointwise satisfaction of the axioms (Gokavarapu et al., 21 Nov 2025).
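In practice the regularizers are batched squared residuals of the identities above; the following Python sketch (hypothetical helper names, assuming `f` is a ternary operator module such as those in Section 2) illustrates one way to compute them.

```python
# Batched residuals of the algebraic identities; f is a ternary operator
# taking (a, b, c, g) and returning a tensor of the same shape as a.
# Helper names are illustrative, not from the paper.

def associativity_residual(f, a, b, c, d, e, g):
    """Squared residual of left-nested ternary associativity:
    || f(f(a,b,c), d, e) - f(a, b, f(c,d,e)) ||^2, averaged over the batch."""
    lhs = f(f(a, b, c, g), d, e, g)
    rhs = f(a, b, f(c, d, e, g), g)
    return ((lhs - rhs) ** 2).sum(dim=-1).mean()

def distributivity_residual(f, a, b, c, d, g):
    """Squared residual of distributivity in the first slot; the second and
    third slots are penalized by analogous terms."""
    lhs = f(a + b, c, d, g)
    rhs = f(a, c, d, g) + f(b, c, d, g)
    return ((lhs - rhs) ** 2).sum(dim=-1).mean()

# Total loss: loss = task_loss + lam_assoc * L_assoc + lam_dist * L_dist
```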
4. Training Procedure and Hyperparameterization
NTS training proceeds as follows. All entity and relation-type embeddings are initialized randomly (uniform or Gaussian). For knowledge-graph triples $(h, r, t)$, the composite embedding is obtained via $\mathbf{z} = f_\gamma(\mathbf{h}, \mathbf{r}, \mathbf{t})$ with scoring $s(h, r, t) = \mathbf{w}^{\top} \mathbf{z}$ for a learnable readout vector $\mathbf{w}$. Two standard losses are supported:
- Negative log-likelihood over softmax-normalized scores: $\mathcal{L}_{\mathrm{task}} = -\log \dfrac{\exp(s(h, r, t))}{\sum_{t'} \exp(s(h, r, t'))}$.
- Margin ranking loss using negative sampling.
Hyperparameters comprise the embedding dimension $d$, the regularizer weights $\lambda_{\mathrm{assoc}}$ and $\lambda_{\mathrm{dist}}$, a learning rate typically in the $10^{-4}$ to $10^{-3}$ range, batch size $128$–$1024$, and early stopping on validation MRR with a fixed patience in epochs. The Adam optimizer is used. At each iteration, a mixed mini-batch is drawn for task and regularization tuples, and the total loss is back-propagated (Gokavarapu et al., 21 Nov 2025).
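A single training step might then look as follows. This is a sketch under the stated setup: `model` is assumed to wrap embedding lookup and the ternary operator, and `scorer`, `assoc_residual`, and `dist_residual` are hypothetical helpers rather than names from the paper.

```python
import torch

def train_step(model, scorer, batch, optimizer, lam_assoc, lam_dist):
    """One NTS training step: margin ranking loss on (h, r, t) triples with
    sampled negatives, plus the algebraic regularizers from Section 3."""
    h, r, t, t_neg = batch                  # indices for a mini-batch of triples
    z_pos = model(h, r, t)                  # composite embedding via f_gamma
    z_neg = model(h, r, t_neg)              # corrupted-tail composite

    # Margin ranking loss with negative sampling (margin fixed at 1.0 here).
    task_loss = torch.relu(1.0 - scorer(z_pos) + scorer(z_neg)).mean()

    # Regularizer residuals on sampled tuples; assoc_residual / dist_residual
    # are hypothetical helpers wrapping the losses sketched in Section 3.
    reg = lam_assoc * model.assoc_residual() + lam_dist * model.dist_residual()

    loss = task_loss + reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```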
5. Evaluation Protocols and Metrics
NTS is evaluated on triadic reasoning tasks including:
- Knowledge-graph completion (standard splits: FB15K-237, WN18RR),
- Curated logical rule templates ("if A and B then C"),
- Synthetic triadic constraint datasets with non-decomposable ground truth.
Metrics comprise mean reciprocal rank (MRR), Hits@$k$ ($k = 1, 3, 10$), and Rule Satisfaction Rate for logical inference. Baseline comparisons are made with:
- Binary semiring models using sequential binary compositions,
- Translational and bilinear knowledge graph embeddings (TransE, DistMult, ComplEx),
- Neural Tensor Networks and trilinear models without algebraic regularization.
Ablation studies assess the effect of disabling the associativity ($\lambda_{\mathrm{assoc}} = 0$) or distributivity ($\lambda_{\mathrm{dist}} = 0$) losses (Gokavarapu et al., 21 Nov 2025).
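For reference, MRR and Hits@$k$ reduce to simple functions of the rank of the true entity among all scored candidates. The sketch below assumes 1-indexed, filtered ranks are computed upstream; names are illustrative.

```python
import numpy as np

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """MRR and Hits@k from the 1-indexed ranks of each true entity among
    all scored candidates (filtered ranking assumed computed upstream)."""
    ranks = np.asarray(ranks, dtype=float)
    mrr = float(np.mean(1.0 / ranks))
    hits = {k: float(np.mean(ranks <= k)) for k in ks}
    return mrr, hits

# Example: ranks of the correct tail entity for five test triples.
mrr, hits = mrr_and_hits([1, 4, 2, 12, 1])
print(f"MRR={mrr:.3f}", {f"Hits@{k}": v for k, v in hits.items()})
```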
6. Quantitative Outcomes and Empirical Patterns
Reported validation results indicate that NTS attains substantial improvements across datasets:
| Model | MRR (FB15K-237) | Hits@10 (FB15K-237) | Rule Satisfaction Rate | Triadic Constraint Recovery |
|---|---|---|---|---|
| TransE | 0.68 | 0.80 | ~0.70 | 60–70% |
| ComplEx | — | 0.80 | — | — |
| Unreg. Neural-Tensor | 0.73 | — | — | — |
| NTS | 0.78 | 0.85 | ~0.88 | >95% |
- On FB15K-237, NTS achieves an MRR of ~0.78 (cf. 0.68 for TransE, 0.73 for unregularized neural-tensor).
- Hits@10 rises to ~0.85 for NTS (vs. 0.76 for DistMult, 0.80 for ComplEx).
- On logical rule datasets, NTS attains Rule Satisfaction Rate ~0.88 (baselines plateau near 0.70).
- For synthetic, non-decomposable triadic constraints, NTS recovers >95% valid triples (binary models stagnate at 60–70%).
- Ablation results: setting $\lambda_{\mathrm{assoc}} = 0$ reduces MRR by 0.05–0.07; setting $\lambda_{\mathrm{dist}} = 0$ reduces MRR by 0.03–0.06. Both regularizers are thus critical to the model's performance and algebraic consistency.
These results collectively suggest that native ternary composition models irreducible, three-way relationships more faithfully than cascaded binary products. Algebraic regularization improves generalization, particularly in sparse or rules-driven regimes (Gokavarapu et al., 21 Nov 2025).
7. Context and Significance
NTS addresses the inherent mismatch between binary algebraic tools and triadic phenomena pervasive in symbolic AI, such as knowledge graph reasoning and multi-premise logical inference. By supplying a learnable, regularized ternary operation, NTS supports direct encoding and manipulation of triadic dependencies, leading to superior performance on relevant benchmarks and stronger logical consistency. The algebraic regularization framework provides theoretical guarantees regarding the emergence of genuine ternary semiring structure in the learned operator. A plausible implication is that this framework could extend naturally to higher-arity algebraic settings and structured decision models, with the flexibility to encode complex, interpretable symbolic relationships within neural architectures (Gokavarapu et al., 21 Nov 2025).