Analogy-Preserving Functions
- Analogy-Preserving Functions are mathematical constructs defined via generalized means, where a parameter p uniquely governs analogical relations among ordered quadruples.
- They unify discrete Boolean analogies with continuous models, providing a rigorous basis for inference in vector embeddings and powered-linear maps.
- Applications span AI, Bayesian relational learning, and deep generative models, with provable error bounds and closed-form solutions for analogical inference.
A parameterized analogy refers to a formal framework wherein the concept of analogy is indexed by a real or complex parameter, typically controlling the family of means or transformations used to establish the analogical relationship between objects. This paradigm provides a unifying, mathematically rigorous account of analogical reasoning that encompasses classical Boolean models, continuous regression, functional data, and applications in representation learning and generative modeling.
1. Mathematical Foundations of Parameterized Analogies
The core mathematical principle behind parameterized analogies is the generalized mean of order $p$, also known as the $p$-mean or Hölder mean. For positive real numbers $a, b$ and parameter $p \in \mathbb{R} \setminus \{0\}$, the $p$-mean is defined as
$$M_p(a, b) = \left(\frac{a^p + b^p}{2}\right)^{1/p},$$
with the limit $p \to 0$ yielding the geometric mean $\sqrt{ab}$. It interpolates between the minimum (as $p \to -\infty$), maximum ($p \to +\infty$), harmonic ($p = -1$), geometric ($p \to 0$), and arithmetic ($p = 1$) means.
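As a concrete illustration, the following Python sketch (the function name and numeric example are ours, not from the cited papers) computes $M_p(a, b)$ and shows the interpolation across the classical means:

```python
import math

def p_mean(a: float, b: float, p: float) -> float:
    """Generalized (Hölder) mean of order p for two positive reals.
    The p = 0 case is handled by its limit, the geometric mean."""
    if p == 0.0:
        return math.sqrt(a * b)                    # limit p -> 0
    return ((a**p + b**p) / 2.0) ** (1.0 / p)

a, b = 2.0, 8.0
print(p_mean(a, b, -50))  # ~2.03: approaches min(a, b) as p -> -inf
print(p_mean(a, b, -1))   # 3.2:   harmonic mean
print(p_mean(a, b, 0))    # 4.0:   geometric mean
print(p_mean(a, b, 1))    # 5.0:   arithmetic mean
print(p_mean(a, b, 50))   # ~7.89: approaches max(a, b) as p -> +inf
```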
Given $a, b, c, d$ in $\mathbb{R}_{>0}$, these four numbers are said to be in "analogy at power $p$" (notation $a : b ::_p c : d$) if and only if
$$M_p(a, d) = M_p(b, c),$$
or equivalently,
$$a^p - b^p = c^p - d^p.$$
A unique $p$ always exists for positive, strictly ordered quadruples. This generalizes the classical arithmetic analogy ($p = 1$: $a - b = c - d$), subsuming geometric, harmonic, and extreme value analogies as $p$ varies. Analogy equations for unknowns (e.g., solving $a : b ::_p c : x$ for the fourth term $x$) admit closed-form solutions for any $p$:
$$x = \left(b^p + c^p - a^p\right)^{1/p}.$$
These formulations smoothly extend to complex-valued tuples away from algebraic singularities (Lepage et al., 26 Jul 2024).
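Both the closed-form completion and the existence of a unique analogy power can be checked numerically. In the sketch below (the helper names, bracketing interval, and iteration count are our choices), `solve_fourth` applies the closed form above, and `analogy_power` locates $p$ by bisection on $g(p) = (a^p - b^p - c^p + d^p)/p$, which extends continuously to $p = 0$ with value $\ln(a/b) - \ln(c/d)$:

```python
import math

def solve_fourth(a: float, b: float, c: float, p: float) -> float:
    """Closed-form missing term x of a : b :: c : x at power p,
    i.e. a^p - b^p = c^p - x^p (limit p -> 0: a/b = c/x)."""
    if p == 0.0:
        return b * c / a
    return (b**p + c**p - a**p) ** (1.0 / p)

def analogy_power(a: float, b: float, c: float, d: float) -> float:
    """Locate the power p at which a : b :: c : d holds.
    A plain-bisection sketch, not a robust general-purpose solver."""
    def g(p: float) -> float:
        if abs(p) < 1e-12:
            return math.log(a / b) - math.log(c / d)  # continuous limit at 0
        return (a**p - b**p - c**p + d**p) / p

    lo, hi = -50.0, 50.0      # assumes the root lies in this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(analogy_power(2, 4, 6, 8))     # -> ~1.0 (arithmetic analogy)
print(solve_fourth(2, 4, 6, p=1.0))  # -> 8.0
print(solve_fourth(2, 4, 8, p=0.0))  # -> 16.0 (geometric analogy)
```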
2. Unification of Discrete and Continuous Analogical Models
Generalizing analogical proportion from Boolean to continuous domains is achieved by parameterizing the notion of analogy as above. For vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}$ in $\mathbb{R}_{>0}^n$ and a (possibly componentwise) power vector $\mathbf{p} = (p_1, \ldots, p_n)$,
$$\mathbf{a} : \mathbf{b} ::_{\mathbf{p}} \mathbf{c} : \mathbf{d} \iff a_i^{p_i} - b_i^{p_i} = c_i^{p_i} - d_i^{p_i} \quad \text{for all } i.$$
When $p = 1$, Boolean analogy is recovered (Klein four-group or minimal models); for $p \to 0$ the geometric analogy appears; in the $p \to \pm\infty$ limit, extreme value analogies emerge. This parameterization unifies discrete affine classifiers and continuous regression, as proven formally in (Cunha et al., 13 Nov 2025).
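As a quick sanity check (the helper below is ours, purely illustrative), the componentwise condition can be tested directly, including mixed per-component powers:

```python
import numpy as np

def is_vector_analogy(a, b, c, d, p, tol=1e-9):
    """Check the componentwise power-p analogy a : b :: c : d,
    i.e. a_i^{p_i} - b_i^{p_i} = c_i^{p_i} - d_i^{p_i} for all i.
    p may be a scalar or a per-component vector of powers."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    return np.allclose(a**p - b**p, c**p - d**p, atol=tol)

# p = 1 in every component: the ordinary arithmetic vector analogy.
print(is_vector_analogy([1, 2], [3, 4], [5, 6], [7, 8], p=1))                  # True
# Mixed powers: arithmetic in component 1, quadratic in component 2.
print(is_vector_analogy([1, 1], [2, 2], [3, 2], [4, np.sqrt(7.0)], p=[1, 2]))  # True
```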
A comprehensive characterization stipulates that the set of analogy-preserving functions for $\mathbf{p}$ (i.e., functions $f$ such that $f(\mathbf{a}), f(\mathbf{b}), f(\mathbf{c}), f(\mathbf{d})$ are in analogy whenever $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}$ are in parameterized analogy) consists precisely of powered-linear maps, i.e., linear, polynomial, or monomial forms depending on the power vector (Cunha et al., 13 Nov 2025).
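The exact closed form from the cited characterization is not reproduced here; one natural reading, used purely for illustration, is a map that is affine after the power transform, $f(\mathbf{x}) = \big(\beta + \sum_i \alpha_i x_i^{p_i}\big)^{1/q}$ for an output power $q$. The sketch below (the coefficients, powers, and this closed form are all our assumptions) verifies numerically that such a map carries an input analogy at powers $\mathbf{p}$ to an output analogy at power $q$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = np.array([0.7, 1.3]), 2.0   # illustrative coefficients
p_in = np.array([1.0, 2.0])               # per-component input powers
q = 3.0                                   # output power

def f(x):
    """Assumed powered-linear form: affine in x_i**p_i, then a 1/q root."""
    return (beta + alpha @ (x ** p_in)) ** (1.0 / q)

# Build a quadruple in componentwise power-p_in analogy; keeping a small
# guarantees d's powers stay positive.
a = rng.uniform(0.5, 1.0, size=2)
b = rng.uniform(1.0, 2.0, size=2)
c = rng.uniform(1.0, 2.0, size=2)
d = (c**p_in - a**p_in + b**p_in) ** (1.0 / p_in)

# f transports the input analogy to a power-q analogy on the outputs:
print(np.isclose(f(a)**q - f(b)**q, f(c)**q - f(d)**q))  # True
```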
3. Parameterized Analogies in AI: Representation, Inference, and Embeddings
Parameterized analogy has emerged as a critical tool for both inferring new data points and evaluating latent representations. In the context of AI, embeddings (vectors, matrices, or tensors) often encode semantic structure. The requirement that quadruples of embedded points support analogical inference for some provides both a geometric and algebraic criterion for the evaluation or design of such embeddings (Lepage et al., 26 Jul 2024).
In machine learning, analogical inference in the embedding space often seeks to ensure that, given embedded points $\mathbf{a}, \mathbf{b}, \mathbf{c}$, the solution to $\mathbf{a} : \mathbf{b} ::_{\mathbf{p}} \mathbf{c} : \mathbf{x}$ produces an $\mathbf{x}$ lying "semantically" at the appropriate point (e.g., for analogies like "king:man::queen:woman" in word embeddings). The explicit parameterization facilitates sound inference and supports closed-form solutions for the missing element, crucial for analogy-driven data augmentation and structured prediction.
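For $p = 1$ the closed-form completion reduces to the familiar vector-offset rule used with word embeddings. The following sketch applies it to toy vectors (the embeddings are made up for illustration, not taken from any trained model):

```python
import numpy as np

def complete_analogy(va, vb, vc, p=1.0):
    """Closed-form fourth term of va : vb :: vc : x at power p, applied
    componentwise.  For p = 1 this is the vector-offset rule
    x = vb + vc - va; for p != 1 the components must be positive."""
    if p == 1.0:
        return vb + vc - va
    return (vb**p + vc**p - va**p) ** (1.0 / p)

# Toy 'embeddings' (hypothetical 2-D vectors):
king  = np.array([0.8, 0.3])
man   = np.array([0.6, 0.1])
woman = np.array([0.5, 0.2])

# man : king :: woman : x  gives  x = king - man + woman
print(complete_analogy(man, king, woman))  # -> [0.7 0.4]
```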
Furthermore, theoretical work shows that the worst-case and average-case error rates for analogy-based inference are tightly controlled by the function's distance to the space of powered-linear maps, providing provable guarantees (Cunha et al., 13 Nov 2025).
4. Learning and Ranking Parameterized Analogies in Relational Data
Parameterized analogies serve as the foundation for analogical ranking and relational learning in multivariate, networked data settings. Learning whether a new relation is analogous to a set of example relations $S$ is formulated as a Bayesian comparison of predictive probabilities based on learned parameter vectors $\theta$. Each relation $r$ is mapped to a vector via a feature embedding $\phi$, and the Bayesian posterior over $\theta$ is updated for the observed query set $S$ (0912.5193).
The scoring function is
$$\mathrm{score}(r) = p\big(\phi(r) \mid S\big),$$
where $p(\cdot \mid S)$ is the posterior predictive under parameters fit to $S$. Although this framework is not tied to a specific power parameter, it is an instance of parameterizing the analogy criterion with respect to model parameters, embedding the general principle of parameterized analogical fit (0912.5193).
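To make the scoring concrete, here is a minimal conjugate-Gaussian sketch of posterior-predictive scoring. It is a stand-in for illustration only: the cited work's actual relational model, features, and priors differ, and all names and hyperparameters below are ours:

```python
import math

def log_posterior_predictive(x, S, tau2=1.0, sigma2=1.0):
    """Log predictive density of a new feature value x given examples S,
    under the toy conjugate model theta ~ N(0, tau2), s_i ~ N(theta, sigma2)."""
    n = len(S)
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)   # posterior variance of theta
    post_mean = post_var * sum(S) / sigma2       # posterior mean of theta
    pred_var = post_var + sigma2                 # predictive variance for x
    return -0.5 * (math.log(2 * math.pi * pred_var)
                   + (x - post_mean) ** 2 / pred_var)

# Rank candidate relations by predictive fit to the query set S:
S = [1.9, 2.1, 2.0]                  # phi(r) values of the example relations
for x in (2.05, 0.0):
    print(x, log_posterior_predictive(x, S))  # in-cluster candidate scores higher
```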
5. Parameterized Analogies in Deep Generative and Symbolic Models
Parameterized analogies have been operationalized in both deep neural and symbolic generative modeling contexts.
- Visual domain: For neural radiance fields (NeRFs), a parameterized analogy between two NeRFs is instantiated by learning a mapping (via a neural network) that transfers appearance from a source NeRF to a target NeRF, conditioned on semantic correspondence in an embedding space. Here, the parameterization occurs through neural architecture weights, loss functions, and attention mechanisms rather than an explicit scalar $p$, but the workflow formalizes analogy as an isomorphism in semantic feature space (Fischer et al., 13 Feb 2024). The approach notably outperforms both traditional and 3D-aware baselines in terms of consistency and perceptual preference.
- Symbolic reasoning: Neural Analogical Matching Networks (AMN) learn to produce analogies respecting the principles of cognitive structure-mapping theory (SMT), using parameterized modules such as DAG-LSTM encoders and pointer-decoding transformers. The learned parameters enforce analogical constraints (one-to-one correspondence, parallel connectivity, systematicity) as soft preferences. AMN achieves performance comparable to the Structure-Mapping Engine (SME) without explicit hand-coded rules (Crouse et al., 2020).
6. Connections to Classical Mathematical and Functional Identities
The methodology of parameterized analogies also resonates in mathematical analysis, particularly in the context of functional equations and identities. Generalized analogies of Jacobi's formula, as established via the Schwarz map and hypergeometric equations, yield a plethora of parameterized relations among special functions—for example, using analogies with different hypergeometric parameters to establish transformations among theta series and Eisenstein series (Matsumoto, 2022). Here, the role of the parameter is analytically explicit, embedded in the variation of function arguments and mapping properties.
7. Implications, Extensions, and Theoretical Significance
The theory of parameterized analogies provides a one-parameter (or vector-parameter) family that unifies all classical analogical frameworks—arithmetic, geometric, harmonic, extreme value, and Boolean. Every quadruple of ordered positive real numbers admits a unique analogy power, and every powered analogy can be reduced to the arithmetic form via a monotonic transformation: substituting $a' = a^p$, $b' = b^p$, $c' = c^p$, $d' = d^p$ turns $a^p - b^p = c^p - d^p$ into the arithmetic proportion $a' - b' = c' - d'$. The framework operates uniformly over real and complex domains, and extends directly to vector-valued data and functional inference.
This approach also admits explicit error bounds for analogy-based inference, fully characterizes analogy-preserving functions, and connects learning-based parameterizations in neural and Bayesian models to their analytic roots (Lepage et al., 26 Jul 2024, Cunha et al., 13 Nov 2025). It thereby furnishes both a mathematical foundation and practical route for analogical reasoning, supporting robust applications in machine learning, representation science, and mathematical analysis.