
Analogy-Preserving Functions

Updated 20 November 2025
  • Analogy-Preserving Functions are mathematical constructs defined via generalized means, where a parameter $p$ uniquely governs analogical relations among ordered quadruples.
  • They unify discrete Boolean analogies with continuous models, providing a rigorous basis for inference in vector embeddings and powered-linear maps.
  • Applications span AI, Bayesian relational learning, and deep generative models, with provable error bounds and closed-form solutions for analogical inference.

A parameterized analogy refers to a formal framework wherein the concept of analogy is indexed by a real or complex parameter, typically controlling the family of means or transformations used to establish the analogical relationship between objects. This paradigm provides a unifying, mathematically rigorous account of analogical reasoning that encompasses classical Boolean models, continuous regression, functional data, and applications in representation learning and generative modeling.

1. Mathematical Foundations of Parameterized Analogies

The core mathematical principle behind parameterized analogies is the generalized mean of order $p$, also known as the $p$-mean or Hölder mean. For positive real numbers $x, y$ and parameter $p \in \mathbb{R}$, the $p$-mean is defined as

$$m_p(x, y) = \left( \frac{x^p + y^p}{2} \right)^{1/p}, \quad p \neq 0,$$

with the $p \to 0$ limit yielding the geometric mean $\sqrt{xy}$. It interpolates between the minimum (as $p \to -\infty$), maximum ($p \to +\infty$), harmonic ($p = -1$), geometric ($p = 0$), and arithmetic ($p = 1$) means.
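
A minimal Python sketch of the $p$-mean, folding in the limiting cases (the function name and the explicit handling of $p = 0$ and $p = \pm\infty$ are illustrative choices, not from the cited papers):

```python
import math

def p_mean(x: float, y: float, p: float) -> float:
    """Generalized (Hölder) mean of order p for positive reals x, y."""
    if p == 0.0:                       # limit p -> 0: geometric mean
        return math.sqrt(x * y)
    if math.isinf(p):                  # limits p -> +/- infinity
        return max(x, y) if p > 0 else min(x, y)
    return ((x**p + y**p) / 2) ** (1 / p)

# Sanity checks against the classical special cases:
assert math.isclose(p_mean(2, 8, 1), 5.0)    # arithmetic mean
assert math.isclose(p_mean(2, 8, 0), 4.0)    # geometric mean
assert math.isclose(p_mean(2, 8, -1), 3.2)   # harmonic mean
```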

Given $a < b < c < d$ in $\mathbb{R}_{>0}$, these four numbers are said to be in "analogy at power $p$" (notation $a ::^p b ::^p c ::^p d$) if and only if

$$m_p(a, d) = m_p(b, c),$$

or equivalently,

$$a^p + d^p = b^p + c^p.$$

A unique $p$ always exists for positive, strictly ordered quadruples. This generalizes the classical arithmetic analogy ($p = 1$: $a + d = b + c$), subsuming geometric, harmonic, and extreme value analogies as $p$ varies. Analogy equations for unknowns (e.g., $x$ solving $m_p(a, b) = m_p(c, x)$) admit closed-form solutions for any $p \neq 0$:

$$x = \left[ a^p + b^p - c^p \right]^{1/p},$$

with the $p \to 0$ limit yielding $x = ab/c$ (the geometric case).

These formulations smoothly extend to complex-valued tuples away from algebraic singularities (Lepage et al., 26 Jul 2024).
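
As a concrete sketch, the closed-form completion and the numerical recovery of the unique analogy power might look as follows; the use of SciPy's brentq and the bracket $[-50, 50]$ are assumptions for the example, not prescribed by the source:

```python
import math
from scipy.optimize import brentq  # SciPy is assumed available

def p_mean(x, y, p):
    """Hölder mean of order p of positive reals x, y (geometric mean at p = 0)."""
    return math.sqrt(x * y) if p == 0 else ((x**p + y**p) / 2) ** (1 / p)

def solve_analogy(a, b, c, p):
    """Closed-form x solving m_p(a, b) = m_p(c, x) (valid when a^p + b^p > c^p)."""
    return a * b / c if p == 0 else (a**p + b**p - c**p) ** (1 / p)

def analogy_power(a, b, c, d, lo=-50.0, hi=50.0):
    """Numerically recover the unique p with m_p(a, d) = m_p(b, c);
    the bracketing interval [lo, hi] is an illustrative choice."""
    return brentq(lambda p: p_mean(a, d, p) - p_mean(b, c, p), lo, hi)

# 2 + 7 = 4 + 5, so the quadruple (2, 4, 5, 7) has analogy power p = 1:
print(round(analogy_power(2, 4, 5, 7), 6))   # 1.0
```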

2. Unification of Discrete and Continuous Analogical Models

Generalizing analogical proportion from Boolean to continuous domains is achieved by parameterizing the notion of analogy as above. For vectors in $\mathbb{R}_{+}^n$ and a (possibly componentwise) power vector $\mathbf{p} = (p_1, \ldots, p_n)$,

$$\mathbf{a} ::^{\mathbf{p}} \mathbf{b} ::^{\mathbf{p}} \mathbf{c} ::^{\mathbf{p}} \mathbf{d} \;\Longleftrightarrow\; \forall i, \quad m_{p_i}(a_i, d_i) = m_{p_i}(b_i, c_i).$$

When $p_i = 1$, Boolean analogy is recovered (Klein four-group or minimal models); for $p_i = 0$ the geometric analogy ($ad = bc$) appears; in the $p \to \infty$ limit, extreme value analogies emerge. This parameterization unifies discrete affine classifiers and continuous regression, as proven formally in (Cunha et al., 13 Nov 2025).
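
A minimal NumPy sketch of this componentwise test (the function names and tolerance are illustrative):

```python
import numpy as np

def p_mean_vec(x, y, p):
    """Componentwise Hölder means m_{p_i}(x_i, y_i) for positive vectors."""
    x, y, p = map(np.asarray, (x, y, p))
    geometric = np.sqrt(x * y)                                 # p_i = 0 components
    general = ((x**p + y**p) / 2) ** (1 / np.where(p == 0, 1, p))
    return np.where(p == 0, geometric, general)

def in_analogy(a, b, c, d, p, tol=1e-9):
    """Test a ::^p b ::^p c ::^p d, i.e. m_{p_i}(a_i, d_i) = m_{p_i}(b_i, c_i)."""
    return np.allclose(p_mean_vec(a, d, p), p_mean_vec(b, c, p), atol=tol)

# Arithmetic analogy (p_i = 1) in the first slot, geometric (p_i = 0) in the second:
print(in_analogy([2, 2], [4, 4], [5, 8], [7, 16], p=[1, 0]))  # True
```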

A comprehensive characterization stipulates that the set of analogy-preserving functions for $\mathbf{p}, q$ (i.e., functions $f$ such that $f(\mathbf{a}) ::^q f(\mathbf{b}) ::^q f(\mathbf{c}) ::^q f(\mathbf{d})$ whenever $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}$ are in parameterized analogy) consists precisely of powered-linear maps:

$$f(x_1, \ldots, x_n) = \left( \sum_{j=1}^n a_j x_j^{p_j} + b \right)^{1/q}, \quad a_j, b \ge 0,$$

i.e., linear, polynomial, or monomial forms depending on the power vector (Cunha et al., 13 Nov 2025).
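
A toy instantiation of such a powered-linear map, checking on an invented quadruple that its images remain in analogy at power $q$ (all coefficients here are illustrative):

```python
import numpy as np

def powered_linear(x, coef, bias, p, q):
    """f(x) = (sum_j coef_j * x_j^{p_j} + bias)^(1/q) with coef_j, bias >= 0."""
    x, coef, p = map(np.asarray, (x, coef, p))
    return (np.dot(coef, x**p) + bias) ** (1 / q)

# A quadruple in componentwise arithmetic analogy (p = (1, 1)):
# 2 + 7 = 4 + 5 and 1 + 8 = 3 + 6.
A, B, C, D = [2., 1.], [4., 3.], [5., 6.], [7., 8.]
f = lambda x: powered_linear(x, coef=[0.5, 2.0], bias=1.0, p=[1, 1], q=3)
# The images satisfy f(A)^q + f(D)^q = f(B)^q + f(C)^q:
print(np.isclose(f(A)**3 + f(D)**3, f(B)**3 + f(C)**3))  # True
```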

3. Parameterized Analogies in AI: Representation, Inference, and Embeddings

Parameterized analogy has emerged as a critical tool for both inferring new data points and evaluating latent representations. In the context of AI, embeddings (vectors, matrices, or tensors) often encode semantic structure. The requirement that quadruples of embedded points support analogical inference for some $p$ provides both a geometric and algebraic criterion for the evaluation or design of such embeddings (Lepage et al., 26 Jul 2024).

In machine learning, analogical inference in the embedding space often seeks to ensure that for $\mathbf{a}, \mathbf{b}, \mathbf{c}$, the solution to $m_p(\mathbf{a}, \mathbf{b}) = m_p(\mathbf{c}, \mathbf{x})$ produces $\mathbf{x}$ lying "semantically" at the appropriate point (e.g., for analogies like "king:man::queen:woman" in word embeddings). The explicit parameterization facilitates sound inference and supports closed-form solutions for the missing element, crucial for analogy-driven data augmentation and structured prediction.
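
A sketch of closed-form analogy completion in an embedding space, assuming strictly positive coordinates as the theory requires; in practice the computed vector would then be matched to its nearest vocabulary neighbor:

```python
import numpy as np

def complete_analogy(a, b, c, p=1.0):
    """Componentwise solution x of m_p(a, b) = m_p(c, x) for positive vectors.
    For p = 1 this reduces to the familiar vector-offset rule x = a + b - c."""
    a, b, c = map(np.asarray, (a, b, c))
    if p == 0:                     # geometric case: a_i * b_i = c_i * x_i
        return a * b / c
    return (a**p + b**p - c**p) ** (1 / p)

# Toy positive embeddings (invented for illustration):
a, b, c = [0.9, 0.2], [0.7, 0.6], [0.8, 0.1]
print(complete_analogy(a, b, c, p=1))   # [0.8 0.7]
```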

Furthermore, theoretical work shows that the worst-case and average-case error rates for analogy-based inference are tightly controlled by the function's distance to the space of powered-linear maps, providing provable guarantees (Cunha et al., 13 Nov 2025).

4. Learning and Ranking Parameterized Analogies in Relational Data

Parameterized analogies serve as the foundation for analogical ranking and relational learning in multivariate, networked data settings. Learning whether a new relation $(A, B)$ is analogous to a set $S$ of example relations is formulated as a Bayesian comparison of predictive probabilities based on learned parameter vectors $\Theta$. Each relation is mapped to a vector via a feature embedding $\Phi(A, B)$, and the Bayesian posterior is updated for the observed query set $S$ (0912.5193).

The scoring function is

$$\text{score}(A, B) = \log P(L^{AB} = 1 \mid X^{AB}, S) - \log P(L^{AB} = 1 \mid X^{AB}),$$

where $P(L^{AB} = 1 \mid X^{AB}, S)$ is the posterior predictive under parameters fit to $S$. Although this framework is not tied to a specific power parameter, it is an instance of parameterizing the analogy criterion with respect to model parameters, embedding the general principle of parameterized analogical fit (0912.5193).
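
A rough sketch of this scoring scheme, using a MAP logistic-regression fit with a Gaussian prior as a stand-in for the full posterior predictive; the optimizer and all hyperparameters here are illustrative, not taken from the cited paper:

```python
import numpy as np

def fit_map_logreg(X, y, prior_var=1.0, lr=0.1, steps=2000):
    """MAP weights for logistic regression under a Gaussian prior N(0, prior_var I)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        prob = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (y - prob) - w / prior_var   # gradient of the log-posterior
        w += lr * grad / len(y)
    return w

def analogy_score(x_query, X_S, w_background, prior_var=1.0):
    """log P(L=1 | x, S) - log P(L=1 | x): how much the example set S raises
    the predicted probability that the query relation holds."""
    w_S = fit_map_logreg(X_S, np.ones(len(X_S)), prior_var)
    log_p = lambda w: -np.log1p(np.exp(-x_query @ w))   # log sigmoid(x . w)
    return log_p(w_S) - log_p(w_background)

# Toy feature embeddings Phi(A, B) for the example set S; background weights
# are taken at the prior mean (zeros) purely for illustration:
X_S = np.array([[1.0, 0.5], [0.8, 0.7]])
print(analogy_score(np.array([0.9, 0.6]), X_S, w_background=np.zeros(2)))  # > 0
```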

5. Parameterized Analogies in Deep Generative and Symbolic Models

Parameterized analogies have been operationalized in both deep neural and symbolic generative modeling contexts.

  • Visual domain: In NeRFs, a parameterized analogy between NeRFs $A : A' :: B : B'$ is instantiated by learning a mapping (via a neural network) that transfers appearance from $A'$ to $B'$, conditioned on semantic correspondence in an embedding space. Here, the parameterization occurs through neural architecture weights, loss functions, and attention mechanisms, not an explicit scalar $p$, but the workflow formalizes analogy as an isomorphism in semantic feature space (Fischer et al., 13 Feb 2024). The approach notably outperforms both traditional and 3D-aware baselines in terms of consistency and perceptual preference.
  • Symbolic reasoning: Neural Analogical Matching Networks (AMN) learn to produce analogies respecting the principles of cognitive structure-mapping theory (SMT) using parameterized modules such as DAG-LSTM encoders and pointer-decoding transformers. The learned parameters $\theta$ enforce analogical constraints (one-to-one correspondence, parallel connectivity, systematicity) as soft preferences. AMN achieves performance comparable to the Structure-Mapping Engine (SME) without explicit hand-coded rules (Crouse et al., 2020).

6. Connections to Classical Mathematical and Functional Identities

The methodology of parameterized analogies also resonates in mathematical analysis, particularly in the context of functional equations and identities. Generalized analogies of Jacobi's formula, as established via the Schwarz map and hypergeometric equations, yield a plethora of parameterized relations among special functions—for example, using analogies with different hypergeometric parameters to establish transformations among theta series and Eisenstein series (Matsumoto, 2022). Here, the role of the parameter is analytically explicit, embedded in the variation of function arguments and mapping properties.

7. Implications, Extensions, and Theoretical Significance

The theory of parameterized analogies provides a one-parameter (or vector-parameter) family that unifies all classical analogical frameworks—arithmetic, geometric, harmonic, extreme value, and Boolean. Every quadruple of ordered positive real numbers admits a unique analogy power, and every powered analogy can be reduced to the arithmetic form via monotonic transformation. The framework operates uniformly over real and complex domains, and extends directly to vector-valued data and functional inference.
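
For $p \neq 0$, this reduction is the termwise monotone map $\varphi_p(t) = t^p$ (with $\varphi_0(t) = \ln t$ covering the geometric case):

$$a ::^p b ::^p c ::^p d \;\Longleftrightarrow\; \varphi_p(a) + \varphi_p(d) = \varphi_p(b) + \varphi_p(c).$$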

This approach also admits explicit error bounds for analogy-based inference, fully characterizes analogy-preserving functions, and connects learning-based parameterizations in neural and Bayesian models to their analytic roots (Lepage et al., 26 Jul 2024, Cunha et al., 13 Nov 2025). It thereby furnishes both a mathematical foundation and practical route for analogical reasoning, supporting robust applications in machine learning, representation science, and mathematical analysis.
