
Parameterized Analogies: A Unified Framework

Updated 20 November 2025
  • Parameterized analogies are formal frameworks that generalize classical analogical proportions through mathematical parameterization of generalized means.
  • They unify analogical reasoning across discrete, continuous, and neural domains, enabling consistent inference in diverse applications.
  • The framework offers explicit error bounds and guarantees, facilitating advances in neural architectures, generative modeling, and cognitive inference.

Parameterized analogies are formal frameworks that generalize the classical notion of analogical proportion using mathematical parameters, often to unify analogical reasoning across numerical, symbolic, and learned representations. These frameworks offer a systematic approach for defining, solving, and leveraging analogical relationships in settings ranging from pure mathematics and formal logic to artificial intelligence, continuous regression, structured symbolic reasoning, and neural representation transfer.

1. Mathematical Foundations: Generalized Means and p-Analogies

Parameterized analogies formalize the concept of "analogical proportion" among tuples (typically quadruples) of numbers by parameterizing the analogy relation via generalized means. Specifically, four strictly increasing positive real numbers $a < b < c < d$ are said to be in analogy with respect to power $p$ (denoted $a ::^p b ::^p c ::^p d$) if the $p$-mean of the extremes equals the $p$-mean of the inner values:

$$M_p(a,d) = M_p(b,c), \qquad \text{where } M_p(x,y) = \left(\frac{x^p + y^p}{2}\right)^{1/p} \text{ for } p \neq 0,$$

with the limit $p \to 0$ corresponding to the geometric mean, and $p \to +\infty$ and $p \to -\infty$ to the maximum and minimum, respectively.

This condition reduces to the algebraic constraint

$$a^p + d^p = b^p + c^p \quad (p \neq 0),$$

which always admits a unique solution in $p$ for strictly increasing tuples, owing to monotonicity properties and an application of the Intermediate-Value Theorem. Any $p$-analogy can be reduced to an arithmetic ($p=1$) analogy via the monotonic transformation $\varphi_p(x) = x^p$, so

$$a ::^p b ::^p c ::^p d \iff \varphi_p(a) ::^1 \varphi_p(b) ::^1 \varphi_p(c) ::^1 \varphi_p(d).$$

For $p \to 0$, $\varphi_0(x) = \ln x$ is used, and the analogy becomes arithmetic in the log domain (Lepage et al., 26 Jul 2024).

The analogical equation $M_p(a,b) = M_p(c,x)$ admits the unique positive solution $x = [a^p + b^p - c^p]^{1/p}$, and these formulations extend to complex numbers provided branch-point singularities are avoided (Lepage et al., 26 Jul 2024).
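
As an illustrative sketch (not code from the cited papers), these quantities can be computed directly in Python: the generalized mean $M_p$, the solution $x = [a^p + b^p - c^p]^{1/p}$ of the analogical equation, and the unique power $p$ for a given increasing quadruple, found by bisection exactly as the Intermediate-Value argument suggests.

```python
import math

def p_mean(x, y, p):
    """Generalized (power) mean M_p(x, y); the p = 0 case is the geometric mean."""
    if p == 0:
        return math.sqrt(x * y)
    return ((x ** p + y ** p) / 2) ** (1 / p)

def solve_analogy(a, b, c, p):
    """Unique positive x with M_p(a, b) = M_p(c, x)."""
    if p == 0:
        return a * b / c  # geometric analogy: a * b = c * x
    return (a ** p + b ** p - c ** p) ** (1 / p)

def find_p(a, b, c, d, lo=-50.0, hi=50.0, tol=1e-12):
    """Bisection for the unique power p with M_p(a, d) = M_p(b, c), for 0 < a < b < c < d."""
    g = lambda p: p_mean(a, d, p) - p_mean(b, c, p)
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:   # sign change: root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2
```

For example, `find_p(1, 2, 3, 4)` recovers $p = 1$ (arithmetic, since $1 + 4 = 2 + 3$), while `find_p(1, 2, 4, 8)` recovers $p = 0$ (geometric, since $1 \cdot 8 = 2 \cdot 4$).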

2. Unification of Boolean, Discrete, and Continuous Analogies

The generalized mean framework supports a smooth progression from classical Boolean or discrete analogies to continuous analogical inference. When restricted to the Boolean cube $\{0,1\}^n$ with $p=1$, the arithmetic analogy $d = b + c - a$ reproduces the classical minimal/Klein analogical proportion $d = a \oplus b \oplus c$ (XOR-linear). Analogical inference is then sound for affine Boolean functions and approximately correct for functions near such affine functions (Cunha et al., 13 Nov 2025).
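
The soundness claim for affine Boolean functions can be checked exhaustively; the sketch below uses a hypothetical affine function (the weights `w` and `bias` are arbitrary examples, not from the paper).

```python
from itertools import product

def affine_bool(w, bias):
    """Affine (XOR-linear) Boolean function f(x) = (w . x) XOR bias over GF(2)."""
    def f(x):
        acc = bias
        for wi, xi in zip(w, x):
            acc ^= wi & xi
        return acc
    return f

def analogical_solution(a, b, c):
    """Minimal/Klein proportion: componentwise XOR solution d_i = a_i ^ b_i ^ c_i."""
    return tuple(ai ^ bi ^ ci for ai, bi, ci in zip(a, b, c))

# Exhaustive check over the 3-cube: for an affine f, analogical inference
# f(d) = f(a) XOR f(b) XOR f(c) is exact for every triple (a, b, c).
f = affine_bool(w=(1, 0, 1), bias=1)
for a, b, c in product(product((0, 1), repeat=3), repeat=3):
    d = analogical_solution(a, b, c)
    assert f(d) == f(a) ^ f(b) ^ f(c)
```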

For real-valued vectors, analogical proportion is defined componentwise:

$$\mathbf{a} :^{\mathbf{p}} \mathbf{b} ::^{\mathbf{p}} \mathbf{c} :^{\mathbf{p}} \mathbf{d} \iff m_{p_i}(a_i, d_i) = m_{p_i}(b_i, c_i) \quad \forall i.$$

This generalizes direct analogical extension and supports regression and function inference over continuous domains. The parameter family subsumes classical analogies: $p = \pm\infty$ for max/min, $p = 1$ for arithmetic, $p = 0$ for geometric, and $p = -1$ for harmonic analogies (Cunha et al., 13 Nov 2025).
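
A minimal sketch of componentwise analogical extension with a separate power per coordinate (function names are illustrative; the $p = \pm\infty$ cases are simplified to returning the max/min of the inner terms, assuming a solution exists).

```python
import math

def solve_coord(a, b, c, p):
    """d with m_p(a, d) = m_p(b, c) in one coordinate (assumes a solution exists)."""
    if p == 0:                   # geometric analogy: a * d = b * c
        return b * c / a
    if p == math.inf:            # max analogy: max(a, d) = max(b, c)
        return max(b, c)
    if p == -math.inf:           # min analogy: min(a, d) = min(b, c)
        return min(b, c)
    return (b ** p + c ** p - a ** p) ** (1 / p)

def solve_vector(a, b, c, powers):
    """Componentwise analogical extension with one power p_i per coordinate."""
    return [solve_coord(ai, bi, ci, pi) for ai, bi, ci, pi in zip(a, b, c, powers)]
```

For instance, `solve_vector([1, 1, 4], [2, 2, 2], [3, 4, 4], [1, 0, -1])` gives `[4.0, 8.0, 2.0]`: arithmetic in the first coordinate ($2 + 3 - 1$), geometric in the second ($2 \cdot 4 / 1$), harmonic in the third.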

3. Characterization of Analogy-Preserving Functions and Error Bounds

Functions suitable for analogical inference in the parameterized analogy framework are characterized explicitly. A continuous mapping $f : \mathbb{R}^n_+ \to \mathbb{R}_+$ is analogy-preserving (for given powers $\mathbf{p}$ and $q$) if and only if it takes the form

$$f(x_1,\dots,x_n) = \left(\sum_{j=1}^n a_j x_j^{p_j} + b\right)^{1/q}$$

for nonnegative $a_j$ and $b$. These are "powered-linear" functions (Cunha et al., 13 Nov 2025). In Boolean domains ($p=1$, $q=1$), these are the affine (XOR-linear) functions.
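
A numeric illustration with hypothetical parameters: for a powered-linear $f$, any quadruple in componentwise $\mathbf{p}$-analogy yields outputs in $q$-analogy, i.e. $f(A)^q + f(D)^q = f(B)^q + f(C)^q$, because $f^q$ is linear in the transformed coordinates $x_j^{p_j}$.

```python
def f(x, coeffs, bias, powers, q):
    """Powered-linear function f(x) = (sum_j a_j x_j^{p_j} + b)^{1/q}."""
    s = bias + sum(a * xj ** p for a, xj, p in zip(coeffs, x, powers))
    return s ** (1 / q)

# Hypothetical parameters, and a quadruple satisfying, per coordinate j,
# A_j^{p_j} + D_j^{p_j} = B_j^{p_j} + C_j^{p_j}  (e.g. 1 + 7 = 4 + 4 for p = 2).
coeffs, bias, powers, q = (2.0, 1.0), 0.5, (1.0, 2.0), 3.0
A, B, C, D = (1.0, 1.0), (2.0, 2.0), (3.0, 2.0), (4.0, 7.0 ** 0.5)
lhs = f(A, coeffs, bias, powers, q) ** q + f(D, coeffs, bias, powers, q) ** q
rhs = f(B, coeffs, bias, powers, q) ** q + f(C, coeffs, bias, powers, q) ** q
assert abs(lhs - rhs) < 1e-9   # outputs are in q-analogy
```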

Worst-case and average-case error bounds for analogical inference are available under smoothness assumptions. For functions $f$ close to analogy-preserving, the uniform inference error for any quadruple in analogical proportion is bounded in terms of the deviation from $AP_{(\mathbf{p};q)}$, with both uniform ($L_\infty$) and expected-case (probabilistic) bounds given explicitly (Cunha et al., 13 Nov 2025).

4. Parameterized Analogies in Relational and Graph-Based Models

Parameterized analogical reasoning has been embedded in probabilistic relational models to score the fit of new object pairs $(A,B)$ to a query set $\mathbf{S}$ of labeled pairs. For instance, in relational Bayesian sets (RBsets), pairs are embedded via feature maps $X^{AB} \in \mathbb{R}^K$, and a shared log-odds parameter $\Theta$ models the likelihood of a link.

The analogy score for $(A,B)$ relative to $\mathbf{S}$ is computed as a log Bayes factor:

$$\mathrm{score}(A,B) = \log P(L^{AB}=1 \mid X^{AB}, \mathbf{S}, L^{\mathbf{S}}=1) - \log P(L^{AB}=1 \mid X^{AB}),$$

where inference proceeds via variational Bayesian logistic regression (0912.5193). The framework encodes task-relevant analogy via parameterization and allows for robust analogical retrieval in information networks and biological interaction prediction.
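
A simplified sketch of this scoring scheme, replacing the variational Bayesian step with a MAP point estimate under a Gaussian prior (all names and hyperparameters are illustrative, not the RBsets implementation).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def map_logistic(X, y, theta0, lr=0.1, steps=500, prior_prec=1.0):
    """MAP logistic regression with a Gaussian prior centred at theta0
    (a point-estimate stand-in for the variational Bayes step)."""
    theta = list(theta0)
    for _ in range(steps):
        # gradient of log prior + log likelihood
        grad = [-prior_prec * (t - t0) for t, t0 in zip(theta, theta0)]
        for x, yi in zip(X, y):
            err = yi - sigmoid(sum(t * xj for t, xj in zip(theta, x)))
            grad = [g + err * xj for g, xj in zip(grad, x)]
        theta = [t + lr * g for t, g in zip(theta, grad)]
    return theta

def analogy_score(x, S, theta0):
    """log P(link | x, S) - log P(link | x): positive when S raises the link odds."""
    theta_post = map_logistic(S, [1] * len(S), theta0)   # all pairs in S are linked
    post = sigmoid(sum(t * xj for t, xj in zip(theta_post, x)))
    prior = sigmoid(sum(t * xj for t, xj in zip(theta0, x)))
    return math.log(post) - math.log(prior)
```

A pair whose features resemble the query set scores above zero; a dissimilar pair scores below zero.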

Neural architectures for structural analogy, such as the Analogical Matching Network (AMN), parameterize analogical matching between symbolic DAGs using neural weights $\theta$. These encode analogical structure through graph embeddings, attention-based correspondence scoring, and pointer-network selection, enabling end-to-end learning of analogical reasoning consistent with the principles of Structure-Mapping Theory (Crouse et al., 2020).

5. Parameterized Analogies in Representation and Attribute Transfer

In machine perception and generative modeling, parameterized analogies guide transfer in high-dimensional learned representations. For example, in appearance transfer for neural radiance fields (NeRF Analogies), the analogy $A : A' :: B : B'$ is operationalized by learning a radiance field $L_\theta$ that transfers semantic appearance from a source field $A'$ to target geometry $B$. Correspondences are established by maximizing cosine similarity between deep semantic features $f_j^T$ (from DiNO-ViT) of the target and $f_i^S$ of the source:

$$\phi(j) = \arg\max_i \operatorname{sim}(f_j^T, f_i^S).$$

$L_\theta$ is then supervised to match source colors at corresponding points, with multi-view consistency regularized over large sets of views. This parameterization enables systematic exploration of the Cartesian product of geometry and appearance, and achieves state-of-the-art performance in multi-view consistent style and attribute transfer (Fischer et al., 13 Feb 2024).
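
The correspondence step can be sketched in a few lines of NumPy (feature arrays are assumed to be one row per point; this illustrates the $\arg\max$ matching rule, not the NeRF Analogies codebase).

```python
import numpy as np

def correspondences(feat_target, feat_source):
    """phi(j) = argmax_i cos-sim(f_j^T, f_i^S): best-matching source index per target point."""
    T = feat_target / np.linalg.norm(feat_target, axis=1, keepdims=True)
    S = feat_source / np.linalg.norm(feat_source, axis=1, keepdims=True)
    return np.argmax(T @ S.T, axis=1)   # shape: (num_target_points,)

def transfer_colors(source_colors, phi):
    """Supervision targets: the source color at each matched point."""
    return source_colors[phi]
```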

6. Functional Analogy and Mathematical Generalization

Parameterized analogies admit further generalization in the context of mathematical identities and special functions. For example, analogies of Jacobi's formula relate classical theta function identities to functional equations for hypergeometric series via the Schwarz map and its inverse. Analogue identities of the form

$$\vartheta_{00}(\tau)^4 + \vartheta_{10}(\tau)^4 = F(1/6,1/2,1;\nu(\tau))^2$$

provide functional analogies between modular forms and hypergeometric functions, highlighting the breadth of parameterized analogical structures beyond standard numerical domains (Matsumoto, 2022).

7. Practical Significance and Theoretical Impact

The parameterized analogy framework offers a principled, unifying perspective that extends analogical reasoning:

  • Across data types (from discrete/binary to real/complex-valued features)
  • Over model classes (from affine Boolean classifiers to continuous powered-linear regressors)
  • In neural architectures, as both explicit symmetry constraints (powered means) and implicit parameterizations (AMN, NeRF Analogies)
  • With formal guarantees: uniqueness of parameters, characterization of analogical extension domains, and explicit error bounds under deviation from analogy-preserving structure

This unified treatment enables robust, mathematically grounded analogy-based inference in artificial intelligence, cognitive modeling, algebraic analysis, generative modeling, and structured prediction, connecting classical symbolic analogy with the requirements of modern data-driven and vectorial representation domains (Lepage et al., 26 Jul 2024, Cunha et al., 13 Nov 2025, Crouse et al., 2020, Fischer et al., 13 Feb 2024).
