
Factorization-Machine QUBO Reductions

Updated 12 December 2025
  • FMQA is a computational framework that maps factorization machine surrogates to QUBO formulations, enabling efficient optimization via quantum and classical annealing.
  • The methodology integrates precise algebraic QUBO mapping, systematic constraint handling, and iterative surrogate refinement to accelerate black-box combinatorial optimization.
  • Empirical results in materials science and engineering demonstrate FMQA’s scalability, sample efficiency, and reduced computational overhead compared to classical approaches.

Factorization-Machine–Assisted QUBO Reductions (FMQA) is a computational paradigm that integrates factorization machine surrogates with quantum or classical Ising machines to accelerate black-box combinatorial optimization. FMQA enables the translation of high-dimensional, expensive objective functions—commonly arising in materials design, chemical discovery, and engineering topology optimization—into efficient Quadratic Unconstrained Binary Optimization (QUBO) problems that are tractable for annealing-based hardware and large-scale solvers. The method’s core lies in the precise algebraic mapping of factorization machine surrogates into QUBO form, systematic handling of constraints, and iterative surrogate refinement through black-box sampling and retraining.

1. Mathematical Foundation: FM Surrogates and QUBO Mapping

The FMQA framework relies on training a second-order factorization machine (FM) model on a set of binary-encoded candidate solutions with their corresponding black-box objective values. Given binary variables $x = (x_1, \dots, x_n)^\top \in \{0,1\}^n$, the FM is defined as

$$\hat{y}(x) = w_0 + \sum_{i=1}^n w_i x_i + \sum_{1 \le i < j \le n} \langle v_i, v_j \rangle\, x_i x_j$$

where $w_0$ is a global bias, $w_i$ are linear coefficients, and $v_i \in \mathbb{R}^K$ are latent factor vectors for each variable. The pairwise interaction term $\langle v_i, v_j \rangle$ captures second-order feature dependencies via a low-rank approximation, enabling expressive modeling of combinatorial landscapes with only $\mathcal{O}(nK)$ parameters.
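The pairwise term never needs to be accumulated pair by pair: the standard FM identity $\sum_{i<j} \langle v_i, v_j \rangle x_i x_j = \tfrac{1}{2}\left(\|V^\top x\|^2 - \sum_i x_i \|v_i\|^2\right)$ (using $x_i^2 = x_i$) yields $\mathcal{O}(nK)$ evaluation. A minimal NumPy sketch, with illustrative names:

import numpy as np

def fm_predict(x, w0, w, V):
    """Evaluate the FM surrogate y_hat(x) in O(nK) for one binary vector.

    x : (n,) vector in {0,1}; w : (n,) linear coefficients; V : (n, K).
    """
    s = V.T @ x                                   # per-factor sums, (K,)
    pairwise = 0.5 * (s @ s - x @ (V ** 2).sum(axis=1))
    return w0 + w @ x + pairwise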

The surrogate prediction is directly mapped into QUBO form by setting

$$Q_{ii} = w_i, \qquad Q_{ij} = \langle v_i, v_j \rangle \quad \text{for } i < j$$

$$E_{\text{QUBO}}(x) = x^\top Q x + \text{const}$$

This correspondence ensures an exact translation of the FM hypothesis into a binary quadratic objective, preserving both linear and quadratic interactions (since $x_i^2 = x_i$ for binary variables, the linear terms sit on the diagonal of $Q$). Constraints such as one-hot, cardinality, or higher-order conditions are integrated via additional penalty terms (e.g., $\alpha(\sum_{i \in S} x_i - c)^2$), which are absorbed into the diagonal and off-diagonal elements of $Q$ (Tamura et al., 24 Jul 2025, Wilson et al., 2021, Matsumori et al., 2022, Couzinie et al., 7 Aug 2024).
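Concretely, the mapping amounts to copying FM parameters into a matrix. A minimal sketch under the upper-triangular convention (function name illustrative):

import numpy as np

def fm_to_qubo(w0, w, V):
    """Build Q with Q_ii = w_i and Q_ij = <v_i, v_j> for i < j.

    For binary x, x_i^2 = x_i, so linear terms go on the diagonal and
    y_hat(x) = x^T Q x + w0; the constant w0 does not affect the argmin.
    """
    Q = np.triu(V @ V.T, k=1)          # pairwise couplings above the diagonal
    Q[np.diag_indices(len(w))] = w     # linear coefficients on the diagonal
    return Q, w0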

2. FMQA Optimization Workflow

The FMQA method consists of a sequential surrogate-guided optimization loop:

  1. Data Acquisition: Begin with an initial dataset $D = \{ (x^{(m)}, y^{(m)}) \}_{m=1}^{M_0}$, where $y^{(m)} = f(x^{(m)})$ comes from black-box evaluations.
  2. FM Surrogate Training: Fit the FM by minimizing $L(w_0, w, V) = \sum_{(x, y) \in D} [\hat{y}(x) - y]^2$, optionally with $L_2$ regularization.
  3. QUBO Construction: Extract $Q_{ii}$ and $Q_{ij}$ as above; add constraint penalties if required.
  4. Annealing Query: Minimize $E_{\text{QUBO}}(x)$ using a QUBO solver (quantum annealer, digital annealer, or simulated annealing) to obtain a promising candidate $x^{\text{new}}$.
  5. Evaluation and Augmentation: Evaluate $f(x^{\text{new}})$ and add $(x^{\text{new}}, f(x^{\text{new}}))$ to $D$.
  6. Iteration: Repeat steps 2–5 for a fixed number of rounds or until a convergence criterion (e.g., stagnation or an objective-improvement threshold) is met.

Prototypical pseudocode for the black-box optimization setting:

initialize dataset D = { (x^(m), y^(m)) for m in 1...M0 }
for t in 1...T:
    fit FM surrogate (w0, w, V) on D
    build QUBO: Q_ii = w_i, Q_ij = dot(v_i, v_j)
    x_new = QUBO_solver(Q)
    if x_new already in D: randomize or re-sample
    y_new = f(x_new)
    D = D ∪ { (x_new, y_new) }
return best x in D
(Tamura et al., 24 Jul 2025, Couzinie et al., 7 Aug 2024)

This iterative loop provides a surrogate-accelerated alternative to classical, sample-inefficient approaches, using the annealer as a dedicated optimizer for the surrogate’s quadratic surface.
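For illustration, here is a minimal, self-contained version of this loop, assuming a NumPy environment. The toy black box f, the full-batch gradient-descent FM trainer, the simulated-annealing solver, and all hyperparameters are illustrative stand-ins, not the implementation of any cited paper:

import numpy as np

rng = np.random.default_rng(0)

def fm_predict_batch(X, w0, w, V):
    """FM predictions for binary rows of X, computed in O(MnK)."""
    S = X @ V                                          # (M, K) factor sums
    pair = 0.5 * ((S ** 2).sum(axis=1) - X @ (V ** 2).sum(axis=1))
    return w0 + X @ w + pair

def fit_fm(X, y, K=8, epochs=3000, lr=0.02, reg=1e-4):
    """Fit (w0, w, V) by full-batch gradient descent on squared loss."""
    M, n = X.shape
    w0, w = y.mean(), np.zeros(n)
    V = 0.01 * rng.standard_normal((n, K))
    for _ in range(epochs):
        r = fm_predict_batch(X, w0, w, V) - y          # residuals, (M,)
        S = X @ V
        gV = X.T @ (r[:, None] * S) - (r @ X)[:, None] * V
        w0 -= lr * r.mean()
        w -= lr * (X.T @ r / M + reg * w)
        V -= lr * (gV / M + reg * V)
    return w0, w, V

def fm_to_qubo(w, V):
    """Upper-triangular Q with Q_ii = w_i, Q_ij = <v_i, v_j> for i < j."""
    Q = np.triu(V @ V.T, k=1)
    Q[np.diag_indices(len(w))] = w
    return Q

def sa_qubo(Q, sweeps=300, T0=2.0, T1=1e-2):
    """Toy simulated annealing minimizing x^T Q x over x in {0,1}^n."""
    n = Q.shape[0]
    h, A = np.diag(Q).copy(), Q + Q.T
    np.fill_diagonal(A, 0.0)                           # no self-coupling
    x = rng.integers(0, 2, n)
    for t in range(sweeps):
        T = T0 * (T1 / T0) ** (t / (sweeps - 1))       # geometric cooling
        for k in rng.permutation(n):
            delta = (1 - 2 * x[k]) * (h[k] + A[k] @ x) # single-flip cost
            if delta < 0 or rng.random() < np.exp(-delta / T):
                x[k] ^= 1
    return x

# Toy black box: a hidden quadratic the optimizer only sees through queries.
n = 20
J = np.triu(rng.standard_normal((n, n)), k=1)
f = lambda x: float(x @ J @ x - 0.5 * x.sum())

X = rng.integers(0, 2, (10, n)).astype(float)          # initial random designs
y = np.array([f(x) for x in X])
for _ in range(15):                                    # sample-train-sample loop
    w0, w, V = fit_fm(X, y)
    x_new = sa_qubo(fm_to_qubo(w, V)).astype(float)
    if (X == x_new).all(axis=1).any():                 # duplicate: perturb a bit
        x_new = np.abs(x_new - (rng.random(n) < 0.1))
    X, y = np.vstack([X, x_new]), np.append(y, f(x_new))
print("best objective found:", y.min())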

3. Constraint Handling and Penalty Formulation

FMQA provides a direct algebraic recipe for encoding a variety of combinatorial and structural constraints into the QUBO matrix. Crucial patterns include:

  • Fixed-Cardinality (choose-K):

$$\alpha \left(\sum_{i} x_i - K \right)^2$$

Expands to linear and quadratic penalties absorbed into $Q_{ii}$ and $Q_{ij}$, strongly discouraging solutions that violate the cardinality $K$ (Matsumori et al., 2022, Tamura et al., 24 Jul 2025); the full expansion is worked out at the end of this section.

  • One-Hot/Mutual Exclusion:

$$\alpha\left(\sum_{i \in S} x_i - 1\right)^2$$

Ensures “exactly one” selection within set $S$ by penalizing any non-compliant binary patterns (Endo et al., 5 Jul 2024).

  • Structural/Domain-Specific: Additional graph constraints, adjacency terms, or side information may be quadratized and incorporated in the same manner, enabling FMQA to adapt to domain-specific restrictions without introducing auxiliary variables or exponential constraint-logic terms.

Penalty strengths $\alpha$ are chosen large relative to the typical magnitude of the $Q_{ij}$ entries to ensure feasibility, particularly when using hardware annealers with limited coefficient precision (Endo et al., 5 Jul 2024, Tamura et al., 24 Jul 2025).
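For concreteness, expanding the choose-$K$ penalty with the binary identity $x_i^2 = x_i$ shows exactly how it is absorbed into $Q$:

$$\alpha\left(\sum_i x_i - K\right)^2 = \alpha\left[(1 - 2K)\sum_i x_i + 2\sum_{i<j} x_i x_j + K^2\right]$$

so $Q_{ii} \leftarrow Q_{ii} + \alpha(1 - 2K)$ and $Q_{ij} \leftarrow Q_{ij} + 2\alpha$ for $i < j$, while the constant $\alpha K^2$ is dropped since it does not change the minimizer. The one-hot penalty is the special case $K = 1$ restricted to the set $S$.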

4. FM Initialization, Function Smoothing, and Numerical Stability

Proper initialization of FM parameters critically affects optimization convergence, especially when using warm-starts or when the initial quadratic surrogate should closely match a known approximate Hamiltonian. Low-rank initialization via eigen-decomposition of a reference interaction matrix $J$ and projection onto a rank-$K$ subspace provides a theoretically justified and empirically robust starting point. This procedure minimizes the Frobenius-norm error between the true and FM-implied coupling matrices and was systematically analyzed using random matrix theory to predict the effective rank needed for a target tolerance (Seki et al., 16 Oct 2024). Warm-starts reduce the number of sample–train–sample cycles required to reach optimality.
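Since the FM interaction matrix is $VV^\top$, which is positive semidefinite, a natural Frobenius-norm fit keeps the largest non-negative eigenpairs of $J$. The sketch below is a simplified reading of such an initialization, assuming a symmetric reference matrix $J$; it is not the exact procedure of (Seki et al., 16 Oct 2024):

import numpy as np

def init_fm_from_coupling(J, K):
    """Initialize latent vectors V so that V V^T approximates J in rank K.

    J : (n, n) symmetric reference couplings; keeps the K largest
    eigenpairs and clips negative eigenvalues (V V^T must be PSD).
    """
    vals, vecs = np.linalg.eigh(J)             # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:K]           # indices of the K largest
    lam = np.clip(vals[idx], 0.0, None)        # drop negative parts
    return vecs[:, idx] * np.sqrt(lam)         # (n, K)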

Function surface “noise”—random parameter drift in regions never sampled—can lead to pathologically rough QUBO landscapes and degraded annealing performance, particularly with high-dimensional discrete binary encodings of continuous variables. Graph-Laplacian function smoothing regularization,

$$R(\Theta) = \sum_{(p,q) \in A}\left[(b_p - b_q)^2 + \|v_p - v_q\|_2^2\right]$$

where $A$ denotes pairs of adjacent bits within encoding blocks (here $b_p$ are the linear coefficients, written $w_i$ in Section 1), is added to the FM loss to enforce local smoothness and propagate informative gradients to all model parameters. This regularization restores smooth descent and substantially accelerates convergence, effectively halving the number of expensive black-box evaluations required to reach near-optimal solutions in practical test cases (Endo et al., 5 Jul 2024).
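A minimal sketch of this regularizer, assuming $w$ plays the role of the linear coefficients $b_p$ and that adjacency means consecutive bits within each encoding block (both assumptions for illustration):

import numpy as np

def smoothing_penalty(w, V, adjacent_pairs):
    """R(Theta) = sum over adjacent bit pairs (p, q) of
    (w_p - w_q)^2 + ||v_p - v_q||_2^2; add lambda_s * R to the FM loss."""
    p, q = np.asarray(adjacent_pairs).T
    return float(((w[p] - w[q]) ** 2).sum() + ((V[p] - V[q]) ** 2).sum())

# e.g., consecutive bits of one encoded continuous variable in [start, stop):
# adjacent_pairs = [(i, i + 1) for i in range(start, stop - 1)]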

5. Computational Complexity and Hardware Scalability

FM training scales as $\mathcal{O}(MnK)$ per epoch, where $M$ is the sample count, $n$ is the number of binary variables, and $K$ is the FM rank. Quadratic matrix formation for QUBO mapping requires $\mathcal{O}(n^2 K)$ operations. Hardware QUBO solvers (quantum annealers, digital annealers) provide nearly constant wall-clock solution times for fixed $n$, while simulated annealing and classical heuristics scale mildly with $n$. This yields an end-to-end pipeline in which the computational bottleneck is exported from the CPU (classical surrogate search) to specialized hardware. Compared to Bayesian optimization with Gaussian processes ($\mathcal{O}(M^3)$ training, NP-hard acquisition maximization), FMQA achieves both linear-complexity retraining and hardware acceleration for QUBO minimization (Tamura et al., 24 Jul 2025, Wilson et al., 2021).

The following table summarizes the complexity scaling:

Step                    Complexity
FM Training             $\mathcal{O}(MnK)$ per epoch
QUBO Matrix Assembly    $\mathcal{O}(n^2 K)$
Annealer QUBO Solve     $\mathcal{O}(1)$ (hardware annealer); $\approx \mathcal{O}(n)$ (simulated annealing)

Embedding limitations on current quantum hardware (e.g., the roughly 180-variable clique capacity of D-Wave's Pegasus topology) cap the practical $n$; hybrid and decomposition methods extend applicability to larger problems (Wilson et al., 2021).
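Whether a dense FM-derived QUBO of a given size fits on hardware can be checked offline with the open-source minorminer embedding tool against a Pegasus target graph; a small sketch (problem size and seed are illustrative):

import networkx as nx
import dwave_networkx as dnx
import minorminer

n = 120                                    # candidate dense QUBO size
source = nx.complete_graph(n)              # FM QUBOs are generically fully dense
target = dnx.pegasus_graph(16)             # Advantage-scale Pegasus topology
emb = minorminer.find_embedding(source.edges, target.edges, random_seed=0)
print("embeddable" if emb else "not embedded; decompose or use hybrid solvers")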

6. Empirical Performance and Application Domains

FMQA demonstrates empirically robust sample efficiency and solution quality across diverse application domains:

  • Binary combinatorial optimization: FMQA consistently achieves a 3–10× reduction in black-box calls compared to random or genetic algorithms for physics-inspired design tasks, e.g., radiative cooling metamaterials (24–48 bits) and vehicle body multiobjective optimization (up to $\sim 1000$ bits) (Tamura et al., 24 Jul 2025).
  • Constrained integer optimization: FMQA effectively encodes both variable cardinality and one-hot constraints for materials structure search, laser design, and traffic signaling, demonstrating accelerated convergence over evolutionary and classical surrogate approaches (Tamura et al., 24 Jul 2025, Matsumori et al., 2022).
  • Continuous/Latent space design: FMQA integrates with binary VAEs to compress images, molecules, or strings into binary latent vectors, dramatically reducing search complexity for topology, chemical, and high-dimensional graph design (Wilson et al., 2021).
  • Materials science and structure prediction: FMQA enables efficient crystal structure prediction, with typical ground-state sampling requiring only 7–18 queries per run versus 100–1000 for direct (vanilla) annealing (Couzinie et al., 7 Aug 2024). FMQA recovers local-minimum ordinal relationships with high fidelity (Kendall's $\tau$ approaching unity for simple potentials).

Empirical results highlight FMQA's scalability and sample efficiency, with quantum annealers or hybrid samplers providing superior wall-clock times when QUBO sizes fit within hardware embedding limits.

7. Theoretical Analysis, Implications, and Extensions

Random matrix theory enables explicit prediction of effective FM rank requirements for a desired approximation error, facilitating principled choice of FM complexity and providing guidelines for initializing surrogates close to the true Hamiltonian (Seki et al., 16 Oct 2024). Smoothing regularization and systematic penalty scaling mitigate pathological QUBO landscapes and address the curse of dimensionality in binary encoding schemes (Endo et al., 5 Jul 2024).

A plausible implication is that as Ising hardware and quantum annealers grow in connectivity and precision, FMQA will provide a general-purpose, high-throughput strategy for black-box combinatorial and mixed-integer optimization, reducing the role of expensive hand-engineered heuristics and enabling integration of complex domain-specific constraints via straightforward quadratic penalty embedding.

References

  • (Tamura et al., 24 Jul 2025) Black-box optimization using factorization and Ising machines
  • (Seki et al., 16 Oct 2024) Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model
  • (Endo et al., 5 Jul 2024) Function Smoothing Regularization for Precision Factorization Machine Annealing in Continuous Variable Optimization Problems
  • (Couzinie et al., 7 Aug 2024) Machine learning supported annealing for prediction of grand canonical crystal structures
  • (Matsumori et al., 2022) Application of QUBO solver using black-box optimization to structural design for resonance avoidance
  • (Wilson et al., 2021) Machine Learning Framework for Quantum Sampling of Highly-Constrained, Continuous Optimization Problems
