Factorization-Machine QUBO Reductions
- FMQA is a computational framework that maps factorization machine surrogates to QUBO formulations, enabling efficient optimization via quantum and classical annealing.
- The methodology integrates precise algebraic QUBO mapping, systematic constraint handling, and iterative surrogate refinement to accelerate black-box combinatorial optimization.
- Empirical results in materials science and engineering demonstrate FMQA’s scalability, sample efficiency, and reduced computational overhead compared to classical approaches.
Factorization-Machine–Assisted QUBO Reductions (FMQA) is a computational paradigm that integrates factorization machine surrogates with quantum or classical Ising machines to accelerate black-box combinatorial optimization. FMQA enables the translation of high-dimensional, expensive objective functions—commonly arising in materials design, chemical discovery, and engineering topology optimization—into efficient Quadratic Unconstrained Binary Optimization (QUBO) problems that are tractable for annealing-based hardware and large-scale solvers. The method’s core lies in the precise algebraic mapping of factorization machine surrogates into QUBO form, systematic handling of constraints, and iterative surrogate refinement through black-box sampling and retraining.
1. Mathematical Foundation: FM Surrogates and QUBO Mapping
The FMQA framework relies on training a second-order factorization machine on a set of binary-encoded candidate solutions with their corresponding black-box objective values. Given binary variables $x_i \in \{0, 1\}$, $i = 1, \dots, N$, the FM is defined as

$$\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{N} w_i x_i + \sum_{i < j} \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j,$$

where $w_0$ is a global bias, the $w_i$ are linear coefficients, and the $\mathbf{v}_i \in \mathbb{R}^k$ are latent factor vectors, one per variable. The pairwise interaction term captures second-order feature dependencies via a low-rank approximation, enabling expressive modeling of combinatorial landscapes with only $O(Nk)$ parameters.
The surrogate prediction is directly mapped into QUBO form by setting

$$Q_{ii} = w_i, \qquad Q_{ij} = \langle \mathbf{v}_i, \mathbf{v}_j \rangle \quad (i < j),$$

so that $\hat{y}(\mathbf{x}) = w_0 + \mathbf{x}^\top Q \mathbf{x}$ on binary inputs (using $x_i^2 = x_i$). This correspondence ensures exact translation of the FM hypothesis into a binary quadratic objective, preserving both linear and quadratic interactions. Constraints such as one-hot, cardinality, or higher-order conditions are integrated via additional penalty terms (e.g., $\lambda \left( \sum_i x_i - K \right)^2$), which are absorbed into the diagonal and off-diagonal elements of $Q$ (Tamura et al., 24 Jul 2025, Wilson et al., 2021, Matsumori et al., 2022, Couzinie et al., 7 Aug 2024).
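A minimal NumPy sketch of this mapping (function names and the upper-triangular layout are illustrative choices, not taken from the cited papers):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order FM prediction for a binary vector x of length N."""
    G = V @ V.T                          # Gram matrix: G[i, j] = <v_i, v_j>
    return w0 + w @ x + x @ np.triu(G, k=1) @ x

def fm_to_qubo(w, V):
    """Exact QUBO mapping: Q_ii = w_i and Q_ij = <v_i, v_j> for i < j;
    the bias w0 is a constant offset and is dropped from Q."""
    Q = np.triu(V @ V.T, k=1)            # pairwise couplings above the diagonal
    Q[np.diag_indices(len(w))] = w       # linear terms on the diagonal
    return Q

# On binary inputs (x_i^2 = x_i), fm_predict(x, w0, w, V) equals
# w0 + x @ fm_to_qubo(w, V) @ x, which is the exactness claim above.
```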
2. FMQA Optimization Workflow
The FMQA method consists of a sequential surrogate-guided optimization loop:
- Data Acquisition: Begin with an initial dataset $D = \{ (\mathbf{x}^{(m)}, y^{(m)}) \}_{m=1}^{M_0}$, where $y^{(m)} = f(\mathbf{x}^{(m)})$ is obtained from black-box evaluations.
- FM Surrogate Training: Fit the FM by minimizing the squared error $\sum_m \left( \hat{y}(\mathbf{x}^{(m)}) - y^{(m)} \right)^2$, optionally with regularization.
- QUBO Construction: Extract $Q_{ii}$ and $Q_{ij}$ as above; add constraint penalties if required.
- Annealing Query: Minimize $\mathbf{x}^\top Q \mathbf{x}$ using a QUBO solver—quantum annealer, digital annealer, or simulated annealing device—to obtain a promising candidate $\mathbf{x}_{\mathrm{new}}$.
- Evaluation and Augmentation: Evaluate $y_{\mathrm{new}} = f(\mathbf{x}_{\mathrm{new}})$ and add $(\mathbf{x}_{\mathrm{new}}, y_{\mathrm{new}})$ to $D$.
- Iteration: Repeat steps 2–5 for a fixed number of rounds or until a convergence criterion, such as stagnation or an objective-improvement threshold, is met.
A prototypical sketch of the black-box optimization loop (with `fit_fm`, `build_qubo`, `qubo_solve`, and `random_bits` as assumed helpers):

```python
import numpy as np

def fmqa(f, N, M0, T):
    # Initialize dataset D with M0 random black-box evaluations.
    D = [(x, f(x)) for x in (random_bits(N) for _ in range(M0))]
    for t in range(T):
        w0, w, V = fit_fm(D)          # fit FM surrogate (w0, w, V) on D
        Q = build_qubo(w, V)          # Q_ii = w_i, Q_ij = <v_i, v_j>
        x_new = qubo_solve(Q)         # annealer / SA minimizes the surrogate
        if any(np.array_equal(x_new, x) for x, _ in D):
            x_new = random_bits(N)    # re-sample if candidate is a duplicate
        D.append((x_new, f(x_new)))   # evaluate black box and augment D
    return min(D, key=lambda pair: pair[1])[0]   # best x found (minimization)
```
This iterative loop provides a surrogate-accelerated alternative to classical, sample-inefficient approaches, using the annealer as a dedicated optimizer for the surrogate’s quadratic surface.
3. Constraint Handling and Penalty Formulation
FMQA provides a direct algebraic recipe for encoding a variety of combinatorial and structural constraints into the QUBO matrix. Crucial patterns include:
- Fixed-Cardinality (choose-$K$): $P_{\mathrm{card}} = \lambda \left( \sum_i x_i - K \right)^2$.
Expands (using $x_i^2 = x_i$) to linear and quadratic penalties absorbed into $Q_{ii}$ and $Q_{ij}$, strongly discouraging solutions outside cardinality $K$ (Matsumori et al., 2022, Tamura et al., 24 Jul 2025).
- One-Hot/Mutual Exclusion: $P_{\mathrm{one\text{-}hot}} = \lambda \left( \sum_{i \in S} x_i - 1 \right)^2$.
Ensures “exactly one” selection within the set $S$ by penalizing any non-compliant binary pattern (Endo et al., 5 Jul 2024).
- Structural/Domain-Specific: Additional graph constraints, adjacency terms, or side information may be quadratized and incorporated in the same manner, enabling FMQA to adapt to domain-specific restrictions without introducing auxiliary variables or exponential constraint-logic terms.
Penalty strengths $\lambda$ are chosen large compared to the typical absolute value of the entries of $Q$ to ensure feasibility, particularly when using hardware annealers with limited precision (Endo et al., 5 Jul 2024, Tamura et al., 24 Jul 2025).
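As an illustration, a sketch of folding the choose-$K$ penalty into an upper-triangular QUBO matrix (the function name and matrix layout are assumptions of this sketch):

```python
import numpy as np

def add_cardinality_penalty(Q, K, lam):
    """Fold lam * (sum_i x_i - K)**2 into an upper-triangular QUBO matrix Q.
    Expanding with x_i**2 = x_i gives lam*(1 - 2K) on each diagonal entry
    and 2*lam on each off-diagonal pair; the constant lam*K**2 is dropped."""
    N = Q.shape[0]
    Qp = Q.copy()
    Qp[np.diag_indices(N)] += lam * (1 - 2 * K)   # linear penalty terms
    Qp[np.triu_indices(N, k=1)] += 2 * lam        # pairwise penalty terms
    return Qp
```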
4. FM Initialization, Function Smoothing, and Numerical Stability
Proper initialization of FM parameters critically affects optimization convergence, especially when using warm starts or when the initial quadratic surrogate should closely match a known approximate Hamiltonian. Low-rank initialization via eigendecomposition of a reference interaction matrix and projection onto a rank-$k$ subspace provides a theoretically justified and empirically robust starting point. This procedure minimizes the Frobenius-norm error between the true and FM-implied coupling matrices and was systematically analyzed using random matrix theory to predict the effective rank required for target tolerances (Seki et al., 16 Oct 2024). Warm starts reduce the number of sample–train–sample cycles required to reach optimality.
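A hedged sketch of such an initialization, assuming the reference couplings arrive as a symmetric matrix `J_ref`. Because the FM-implied matrix $VV^\top$ is positive semidefinite, negative eigenmodes are clipped here; the full procedure of Seki et al. is more refined:

```python
import numpy as np

def init_fm_latents(J_ref, k):
    """Warm-start FM latents: pick V (N x k) so that <v_i, v_j> = (V V^T)_ij
    approximates J_ref in Frobenius norm via a rank-k eigendecomposition."""
    J = 0.5 * (J_ref + J_ref.T)              # symmetrize the reference matrix
    eigvals, eigvecs = np.linalg.eigh(J)     # eigenvalues in ascending order
    top = np.argsort(eigvals)[::-1][:k]      # keep the k largest eigenvalues
    lam = np.clip(eigvals[top], 0.0, None)   # project onto the PSD cone
    return eigvecs[:, top] * np.sqrt(lam)    # rows are the latent vectors v_i
```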
Function-surface “noise”—random parameter drift in regions never sampled—can lead to pathologically rough QUBO landscapes and degraded annealing performance, particularly with high-dimensional discrete binary encodings of continuous variables. A graph-Laplacian function-smoothing regularization,

$$R_{\mathrm{smooth}} = \gamma \sum_{(i, j) \in \mathcal{E}} \left[ (w_i - w_j)^2 + \lVert \mathbf{v}_i - \mathbf{v}_j \rVert^2 \right],$$

where $\mathcal{E}$ denotes pairs of adjacent bits in encoding blocks, is added to the FM loss to enforce local smoothness and propagate informative gradients to all model parameters. This regularization restores smooth descent and substantially accelerates convergence, effectively halving the number of expensive black-box evaluations required to reach near-optimal solutions in practical test cases (Endo et al., 5 Jul 2024).
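A minimal sketch of this penalty under the parameter-Laplacian form written above (the edge set and uniform weighting are assumptions of this sketch; see Endo et al. for the exact formulation):

```python
import numpy as np

def smoothing_penalty(w, V, edges, gamma):
    """Graph-Laplacian smoothing over adjacent bits (i, j) in an encoding
    block: penalizes differences in both linear and latent FM parameters."""
    reg = sum((w[i] - w[j]) ** 2 + np.sum((V[i] - V[j]) ** 2)
              for i, j in edges)
    return gamma * reg   # added to the FM training loss

# Example edge set for an 8-bit block encoding one continuous variable,
# where neighboring bits correspond to adjacent discretized values.
edges = [(i, i + 1) for i in range(7)]
```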
5. Computational Complexity and Hardware Scalability
FM training scales as $O(MNk)$ per epoch, where $M$ is the sample count, $N$ is the number of binary variables, and $k$ is the FM rank. Quadratic matrix formation for QUBO mapping requires $O(N^2 k)$ operations. Hardware QUBO solvers (quantum annealers, digital annealers) provide (nearly) constant wall-clock solution times for fixed $N$, while simulated annealing and classical heuristics scale mildly with $N$. This yields an end-to-end pipeline in which the computational bottleneck is exported from the CPU (classical surrogate search) to specialized hardware. Compared to Bayesian optimization with Gaussian processes ($O(M^3)$ training, NP-hard acquisition maximization), FMQA achieves both linear-complexity retraining and hardware acceleration for QUBO minimization (Tamura et al., 24 Jul 2025, Wilson et al., 2021).
The following table summarizes the complexity scaling:

| Step | Complexity |
|---|---|
| FM Training | $O(MNk)$ per epoch |
| QUBO Matrix Assembly | $O(N^2 k)$ |
| Annealer QUBO Solve | $\approx$ constant for fixed $N$ (annealer); mild scaling in $N$ (sim. anneal) |
Embedding limitations on current quantum hardware (e.g., D-Wave’s 180-bit Pegasus clique) cap the practical problem size $N$; hybrid and decomposition methods extend applicability to larger problems (Wilson et al., 2021).
6. Empirical Performance and Application Domains
FMQA demonstrates empirically robust sample efficiency and solution quality across diverse application domains:
- Binary combinatorial optimization: FMQA consistently achieves a 3–10× reduction in black-box calls compared to random or genetic algorithms for physics-inspired design tasks, e.g., radiative cooling metamaterials (24–48 bits) and vehicle-body multiobjective optimization (Tamura et al., 24 Jul 2025).
- Constrained integer optimization: FMQA effectively encodes both variable cardinality and one-hot constraints for materials structure search, laser design, and traffic signaling, demonstrating accelerated convergence over evolutionary and classical surrogate approaches (Tamura et al., 24 Jul 2025, Matsumori et al., 2022).
- Continuous/Latent space design: FMQA integrates with binary VAEs to compress images, molecules, or strings into binary latent vectors, dramatically reducing search complexity for topology, chemical, and high-dimensional graph design (Wilson et al., 2021).
- Materials science and structure prediction: FMQA enables efficient crystal structure prediction, with typical ground-state sampling requiring only $7$–$18$ queries per run versus $100$–$1000$ for direct (vanilla) annealing (Couzinie et al., 7 Aug 2024). FMQA recovers local-minimum ordinal relationships with high fidelity (Kendall’s $\tau$ approaching unity for simple potentials).
Empirical results highlight FMQA’s scalability and sample efficiency, with quantum annealers or hybrid samplers providing superior wall-clock times when QUBO sizes match hardware embeddability.
7. Theoretical Analysis, Implications, and Extensions
Random matrix theory enables explicit prediction of effective FM rank requirements for a desired approximation error, facilitating principled choice of FM complexity and providing guidelines for initializing surrogates close to the true Hamiltonian (Seki et al., 16 Oct 2024). Smoothing regularization and systematic penalty scaling mitigate pathological QUBO landscapes and address the curse of dimensionality in binary encoding schemes (Endo et al., 5 Jul 2024).
A plausible implication is that as Ising hardware and quantum annealers grow in connectivity and precision, FMQA will provide a general-purpose, high-throughput strategy for black-box combinatorial and mixed-integer optimization, reducing the role of expensive hand-engineered heuristics and enabling integration of complex domain-specific constraints via straightforward quadratic penalty embedding.
References
- (Tamura et al., 24 Jul 2025) Black-box optimization using factorization and Ising machines
- (Seki et al., 16 Oct 2024) Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model
- (Endo et al., 5 Jul 2024) Function Smoothing Regularization for Precision Factorization Machine Annealing in Continuous Variable Optimization Problems
- (Couzinie et al., 7 Aug 2024) Machine learning supported annealing for prediction of grand canonical crystal structures
- (Matsumori et al., 2022) Application of QUBO solver using black-box optimization to structural design for resonance avoidance
- (Wilson et al., 2021) Machine Learning Framework for Quantum Sampling of Highly-Constrained, Continuous Optimization Problems