FALCON: Few-step Likelihoods in Continuous Flows
- The paper introduces a hybrid few-step flow map and training objective that accurately computes likelihoods with minimal ODE steps.
- It achieves up to two orders of magnitude speedup over traditional CNFs while preserving the exact change-of-variables formulation.
- Empirical evaluations on molecular systems confirm that FALCON maintains high sample quality and efficiency compared to existing methods.
Few-step Accurate Likelihoods for Continuous Flows (FALCON) is a methodology designed to address the computational inefficiencies inherent to likelihood evaluation and sampling in continuous normalizing flows (CNFs), particularly in the context of molecular Boltzmann sampling. FALCON introduces a hybrid training objective and a few-step flow map, enabling accurate likelihood computation with a minimal number of network evaluations, while maintaining the invertibility required for exact change-of-variables formulations. This approach yields substantial acceleration—up to two orders of magnitude in inference speed—over prior CNF architectures without sacrificing empirical sample quality or likelihood accuracy (Rehman et al., 10 Dec 2025).
1. Continuous Normalizing Flows and Exact Likelihoods
Continuous normalizing flows (CNFs) model transformations between probability distributions via the solution of an ordinary differential equation (ODE) parameterized by a neural vector field $v_\theta(x, t)$. Given an initial condition $x_0 \sim p_0$, the ODE

$$\frac{dx_t}{dt} = v_\theta(x_t, t)$$

drives $x_t$ from the base distribution $p_0$ to a target distribution $p_1$. The evolution of the log-density follows

$$\frac{d}{dt}\log p_t(x_t) = -\operatorname{Tr}\!\left(\frac{\partial v_\theta}{\partial x}(x_t, t)\right),$$

integrating to provide the exact likelihood:

$$\log p_1(x_1) = \log p_0(x_0) - \int_0^1 \operatorname{Tr}\!\left(\frac{\partial v_\theta}{\partial x}(x_t, t)\right) dt.$$

In practice, evaluating the likelihood requires discretizing the ODE and estimating the Jacobian trace at high accuracy. State-of-the-art molecular Boltzmann generators demand tight integration tolerances, yielding hundreds to thousands of ODE steps and correspondingly expensive network function evaluations per sample (Rehman et al., 10 Dec 2025).
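To make the cost concrete, here is a minimal sketch (not the paper's implementation) of exact CNF likelihood evaluation by Euler-discretizing the instantaneous change-of-variables formula; the `velocity` field, step count, and base density are illustrative placeholders.

```python
import torch

def velocity(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Placeholder for the learned vector field v_theta(x, t); a real model is a neural net.
    return -x * t

def cnf_log_likelihood(x1: torch.Tensor, log_p0, n_steps: int = 500) -> torch.Tensor:
    """Integrate the ODE backward from t=1 to t=0, accumulating the Jacobian-trace term."""
    x, trace_integral = x1.clone(), torch.zeros(())
    dt = 1.0 / n_steps
    for k in reversed(range(n_steps)):
        t = torch.tensor((k + 0.5) * dt)  # midpoint of the k-th time interval
        # Exact Jacobian trace: the expensive part (O(d) autodiff passes per step).
        jac = torch.autograd.functional.jacobian(lambda y: velocity(y, t), x)
        trace_integral = trace_integral + dt * torch.einsum("ii->", jac)
        x = x - dt * velocity(x, t)  # Euler step backward in time
    # log p1(x1) = log p0(x0) - \int_0^1 Tr(dv/dx) dt
    return log_p0(x) - trace_integral

# Example: standard-normal base density in 2-D.
log_p0 = lambda z: -0.5 * (z @ z) - 0.5 * z.numel() * torch.log(torch.tensor(2 * torch.pi))
print(cnf_log_likelihood(torch.tensor([0.3, -0.7]), log_p0))
```

Even this toy version makes the bottleneck visible: hundreds of steps, each requiring a full Jacobian (or a stochastic trace estimate) of the network.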
2. Flow-Matching Training and Inference Bottlenecks
The standard flow-matching objective, as formalized by [Lipman et al. 2022], samples $x_0 \sim p_0$, $x_1 \sim p_1$, and $t \sim \mathcal{U}[0, 1]$ to create linear interpolants $x_t = (1 - t)\,x_0 + t\,x_1$. The model vector field $v_\theta$ is trained by minimizing

$$\mathcal{L}_{\mathrm{FM}} = \mathbb{E}_{t,\,x_0,\,x_1}\left[\left\lVert v_\theta(x_t, t) - (x_1 - x_0) \right\rVert^2\right].$$

Although this objective sidesteps maximum-likelihood estimation during training, inference remains bottlenecked by the requirement of fine-grained ODE integration to evaluate likelihoods and log-density corrections, since the standard likelihood computation is still path-dependent.
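For reference, a minimal sketch of this objective with a hypothetical toy MLP standing in for the paper's model:

```python
import torch
import torch.nn as nn

dim = 2
# Toy stand-in for v_theta(x, t): input is [x, t], output is a velocity in R^dim.
v_theta = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

def flow_matching_loss(x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    """L_FM = E || v_theta(x_t, t) - (x1 - x0) ||^2 on x_t = (1 - t) x0 + t x1."""
    t = torch.rand(x0.shape[0], 1)           # t ~ U[0, 1]
    xt = (1 - t) * x0 + t * x1               # linear interpolant
    target = x1 - x0                         # conditional straight-line velocity
    pred = v_theta(torch.cat([xt, t], dim=-1))
    return (pred - target).pow(2).sum(-1).mean()

# One gradient step on toy data (base: Gaussian; "target": shifted Gaussian).
x0, x1 = torch.randn(128, dim), torch.randn(128, dim) + 3.0
loss = flow_matching_loss(x0, x1)
loss.backward()
print(float(loss))
```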
3. FALCON: Hybrid Few-Step Flow Map and Training Objective
FALCON introduces a discrete-time, few-step "flow map"

$$x_t = F_\theta(x_s, s, t) = x_s + (t - s)\, u_\theta(x_s, s, t),$$

where $u_\theta$ is trained to approximate the integrated (time-averaged) vector field of the underlying ODE over $[s, t]$. This construction is accompanied by a hybrid loss

$$\mathcal{L} = \mathcal{L}_{\mathrm{FM}} + \lambda_{\mathrm{avg}}\,\mathcal{L}_{\mathrm{avg}} + \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cyc}},$$

with
- $\mathcal{L}_{\mathrm{FM}}$: Standard flow-matching regression.
- $\mathcal{L}_{\mathrm{avg}}$: Average-velocity matching enforcing $u_\theta$ as an accurate mean flow (MeanFlow-style, [Geng et al. 2025]) relative to the true time-averaged ODE velocity.
- $\mathcal{L}_{\mathrm{cyc}}$: Cycle-consistency regularizer promoting invertibility by minimizing the expectation $\mathbb{E}\left[\lVert F_\theta(F_\theta(x_s, s, t), t, s) - x_s \rVert^2\right]$.
Hyperparameters $\lambda_{\mathrm{avg}}$ and $\lambda_{\mathrm{cyc}}$ control the balance between generation accuracy and invertibility.
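A schematic sketch of such a hybrid objective follows, continuing the toy setup above. The flow-map parameterization, loss weights, and the use of the conditional straight-line velocity as the average-velocity target are illustrative simplifications: the paper's $\mathcal{L}_{\mathrm{avg}}$ is MeanFlow-style and defined against the marginal time-averaged velocity.

```python
import torch
import torch.nn as nn

dim = 2
# Toy u_theta(x, s, t): input is [x, s, t], output is an average velocity.
u_theta = nn.Sequential(nn.Linear(dim + 2, 64), nn.SiLU(), nn.Linear(64, dim))

def flow_map(x, s, t):
    """F_theta(x, s, t) = x + (t - s) * u_theta(x, s, t)."""
    return x + (t - s) * u_theta(torch.cat([x, s, t], dim=-1))

def hybrid_loss(x0, x1, lam_avg=1.0, lam_cyc=0.1):
    n = x0.shape[0]
    s, t = torch.rand(n, 1), torch.rand(n, 1)
    s, t = torch.minimum(s, t), torch.maximum(s, t)   # ensure s <= t
    xs = (1 - s) * x0 + s * x1
    # (1) flow matching at the degenerate interval s == t, where u reduces to v
    l_fm = (u_theta(torch.cat([xs, s, s], -1)) - (x1 - x0)).pow(2).sum(-1).mean()
    # (2) average-velocity matching: on the linear conditional path the
    #     time-averaged conditional velocity over [s, t] is again (x1 - x0)
    l_avg = (u_theta(torch.cat([xs, s, t], -1)) - (x1 - x0)).pow(2).sum(-1).mean()
    # (3) cycle consistency: forward then backward should reconstruct xs
    l_cyc = (flow_map(flow_map(xs, s, t), t, s) - xs).pow(2).sum(-1).mean()
    return l_fm + lam_avg * l_avg + lam_cyc * l_cyc

x0, x1 = torch.randn(64, dim), torch.randn(64, dim) + 3.0
hybrid_loss(x0, x1).backward()
```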
4. Few-Step Sampling and Likelihood Computation
Sampling and likelihood calculation within FALCON proceed as a sequence of updates along a user-defined schedule $0 = t_0 < t_1 < \cdots < t_K = 1$, typically with $K = 4$–$16$. At each step:
- The particle state is updated via the flow map: $x_{t_{k+1}} = F_\theta(x_{t_k}, t_k, t_{k+1})$.
- The log-density is tracked by the discrete change of variables
$$\log p_{t_{k+1}}(x_{t_{k+1}}) = \log p_{t_k}(x_{t_k}) - \log\left|\det \frac{\partial F_\theta}{\partial x}(x_{t_k}, t_k, t_{k+1})\right|.$$
This procedure costs $O(K)$ network evaluations, with $K$ as small as $4$ yielding likelihoods accurate enough for self-normalized importance sampling (SNIS). Unlike CNFs, the expensive continuous trajectory integration and large numbers of function evaluations are circumvented (Rehman et al., 10 Dec 2025).
| Step | Operation | Notes |
|---|---|---|
| (i) | $x_{t_0} \sim p_0$, initialize $\log p_{t_0}(x_{t_0})$ | Initial sample |
| (ii) | $x_{t_{k+1}} = F_\theta(x_{t_k}, t_k, t_{k+1})$, $\log p \leftarrow \log p - \log\lvert\det \partial F_\theta / \partial x\rvert$ | Iterative flow update and log-det correction |
| (iii) | Output $x_{t_K}$, $\log p_1(x_{t_K})$ | Final sample and likelihood |
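A minimal sketch of this loop, reusing the hypothetical `flow_map`, `dim`, and toy `log_p0` from the sketches above; exact Jacobian log-determinants are affordable here precisely because only $K$ network evaluations are needed per sample.

```python
import torch

def sample_with_likelihood(x0: torch.Tensor, log_p0, schedule):
    """K flow-map steps with the discrete change-of-variables correction."""
    x, logp = x0, log_p0(x0)
    for s, t in zip(schedule[:-1], schedule[1:]):
        # Single-sample wrapper around the batched flow_map (s, t bound early).
        step = lambda y, s=s, t=t: flow_map(
            y.unsqueeze(0), torch.tensor([[s]]), torch.tensor([[t]])).squeeze(0)
        jac = torch.autograd.functional.jacobian(step, x)   # (d, d) Jacobian
        _, logabsdet = torch.linalg.slogdet(jac)
        x = step(x)
        logp = logp - logabsdet  # log p_{t_{k+1}} = log p_{t_k} - log|det J|
    return x, logp

K = 4
schedule = [k / K for k in range(K + 1)]  # uniform 0 = t_0 < ... < t_K = 1
x, logp = sample_with_likelihood(torch.randn(dim), log_p0, schedule)
print(float(logp))
```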
5. Theoretical Guarantees
FALCON is accompanied by two central theoretical propositions [(Rehman et al., 10 Dec 2025), Appendix A.1–A.2]:
- Proposition 1 (Average-Velocity Optimality): If $u_\theta$ exactly minimizes the mean-velocity loss $\mathcal{L}_{\mathrm{avg}}$, then $F_\theta$ replicates the exact ODE flow map from time $s$ to time $t$ and is globally invertible. The discrete change-of-variables formula (log-density update) then holds exactly.
- Proposition 2 (Invertibility Regularizer): Minimizing $\mathcal{L}_{\mathrm{cyc}}$ alone is sufficient to guarantee that $F_\theta$ is invertible almost everywhere, ensuring the validity of the log-determinant correction in discrete likelihood calculations.
No explicit analytic error bounds as a function of the number of steps $K$ are given, but empirical results indicate that increasing $K$ rapidly reduces discretization error, with good accuracy already achieved at small $K$.
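Proposition 2 also suggests a simple empirical diagnostic: measure the round-trip reconstruction error of the trained flow map, as in this sketch (reusing the illustrative `flow_map` and `dim` from above; the specific time pair is arbitrary).

```python
import torch

x = torch.randn(256, dim)
s = torch.zeros(256, 1)   # map from t = 0 ...
t = torch.ones(256, 1)    # ... to t = 1 and back
with torch.no_grad():
    recon = flow_map(flow_map(x, s, t), t, s)
    err = (recon - x).norm(dim=-1).mean()
print(f"mean round-trip reconstruction error: {err:.2e}")
```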
6. Empirical Evaluation on Molecular Boltzmann Sampling
FALCON is evaluated on a suite of molecular systems under implicit solvent Amber force fields:
- Alanine dipeptide (ALDP)
- Tri-alanine (AL3)
- Alanine tetrapeptide (AL4)
- Hexa-alanine (AL6)
Baselines include discrete normalizing flows (SE(3)-EACF, RegFlow, SBG) and continuous flows (ECNF, ECNF++, BoltzNCE). Performance metrics are:
- Effective Sample Size (ESS) under self-normalized importance sampling (a minimal estimator is sketched after this list)
- $2$-Wasserstein distance on energy histograms (E-$\mathcal{W}_2$)
- Torus $2$-Wasserstein distance on dihedral angles (T-$\mathcal{W}_2$)
- Wall-clock inference time and network function evaluations (NFE)
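For reference, a minimal sketch of the normalized ESS estimator under self-normalized importance sampling, computed from target log-densities and model log-likelihoods (the quantity FALCON makes cheap); names and data are illustrative.

```python
import torch

def effective_sample_size(log_target: torch.Tensor, log_q: torch.Tensor) -> float:
    """Normalized ESS in (0, 1] from self-normalized importance weights.

    log_target: unnormalized target log-density, e.g. -beta * U(x) for a
    Boltzmann target; log_q: model log-likelihood of each sample.
    """
    log_w = log_target - log_q                            # unnormalized log-weights
    log_w = log_w - torch.logsumexp(log_w, dim=0)         # self-normalize stably
    ess = torch.exp(-torch.logsumexp(2 * log_w, dim=0))   # ESS = 1 / sum(w_i^2)
    return float(ess / log_w.numel())                     # normalize by sample count

# Example with synthetic log-densities for 1000 samples.
log_q = torch.randn(1000)
log_target = log_q + 0.1 * torch.randn(1000)
print(effective_sample_size(log_target, log_q))
```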
Results demonstrate that, for ALDP, FALCON achieves an ESS of $0.225$ (comparable to ECNF++'s $0.275$), with improved Wasserstein distances (FALCON: $0.402$ vs. ECNF++: $0.914$; SBG: $0.873$). For larger systems (AL3/AL4/AL6), FALCON attains higher ESS than ECNF++ together with lower Wasserstein distances. Inference time improves by up to two orders of magnitude, with FALCON requiring only $4$–$16$ steps as opposed to $200$–$300$ for Dormand–Prince CNF integration (Rehman et al., 10 Dec 2025).
7. Practical Limitations and Future Prospects
FALCON's discretization error is calibrated empirically; a formal characterization of error versus step count is not provided. The approach does not yet reach the one-step limit, with best results at $K = 4$–$8$. While invertibility is satisfied empirically (low reconstruction error under the cycle-consistency check), it is not a strict constraint during training and relies on convergence of the cycle-consistency loss. Future research directions highlighted include:
- Structured-Jacobian architectures to further reduce the cost of Jacobian determinant evaluation,
- Application to Bayesian inference, robotics, and complex posteriors,
- Theoretical quantification of few-step discretization error.
FALCON unifies simulation-free flow matching with a fast, invertible few-step mapping, providing efficient and accurate likelihoods for importance sampling and likelihood-based downstream tasks in domains where CNF inference costs were previously prohibitive (Rehman et al., 10 Dec 2025).