Krotov’s Method for Quantum Control
- Krotov’s Method is an optimal control algorithm for quantum systems that guarantees monotonic convergence using sequential updates with tailored control fields.
- It employs iterative forward and backward propagations to locally update control parameters while effectively handling constraints and nonlinear dynamics.
- The method is widely applied in state transfer, quantum gate implementation, and robust pulse design, demonstrating superior convergence and efficiency.
Krotov’s method is a class of sequential update algorithms for optimal control of dynamical systems, most prominently quantum systems. Originating in deterministic control theory and adapted for quantum control in the late 20th century, Krotov's method is distinguished by its guarantee of monotonic convergence, adaptability to various constraints, and capacity for large update steps without recourse to explicit line search. Its primary application domain is the design of external fields that steer a quantum system's evolution towards a specified target observable, state, or unitary transformation.
1. Mathematical Formulation and Problem Class
Krotov’s method addresses the optimal control problem for dynamical systems described by a differential equation of the form
$$ i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat H\big(\{\varepsilon_k(t)\}\big)\,|\psi(t)\rangle, $$
where the Hamiltonian depends on a set of control fields $\{\varepsilon_k(t)\}$, often entering linearly or bilinearly:
$$ \hat H(t) = \hat H_0 + \sum_k \varepsilon_k(t)\,\hat H_k. $$
The goal is to maximize a functional $J$ (the “fidelity”), such as:
- State transfer: $J_T = \big|\langle \psi_{\mathrm{tar}} | \psi(T)\rangle\big|^2$,
- Observable maximization: $J_T = \langle \psi(T)|\hat O|\psi(T)\rangle$,
- Gate fidelity: $J_T = \frac{1}{N^2}\big|\mathrm{Tr}\big[\hat U_{\mathrm{tar}}^{\dagger}\hat U(T)\big]\big|^2$, with optional running-cost penalization $\lambda_a \int_0^T \varepsilon^2(t)\,dt$,
although for realistic discretizations this is usually unnecessary (Schirmer et al., 2011, Morzhin et al., 2018).
The general structure of the cost functional is
$$ J\big[\{\varepsilon_k\}\big] = J_T + \int_0^T g_a\big(\{\varepsilon_k(t)\}\big)\,dt + \int_0^T g_b\big(\psi(t)\big)\,dt, $$
with the terminal cost $J_T$ encoding state or gate objectives, $g_a$ penalizing fluence, and $g_b$ imposing state or trajectory constraints (Morzhin et al., 2018).
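As a concrete illustration, here is a minimal sketch of evaluating such a functional for a state-transfer objective with a quadratic fluence penalty on a piecewise-constant control (the function name, penalty weight, and all values are illustrative, not taken from the cited papers):

```python
import numpy as np

def cost_functional(psi_T, psi_tar, eps, dt, lam=0.01):
    """J = J_T - lam * fluence, for a state-transfer objective with a
    quadratic running cost on the piecewise-constant control eps."""
    J_T = abs(np.vdot(psi_tar, psi_T)) ** 2          # terminal fidelity
    g_a = lam * np.sum(np.asarray(eps) ** 2) * dt    # fluence penalty
    return J_T - g_a

# Perfect transfer with a zero pulse evaluates to J = 1 exactly.
psi_tar = np.array([0.0, 1.0], dtype=complex)
J = cost_functional(psi_tar, psi_tar, np.zeros(10), dt=0.1)
```

Since the task is phrased as maximization, the running cost enters with a minus sign here; formulations that minimize a cost flip the signs accordingly.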
2. Algorithmic Structure and Update Rule
Krotov’s method employs an iterative, sequential update of the controls. Each iteration consists of:
- Propagating the current system state forward with the current control.
- Propagating a co-state (adjoint variable or Lagrange multiplier) backward from the target at $t = T$.
- Updating the control field locally in time to guarantee increase (or, for cost minimization, decrease) of the functional.
In the first-order formulation (linear improvement functional), the update at each time $t$ is given (assuming quadratic control cost) by
$$ \Delta\varepsilon(t) = \frac{S(t)}{\lambda}\,\mathrm{Im}\,\Big\langle \chi^{(k)}(t) \Big|\, \frac{\partial \hat H}{\partial \varepsilon} \,\Big|\, \psi^{(k+1)}(t) \Big\rangle, $$
where $|\psi^{(k+1)}(t)\rangle$ is the updated forward state, $\partial \hat H/\partial\varepsilon$ is the control Hamiltonian component, and $S(t)$ is a shape function supporting boundary constraints (Morzhin et al., 2018, Reich et al., 2010).
A key algorithmic feature is the use of the sequential update: each time slice is updated and the new state is immediately used for the next increment, rather than updating all controls in parallel as in, e.g., GRAPE (Schirmer et al., 2011).
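A minimal, self-contained sketch of one such sequential sweep, assuming a two-level state-transfer problem with $S(t) \equiv 1$ (all parameters and names are illustrative; this is not the implementation of the cited references):

```python
import numpy as np
from scipy.linalg import expm

# Two-level state transfer |0> -> |1> under H = H0 + eps(t)*H1.
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H0, H1 = 0.5 * sz, sx
psi0 = np.array([1.0, 0.0], dtype=complex)      # initial state
psi_tar = np.array([0.0, 1.0], dtype=complex)   # target state
T, M = 5.0, 250
dt = T / M
lam = 5.0                                       # step weight (S(t) = 1)

def propagate(eps):
    """Forward-propagate psi0, storing the state at every grid point."""
    states = [psi0]
    for e in eps:
        states.append(expm(-1j * (H0 + e * H1) * dt) @ states[-1])
    return states

def krotov_iteration(eps):
    """One sequential Krotov sweep; returns the updated pulse."""
    old = propagate(eps)
    # Co-state: chi(T) = <psi_tar|psi(T)> |psi_tar>, propagated backward
    # with the OLD controls.
    chi = [None] * (M + 1)
    chi[M] = np.vdot(psi_tar, old[M]) * psi_tar
    for j in range(M - 1, -1, -1):
        U = expm(-1j * (H0 + eps[j] * H1) * dt)
        chi[j] = U.conj().T @ chi[j + 1]
    # Sequential forward sweep: update slice j, then immediately use the
    # NEW state when updating slice j+1 (unlike GRAPE's parallel update).
    new_eps = eps.copy()
    psi = psi0
    for j in range(M):
        new_eps[j] = eps[j] + np.imag(np.vdot(chi[j], H1 @ psi)) / lam
        psi = expm(-1j * (H0 + new_eps[j] * H1) * dt) @ psi
    return new_eps

eps = 0.3 * np.ones(M)                          # nonzero guess pulse
fids = []
for _ in range(15):
    eps = krotov_iteration(eps)
    fids.append(abs(np.vdot(psi_tar, propagate(eps)[-1])) ** 2)
```

Note the guess pulse must be nonzero here: with a zero pulse the overlap with the target, and hence the co-state and the update, would vanish identically.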
Second-order extensions (linear-quadratic improvement) introduce an additional term, parameterized by a negative-definite matrix or scalar weight $\sigma(t)$, allowing monotonic convergence when the cost is non-convex, the Hamiltonian is nonlinear in the controls or state, or the functional is higher than quadratic in the state (Reich et al., 2010, Morzhin et al., 2018).
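Schematically, for a single control the second-order contribution weights the change in the state between iterations:
$$ \Delta\varepsilon(t) = \frac{S(t)}{\lambda}\,\mathrm{Im}\!\left[ \Big\langle \chi^{(k)}(t) \Big|\, \frac{\partial \hat H}{\partial \varepsilon} \,\Big|\, \psi^{(k+1)}(t) \Big\rangle + \frac{1}{2}\,\sigma(t)\, \Big\langle \Delta\psi(t) \Big|\, \frac{\partial \hat H}{\partial \varepsilon} \,\Big|\, \psi^{(k+1)}(t) \Big\rangle \right], $$
with $|\Delta\psi(t)\rangle = |\psi^{(k+1)}(t)\rangle - |\psi^{(k)}(t)\rangle$; for $\sigma(t) \equiv 0$ this reduces to the first-order update.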
3. Monotonicity, Regularization, and Constraint Handling
Monotonic convergence (guaranteed improvement per iteration) is established by construction: the updated control at each time locally maximizes the Pontryagin function (Hamiltonian) evaluated with the new state and previous adjoint variable. For functionals corresponding to convex objectives, linear system dynamics, and linear dependence on controls, monotonicity requires only first-order updates. In the presence of non-convexity, nonlinear equations of motion, or higher-order state dependence, an additional second-order term parameterized by $\sigma(t)$ is needed, with $\sigma(t)$ adjustable by an explicit analytic formula derived from Taylor expansions and bounds on the system derivatives (Reich et al., 2010, Morzhin et al., 2018).
Penalty (regularization) terms on the controls may be required in the continuum limit to ensure problem well-posedness, but for time-discretized controls at moderate resolutions, the functional itself provides sufficient quadratic dependence on each control variable, and penalty terms can degrade final solutions by distorting first-order optimality conditions. Extensive numerical evidence demonstrates monotonicity without explicit penalties in discrete settings (Schirmer et al., 2011).
Additional constraints:
- Spectral constraints are incorporated via non-local quadratic terms in the control; their introduction leads to a Fredholm equation for the control update, solvable by kernel methods or Fourier techniques.
- State-space constraints (e.g., forbidden projections or subspace avoidance) are encoded as negative-definite penalty terms in the state, entering the adjoint equation and preserving monotonicity provided the sign is chosen appropriately (Morzhin et al., 2018).
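An illustrative form of the resulting update condition for a non-local quadratic penalty with kernel $K(t, t')$ (schematic, not the exact expression of the cited works) is a Fredholm integral equation of the second kind for $\Delta\varepsilon(t)$:
$$ \Delta\varepsilon(t) + \int_0^T K(t,t')\,\Delta\varepsilon(t')\,dt' = \frac{S(t)}{\lambda}\,\mathrm{Im}\,\Big\langle \chi(t) \Big|\, \frac{\partial \hat H}{\partial \varepsilon} \,\Big|\, \psi(t) \Big\rangle, $$
which can be solved on the time grid by kernel inversion or, for convolution kernels $K(t - t')$, by Fourier techniques.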
4. Discretization Effects and Practical Algorithm Implementation
Time discretization directly affects both convergence properties and computational tractability:
- For moderate time slices $\Delta t$, the quadratic approximation of the objective suffices for monotonicity, allowing omission of penalty terms (Schirmer et al., 2011).
- Accurate evaluation of the gradient of the functional with respect to each control pixel is crucial; for large $\Delta t$, naive approximations can be substantially misaligned with the true gradient (overlaps as low as ~40%), while exact integrals or high-order expansions recover accuracy.
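The degradation of the naive slice gradient with growing $\Delta t$ can be checked numerically against a finite-difference reference (a sketch with illustrative parameters; `grad_error` is a hypothetical helper, not from the cited papers):

```python
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
H0, H1 = sz, sx
psi0 = np.array([1.0, 0.0], dtype=complex)
psi_tar = np.array([0.0, 1.0], dtype=complex)

def fidelity(eps, dt):
    """J_T = |<psi_tar|psi(T)>|^2 for a piecewise-constant pulse."""
    psi = psi0
    for e in eps:
        psi = expm(-1j * (H0 + e * H1) * dt) @ psi
    return abs(np.vdot(psi_tar, psi)) ** 2

def grad_error(dt, M, j=0, h=1e-5):
    """Relative error of the naive first-order slice gradient
    2*dt*Im<chi_j|H1|psi_j> against a finite-difference reference."""
    eps = 0.4 * np.ones(M)
    ep, em = eps.copy(), eps.copy()
    ep[j] += h
    em[j] -= h
    exact = (fidelity(ep, dt) - fidelity(em, dt)) / (2 * h)
    states = [psi0]
    for e in eps:
        states.append(expm(-1j * (H0 + e * H1) * dt) @ states[-1])
    chi = np.vdot(psi_tar, states[-1]) * psi_tar
    for k in range(M - 1, j - 1, -1):
        chi = expm(-1j * (H0 + eps[k] * H1) * dt).conj().T @ chi
    approx = 2 * dt * np.imag(np.vdot(chi, H1 @ states[j]))
    return abs(approx - exact) / max(abs(exact), 1e-12)

# The mismatch grows with the slice length dt.
e_small = grad_error(0.005, 200)
e_large = grad_error(0.5, 4)
```

The error stems from neglecting the commutator terms in the derivative of the time-slice propagator, which are small only when $\Delta t\,\|\hat H\|$ is small.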
- Several control update strategies are used:
- Greedy optimal step sizes per time slice, obtained from local quadratic fits to the functional.
- Overshoot and “bounded-drift” strategies for the step size mitigate the tendency of greedy updates to flatten gradients and slow long-run convergence (Schirmer et al., 2011).
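One way such a greedy step could be realized (an illustrative sketch, not the specific scheme of Schirmer et al.; `quadratic_fit_step` is a hypothetical helper) is to scale the proposed update by a factor $\alpha$, sample the functional at a few trial values, and take the maximizer of a parabolic fit:

```python
import numpy as np

def quadratic_fit_step(J_of_alpha, alphas=(0.0, 0.5, 1.0)):
    """Fit J(alpha) ~ a*alpha^2 + b*alpha + c through three trial step
    scalings and return the fitted maximizer, clipped to the trial range."""
    Js = [J_of_alpha(a) for a in alphas]
    a, b, _ = np.polyfit(alphas, Js, 2)
    if a >= 0:  # parabola opens upward: no interior maximum, take best trial
        return alphas[int(np.argmax(Js))]
    return float(np.clip(-b / (2 * a), min(alphas), max(alphas)))

# Toy functional peaking at alpha = 0.7: the fit recovers it exactly.
alpha_star = quadratic_fit_step(lambda a: 1.0 - (a - 0.7) ** 2)
```

Overshoot (or bounded-drift) variants would simply bias the returned $\alpha$ beyond (or below) the fitted maximizer.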
A typical piecewise-constant-control Krotov algorithm performs per-iteration work dominated by two propagations (forward and backward) and gradient evaluations, i.e., $O(M)$ matrix exponentials per iteration, at $O(N^3)$ each for dense $N \times N$ Hamiltonians over $M$ time slices (Schirmer et al., 2011).
5. Comparison with Other Quantum Control Algorithms
Krotov’s method is contrasted with alternative optimal control schemes:
| Method | Monotonicity | Update Style | Special Features |
|---|---|---|---|
| Krotov | Guaranteed | Sequential | Large steps, explicit constraints, no line search |
| GRAPE (Gradient Ascent Pulse Eng.) | Not general | Parallel/global | Fast near optimum, mature BFGS, needs line search |
| CRAB | No | Parametric/random | Small dimension, derivative-free, basis-dependent |
| Zhu–Rabitz | Yes | Explicit global | Simplified for linear/cost-quadratic cases |
| Maday–Turinici | Yes | Parametric blend | Interpolates Krotov/Zhu–Rabitz by tuning parameters |
Krotov’s sequential update structure enables rigorous monotonicity, effective constraint imposition, and efficient convergence for large changes in the control fields, whereas GRAPE relies on a global gradient and can be stalled by strong constraints or non-convexity (Morzhin et al., 2018, Goerz et al., 2019, Schirmer et al., 2011).
6. Applications and Numerical Performance
Krotov’s method has been applied to an array of quantum control tasks, including:
- State-to-state transfer of pure or mixed states.
- Quantum gate implementation for single- and multi-qubit operations (Hadamard, CNOT, QFT, etc.).
- Dissociation control in molecular dynamics, with demonstrated orders-of-magnitude faster convergence than steepest descent (e.g., >99% molecular dissociation in ~10 iterations vs hundreds for gradient methods) (Morzhin et al., 2018).
- Control of nonlinear Gross–Pitaevskii dynamics (e.g., Bose–Einstein condensate manipulation), showing favorable scaling over GRAPE in regimes with strong nonlinearity (Morzhin et al., 2018).
- Robust and noise-resistant pulse design, ensemble and open-system controls (Goerz et al., 2019).
Implemented toolchains such as the “krotov” Python package integrate with quantum simulation libraries (e.g., QuTiP), facilitate a variety of control landscapes, and enable direct application of both first- and second-order schemes, robust to non-convex and constrained cases (Goerz et al., 2019).
7. Convergence Analysis, Limitations, and Extensions
Formal convergence guarantees state that, under mild smoothness and boundedness, the sequence of functionals generated by Krotov’s method is monotonic and bounded, converging either to a critical point or diverging to unbounded controls (which is not observed in practical quantum scenarios) (Schirmer et al., 2011). Asymptotic convergence rates depend on step-size regulation: greedy maximization can surprisingly result in slow linear rates, while a slight under- or overshoot improves long-run convergence (Schirmer et al., 2011).
Second-order Krotov is essential for: nonlinear equations of motion (e.g., mean-field Gross–Pitaevskii), non-hermitian or non-unitary dynamics (open systems), time-dependent or non-convex targets, and high-order polynomial functionals. An explicit prescription is provided for the required scalar weight to guarantee monotonicity (Reich et al., 2010).
Although time-continuous monotonicity is rigorously ensured, for highly discontinuous or piecewise-constant control ansätze (as in GRAPE), Krotov’s sequential update may be less efficient or require careful parameter tuning (Schirmer et al., 2011, Goerz et al., 2019). A range of practical enhancements—trust-region regularization, frequency filtering, and hybridization with global optimizers—extends the method to broad scenarios.
Key References:
- Schirmer & de Fouquières, "Efficient Algorithms for Optimal Control of Quantum Dynamics: the ‘Krotov’ Method Unencumbered" (Schirmer et al., 2011)
- Morzhin & Pechen, "Krotov Method for Optimal Control in Closed Quantum Systems" (Morzhin et al., 2018)
- Reich, Ndong & Koch, "Monotonically convergent optimization in quantum control using Krotov's method" (Reich et al., 2010)
- Goerz et al., "Krotov: A Python implementation of Krotov's method for quantum optimal control" (Goerz et al., 2019)