Memory-efficient reverse-mode AD for long-horizon agent-based models

Develop memory-efficient reverse-mode automatic differentiation techniques for agent-based models with long temporal horizons that avoid storing the entire computational graph during simulation while still yielding correct gradients for parameter learning and sensitivity analysis.

Background

Reverse-mode automatic differentiation is efficient for functions with many input parameters and few outputs, but it requires storing all intermediate values from the forward pass in order to perform backpropagation. In agent-based models, which often simulate large populations over long time horizons, this storage grows with both population size and the number of simulated time steps, and the memory requirement quickly becomes prohibitive.
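
One standard way to trade compute for memory in this setting is gradient checkpointing (rematerialization), where per-step intermediates are discarded after the forward pass and recomputed during backpropagation. The sketch below illustrates this with JAX's `jax.checkpoint` applied to each step of a toy, smoothed "ABM-like" update inside `jax.lax.scan`; the simulator and its parameters are illustrative stand-ins, not the paper's model, and this is a generic technique rather than the paper's proposed solution.

```python
import jax
import jax.numpy as jnp

def step(theta, state, _):
    # Toy smooth surrogate for one ABM time step: agents' states are nudged
    # toward a parameter-dependent target (purely illustrative).
    new_state = state + theta * jnp.tanh(state.mean() - state)
    return new_state, None

def simulate(theta, init_state, n_steps):
    # Wrapping the step in jax.checkpoint discards its per-step intermediates
    # after the forward pass; they are recomputed during backpropagation,
    # trading extra compute for a smaller memory footprint over the horizon.
    ckpt_step = jax.checkpoint(lambda s, x: step(theta, s, x))
    final_state, _ = jax.lax.scan(ckpt_step, init_state, None, length=n_steps)
    return final_state.mean()

theta = 0.1
init_state = jnp.linspace(-1.0, 1.0, 1000)  # 1000 toy agents
grad_theta = jax.grad(simulate)(theta, init_state, 10_000)
print(grad_theta)
```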

The paper proposes a hybrid strategy that applies forward-mode AD through the ABM and reverse-mode AD through the normalizing flow, thereby sidestepping the memory cost of reverse mode during ABM simulation. Nevertheless, memory-efficient reverse-mode differentiation directly through long-horizon ABMs remains an open problem, and solving it would enable broader use of reverse-mode AD in these models.
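
A minimal sketch of this kind of hybrid composition is shown below, assuming a low-dimensional ABM parameter vector: forward-mode AD (`jax.jacfwd`) is used through the simulator, so no simulation intermediates are stored for backpropagation, and reverse-mode AD (`jax.vjp`) is used only through the parameter-generating map. The names `abm_summary` and `flow_sample` are hypothetical stand-ins for the paper's ABM and normalizing flow.

```python
import jax
import jax.numpy as jnp

def abm_summary(theta, n_steps=1000):
    # Toy differentiable simulator: a scalar summary after n_steps updates.
    state = jnp.linspace(-1.0, 1.0, 500)
    def body(state, _):
        return state + theta[0] * jnp.tanh(theta[1] - state), None
    state, _ = jax.lax.scan(body, state, None, length=n_steps)
    return state.mean()

def flow_sample(phi, z):
    # Stand-in for a normalizing flow: an affine map applied to base noise z.
    return phi[:2] + phi[2:] * z

def loss_grad(phi, z):
    # Reverse-mode pullback is set up only for the flow.
    theta, pull_back = jax.vjp(flow_sample, phi, z)
    # Forward-mode Jacobian of the summary w.r.t. the ABM parameters: its
    # memory cost does not grow with the simulation horizon.
    dsummary_dtheta = jax.jacfwd(abm_summary)(theta)
    # Chain rule: pull the ABM sensitivities back through the flow parameters.
    dphi, _ = pull_back(dsummary_dtheta)
    return dphi

phi = jnp.array([0.1, 0.0, 1.0, 1.0])
z = jnp.array([0.3, -0.2])
print(loss_grad(phi, z))
```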

References

Additionally, developing memory-efficient reverse mode differentiation for ABMs with extended temporal horizons remains an open challenge.

Automatic Differentiation of Agent-Based Models (2509.03303 - Quera-Bofarull et al., 3 Sep 2025) in Subsection "Future work", Section "Discussion"