Calibrate the Nash-DQN GHG offset credit market model to real data

Calibrate and tune the finite-agent greenhouse gas offset credit (OC) market model, whose Nash equilibrium is estimated via the Nash-DQN reinforcement learning approach, to real-world data from the Canadian federal OC market once such data become available, in order to parameterize the model and empirically validate agent behaviors and market dynamics.

Background

The paper develops a finite-agent market model for greenhouse gas offset credits (OCs) and estimates its Nash equilibrium using the Nash-DQN algorithm. Due to the newness of the Canadian federal OC market and lack of publicly available verified project data, the model was not calibrated to real observations.

The authors explicitly identify calibration to real data as an open problem tied to data availability and proprietary firm information. Addressing this would allow empirical validation and parameter tuning of the learned equilibria and could inform regulatory and firm decision-making.
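Once market data become available, one plausible route is simulation-based calibration: choose model parameters so that price paths generated by the learned equilibrium match observed OC prices. The sketch below is a minimal, hypothetical illustration of that loop; `simulate_oc_prices` is a stand-in for the Nash-DQN equilibrium simulator, and the "observed" series is synthetic, since no public Canadian federal OC data exist yet.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the Nash-DQN market simulator: given model
# parameters, return a simulated offset-credit price path. A real
# calibration would replace this with equilibrium prices produced by the
# trained Nash-DQN agents.
def simulate_oc_prices(params, n_steps=50, seed=0):
    drift, vol = params
    rng = np.random.default_rng(seed)
    increments = drift + vol * rng.standard_normal(n_steps)
    return 10.0 + np.cumsum(increments)  # arbitrary initial price level

# Stand-in for observed Canadian federal OC market prices (synthetic here).
observed = simulate_oc_prices((0.05, 0.4), seed=1)

# Calibrate by minimizing the squared distance between simulated and
# observed price paths (a simple simulated-moments-style objective).
def loss(params):
    simulated = simulate_oc_prices(params, seed=0)
    return float(np.mean((simulated - observed) ** 2))

result = minimize(loss, x0=(0.0, 0.2), method="Nelder-Mead")
calibrated_drift, calibrated_vol = result.x
```

In practice the objective would compare richer market statistics (trading volumes, generation rates, firm inventories) rather than a single price path, and the simulator call would be far more expensive, motivating derivative-free or surrogate-based optimizers.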

References

Both climate finance and RL (and, more generally, machine learning) are flourishing areas of research, hence there are many open problems that intersect the two. Within the current framework, there remain open problems that are worthwhile investigating. First, this paper's goal was to illustrate the viability of deploying Nash-DQN in this offset credit market setting, and we did not calibrate our model to real data. Future work may bridge this gap by calibrating and tuning our model to real data, once it is available.

Multi-Agent Reinforcement Learning for Greenhouse Gas Offset Credit Markets (2504.11258 - Welsh et al., 15 Apr 2025) in Section 6 (Conclusion)