Calibrating Over-Parametrized Simulation Models: A Framework via Eligibility Set (2105.12893v1)

Published 27 May 2021 in stat.ME, cs.CE, and cs.LG

Abstract: Stochastic simulation aims to compute output performance for complex models that lack analytical tractability. To ensure accurate prediction, the model needs to be calibrated and validated against real data. Conventional methods approach these tasks by assessing the model-data match via simple hypothesis tests or distance minimization in an ad hoc fashion, but they can encounter challenges arising from non-identifiability and high dimensionality. In this paper, we investigate a framework to develop calibration schemes that satisfy rigorous frequentist statistical guarantees, via a basic notion that we call the eligibility set, which is designed to bypass non-identifiability through set-based estimation. We investigate a feature-extraction-then-aggregation approach to construct these sets that targets multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to the calibration of a limit order book market simulator (ABIDES).
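
The eligibility-set idea in the abstract can be illustrated with a minimal sketch: instead of searching for a single best-fitting parameter, every candidate parameter whose simulated feature statistics stay close enough to the real-data features is retained as "eligible". The toy simulator, feature map, parameter grid, and acceptance threshold below are hypothetical stand-ins chosen for illustration only, not the authors' implementation; in the paper the threshold would be calibrated to deliver a frequentist coverage guarantee rather than fixed by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=500):
    # Hypothetical over-parametrized simulator: two parameters map to one
    # observable mean, so theta is not identifiable from the output alone.
    mu = theta[0] + theta[1]          # only the sum is identifiable
    return rng.normal(mu, 1.0, size=n)

def features(x):
    # Feature extraction: summarize the multivariate/raw output with a
    # small vector of statistics.
    return np.array([x.mean(), x.std()])

# "Real" data generated from an unknown true parameter.
real = rng.normal(1.0, 1.0, size=500)
phi_real = features(real)

# Aggregate the feature discrepancies into one statistic and keep every
# theta whose simulated features stay within a tolerance of the real ones.
# (A crude constant threshold is used here purely for illustration.)
threshold = 0.15
grid = [(a, b) for a in np.linspace(0, 2, 21) for b in np.linspace(0, 2, 21)]

eligibility_set = []
for theta in grid:
    phi_sim = features(simulate(np.array(theta)))
    discrepancy = np.max(np.abs(phi_sim - phi_real))   # aggregation step
    if discrepancy <= threshold:
        eligibility_set.append(theta)

print(f"{len(eligibility_set)} of {len(grid)} grid points are eligible")
# Typically many (a, b) pairs with a + b close to 1 survive: the set-based
# estimate reports all of them instead of forcing a single, non-identified
# point estimate.
```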

Authors (6)
  1. Yuanlu Bai (10 papers)
  2. Tucker Balch (61 papers)
  3. Haoxian Chen (15 papers)
  4. Danial Dervovic (24 papers)
  5. Henry Lam (91 papers)
  6. Svitlana Vyetrenko (39 papers)
Citations (2)
