RealCause: Realistic Causal Inference Benchmarking (2011.15007v2)

Published 30 Nov 2020 in cs.LG, cs.AI, and stat.ML

Abstract: There are many different causal effect estimators in causal inference. However, it is unclear how to choose between these estimators because there is no ground-truth for causal effects. A commonly used option is to simulate synthetic data, where the ground-truth is known. However, the best causal estimators on synthetic data are unlikely to be the best causal estimators on real data. An ideal benchmark for causal estimators would both (a) yield ground-truth values of the causal effects and (b) be representative of real data. Using flexible generative models, we provide a benchmark that both yields ground-truth and is realistic. Using this benchmark, we evaluate over 1500 different causal estimators and provide evidence that it is rational to choose hyperparameters for causal estimators using predictive metrics.
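The core idea of the benchmark is that a generative model fit to real data can produce samples that look realistic while still exposing ground-truth causal effects, because both potential outcomes can be drawn from the model. The sketch below illustrates this with a hypothetical hand-written generative model (RealCause itself fits flexible neural generative models to real datasets): sampling both Y(0) and Y(1) gives the true average treatment effect (ATE), against which a naive difference-in-means estimator can be scored.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample(n):
    """Draw (x, t, y, ite) tuples from a toy generative model p(x, t, y).

    This parametric form is purely illustrative; the point is that once a
    generative model is in hand, both potential outcomes can be sampled,
    so the ground-truth individual treatment effect (ite) is known.
    """
    data = []
    for _ in range(n):
        x = random.gauss(0, 1)                        # covariate
        t = 1 if random.random() < sigmoid(x) else 0  # confounded treatment
        y0 = x + random.gauss(0, 0.1)                 # potential outcome Y(0)
        y1 = x + 2.0 + random.gauss(0, 0.1)           # potential outcome Y(1)
        y = y1 if t else y0                           # observed (factual) outcome
        data.append((x, t, y, y1 - y0))
    return data

n = 50_000
data = sample(n)

# Ground-truth ATE, available only because the generative model lets us
# sample both potential outcomes (~2.0 by construction here).
true_ate = sum(ite for (_, _, _, ite) in data) / n

# Naive difference-in-means estimator on the observed data: biased upward,
# because x confounds both treatment assignment and outcome.
treated = [y for (_, t, y, _) in data if t == 1]
control = [y for (_, t, y, _) in data if t == 0]
naive = sum(treated) / len(treated) - sum(control) / len(control)

print(f"true ATE  = {true_ate:.2f}")
print(f"naive est = {naive:.2f}")
```

Any candidate estimator can be plugged in where `naive` is computed and scored against `true_ate`; repeating this across many realistic generative models is what allows the paper to compare over 1500 estimators.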

Authors (3)
  1. Brady Neal (6 papers)
  2. Chin-Wei Huang (24 papers)
  3. Sunand Raghupathi (4 papers)
Citations (31)
