Modeling Extremes with d-max-decreasing Neural Networks (2102.09042v2)

Published 17 Feb 2021 in stat.ML, cs.LG, and stat.CO

Abstract: We propose a novel neural network architecture that enables non-parametric calibration and generation of multivariate extreme value distributions (MEVs). MEVs arise from Extreme Value Theory (EVT) as the necessary class of models when extrapolating a distributional fit over large spatial and temporal scales based on data observed in intermediate scales. In turn, EVT dictates that $d$-max-decreasing, a stronger form of convexity, is an essential shape constraint in the characterization of MEVs. As far as we know, our proposed architecture provides the first class of non-parametric estimators for MEVs that preserve these essential shape constraints. We show that our architecture approximates the dependence structure encoded by MEVs at parametric rate. Moreover, we present a new method for sampling high-dimensional MEVs using a generative model. We demonstrate our methodology on a wide range of experimental settings, ranging from environmental sciences to financial mathematics and verify that the structural properties of MEVs are retained compared to existing methods.
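For background (standard EVT facts, not a description of the paper's specific construction): an MEV with unit Fréchet margins can be written through an exponent function $V$ that is homogeneous of degree $-1$,

$$G(z_1, \dots, z_d) = \exp\{-V(z_1, \dots, z_d)\}, \qquad V(t z_1, \dots, t z_d) = t^{-1}\, V(z_1, \dots, z_d) \quad \text{for all } t > 0,$$

and the $d$-max-decreasing condition mentioned in the abstract is the essential shape constraint in this characterization. As a minimal sketch of sampling from this class (a generic max-linear construction, not the paper's generative model; the matrix `A` and all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Max-linear max-stable model: Z_j = max_k A[j, k] * F_k with F_k i.i.d. unit Frechet.
# Rows of A summing to 1 give unit-Frechet margins; the exponent function is
# V(z) = sum_k max_j A[j, k] / z_j, which is homogeneous of degree -1.
d, m, n = 3, 4, 10_000                     # dimension, number of factors, sample size
A = rng.dirichlet(np.ones(m), size=d)      # (d, m) coefficients, each row sums to 1

u = np.clip(rng.uniform(size=(n, m)), 1e-12, None)
frechet = -1.0 / np.log(u)                 # i.i.d. unit-Frechet factors, shape (n, m)
Z = np.max(A[None, :, :] * frechet[:, None, :], axis=2)   # (n, d) samples from the MEV

# Sanity check: each margin is unit Frechet, so P(Z_j <= 1) should be close to exp(-1).
print((Z <= 1.0).mean(axis=0), np.exp(-1.0))
```

Per the abstract, the paper's contribution is a neural parameterization that preserves the $d$-max-decreasing constraint non-parametrically; the max-linear model above is only a simple parametric member of the same family, shown here to make the target class concrete.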

Authors (7)
  1. Ali Hasan (19 papers)
  2. Khalil Elkhalil (11 papers)
  3. Yuting Ng (10 papers)
  4. Sina Farsiu (18 papers)
  5. Jose H. Blanchet (8 papers)
  6. Vahid Tarokh (144 papers)
  7. Joao M. Pereira (1 paper)
Citations (5)
