
Doubly Robust Bayesian Inference for Non-Stationary Streaming Data with β-Divergences (1806.02261v2)

Published 6 Jun 2018 in stat.ML and cs.LG

Abstract: We present the very first robust Bayesian Online Changepoint Detection algorithm through General Bayesian Inference (GBI) with $\beta$-divergences. The resulting inference procedure is doubly robust for both the parameter and the changepoint (CP) posterior, with linear time and constant space complexity. We provide a construction for exponential models and demonstrate it on the Bayesian Linear Regression model. In so doing, we make two additional contributions: Firstly, we make GBI scalable using Structural Variational approximations that are exact as $\beta \to 0$. Secondly, we give a principled way of choosing the divergence parameter $\beta$ by minimizing expected predictive loss on-line. Reducing False Discovery Rates of CPs from more than 90% to 0% on real world data, this offers the state of the art.

Citations (54)

Summary

Overview of Doubly Robust Bayesian Inference for Non-Stationary Streaming Data with β-Divergences

This paper introduces an innovative approach to changepoint detection in non-stationary streaming data through a robust Bayesian framework using β-divergences. The proposed method addresses the high false discovery rates common in standard Bayesian On-line Changepoint Detection (BOCPD), particularly in the presence of outliers, by integrating robustness directly into the inference process. The core contributions of this paper are the application of General Bayesian Inference (GBI) with β-divergences, the development of structural variational approximations for scalable computation, and a principled approach to optimizing the divergence parameter β.

Robustness in Bayesian Inference

Traditional Bayesian inference minimizes the Kullback-Leibler divergence between the probabilistic model and the hypothetical data-generating process. While effective in the M-closed world, it copes poorly with outliers and model misspecification because the log-score's influence function is strictly increasing and unbounded: a single extreme observation can dominate the posterior update. The paper instead performs GBI with the β-divergence, whose influence function has a unique interior maximum. The influence of an observation initially grows with its distance from the posterior mean but decays sharply once the observation deviates far enough to be treated as an outlier. The robustness parameter β controls this trade-off, ensuring that a single outlier does not lead to a false changepoint declaration.
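As a rough illustration (not the paper's code), consider a Gaussian model with known variance. Under the standard log-score the influence of an observation grows linearly in its residual, while under the β-divergence score the same gradient is re-weighted by the model density raised to the power β, which produces the interior maximum described above. The function names below are hypothetical helpers for this sketch:

```python
import numpy as np

def kl_influence(x, mu=0.0, sigma=1.0):
    # Standard log-score (KL): the gradient of -log f(x; mu) w.r.t. mu
    # grows linearly in the residual, so influence is unbounded.
    return np.abs(x - mu) / sigma**2

def beta_influence(x, beta, mu=0.0, sigma=1.0):
    # Beta-divergence score: the gradient is re-weighted by f(x)^beta,
    # so influence peaks and then decays for distant observations.
    f = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.abs(x - mu) / sigma**2 * f**beta

xs = np.linspace(0.0, 10.0, 1001)
kl = kl_influence(xs)
b = beta_influence(xs, beta=0.25)
# kl increases monotonically in the residual, while b attains an interior
# maximum (analytically at |x - mu| = 1/sqrt(beta) = 2 for beta = 0.25)
# and decays toward zero for gross outliers.
```

The interior maximum is exactly what lets a β-robust update down-weight a single wild observation instead of declaring a changepoint.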

Scalability and Computational Efficiency

One of the key challenges of robust Bayesian inference, particularly in streaming settings, is computational scalability. The paper addresses this by introducing a structured variational approximation that preserves parameter dependence and recovers the conjugate posterior exactly in the limit β → 0. This allows for efficient computation, overcoming the historical bottleneck associated with GBI's intractable posteriors. The structural variational fit is nearly exact in practice, and Theorem 2 confirms that the approximation reduces to a tractable form for many exponential family models.

Optimizing the Divergence Parameter β

Choosing an appropriate value of β is crucial for effective robust inference. The paper offers a systematic initialization for β based on the expected influence distribution, with further refinement achieved by minimizing predictive loss on-line as data arrive. This dynamic optimization lets the degree of robustness adapt to changing data characteristics, enabling more accurate model fitting.
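A minimal sketch of the idea, with a deliberately simplified, hypothetical estimator standing in for the paper's structural variational posterior: maintain a small grid of candidate β values, score each by its accumulated one-step-ahead predictive loss on the stream (absolute loss here, as a crude proxy for the paper's expected predictive loss), and keep the minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_robust_mean(data, beta, sigma=1.0, iters=20):
    # Hypothetical stand-in for the robust posterior: a fixed-point
    # iteration for a beta-divergence location estimate of a Gaussian.
    # Observations are weighted by the model density to the power beta,
    # so gross outliers receive weight near zero; beta = 0 recovers the
    # ordinary (non-robust) sample mean.
    mu = float(np.median(data))
    for _ in range(iters):
        w = np.exp(-0.5 * beta * ((data - mu) / sigma) ** 2)
        mu = float(np.sum(w * data) / np.sum(w))
    return mu

# Simulated stream: N(0, 1) contaminated with ~5% gross outliers.
x = rng.normal(0.0, 1.0, 300)
x[rng.random(300) < 0.05] += 25.0

# Score each candidate beta by accumulated one-step-ahead loss.
candidates = [0.0, 0.05, 0.2, 0.5]
losses = {b: 0.0 for b in candidates}
for t in range(50, 300, 5):
    for b in candidates:
        mu_hat = beta_robust_mean(x[:t], b)
        losses[b] += abs(x[t] - mu_hat)

best = min(losses, key=losses.get)  # a robust beta beats beta = 0 here
```

On contaminated data like this, the non-robust baseline (β = 0) drags its estimate toward the outliers and pays for it in predictive loss, so the selection mechanism prefers a strictly positive β; on clean data the same mechanism would let β shrink toward zero.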

Experimental Validation and Practical Implications

The paper validates its approach through simulations and real-world datasets, including the well-log data, a standard benchmark for changepoint detection, and a high-dimensional analysis of air pollution levels in London. The robust method reduces false discovery rates significantly and offers probabilistic forecasts, making it applicable across diverse domains such as genetics, finance, and cybersecurity. By integrating robustness into both the parameter and run-length posteriors, the approach provides more reliable inference and uncertainty quantification.

Future Developments

This paper opens avenues for applying robust Bayesian inference to a broader range of models beyond the standard changepoint detection framework. The integration of β\beta-divergences into GBI can be extended to other settings, fostering improved handling of data heterogeneity and outliers. The established computational efficiency and scalability provide a foundation for exploring robust inference in large-scale and high-dimensional data environments, which increasingly characterize modern machine learning tasks.

In summary, this paper represents a substantial advancement in the application and scalability of robust Bayesian inference mechanisms, promoting their use in dynamic and non-stationary data settings. The presented innovations in computational methods and robustness parameter optimization set the stage for broader applicability and refinement in future research endeavors.
