
Inference for Change Points in High Dimensional Data via Self-Normalization (1905.08446v2)

Published 21 May 2019 in math.ST, stat.ME, and stat.TH

Abstract: This article considers change point testing and estimation for a sequence of high-dimensional data. For testing a mean shift in high-dimensional independent data, we propose a new test that builds on the $U$-statistic of Chen and Qin (2010) and utilizes the self-normalization principle [Shao (2010), Shao and Zhang (2010)]. Our test targets dense alternatives in the high-dimensional setting and involves no tuning parameters. To extend to change point testing for high-dimensional time series, we introduce a trimming parameter and formulate a self-normalized test statistic with trimming to accommodate the weak temporal dependence. On the theory front, we derive the limiting distributions of the self-normalized test statistics under both the null and the alternatives, for both independent and dependent high-dimensional data. At the core of our asymptotic theory, we obtain weak convergence of a sequential $U$-statistic based process for high-dimensional independent data, and weak convergence of sequential trimmed $U$-statistic based processes for high-dimensional linear processes, both of which are of independent interest. Additionally, we illustrate how our tests can be used in combination with wild binary segmentation to estimate the number and locations of multiple change points. Numerical simulations demonstrate the competitiveness of our proposed testing and estimation procedures in comparison with several existing methods in the literature.
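To convey the self-normalization idea the abstract relies on, the following is a minimal univariate sketch of a self-normalized CUSUM change point statistic in the spirit of Shao and Zhang (2010). It is an illustration of the principle only, not the paper's high-dimensional $U$-statistic test; the function name, the trimming fraction, and the use of segment-wise partial sums as the normalizer are choices made for this sketch.

```python
import numpy as np

def sn_cusum_stat(x, trim=0.1):
    """Illustrative self-normalized CUSUM statistic for a single mean shift.

    For each candidate break k, a CUSUM-type contrast is divided by a
    self-normalizer built from partial sums within each segment, so no
    long-run variance estimate (and hence no bandwidth) is needed.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    lo, hi = int(n * trim), int(n * (1 - trim))  # trimming avoids boundary ks
    best = 0.0
    for k in range(max(lo, 2), min(hi, n - 2)):
        x1, x2 = x[:k], x[k:]
        # CUSUM-type contrast between pre- and post-break sample means
        d = k * (n - k) / n**1.5 * (x1.mean() - x2.mean())
        # Self-normalizer: squared recursive partial sums on each segment
        s1 = np.cumsum(x1 - x1.mean())
        s2 = np.cumsum((x2 - x2.mean())[::-1])
        v = (np.sum(s1**2) + np.sum(s2**2)) / n**2
        if v > 0:
            best = max(best, d**2 / v)
    return best
```

Because the normalizer is itself a functional of the data, the statistic's null limit is pivotal and its critical values are tabulated once, which is what makes the procedure tuning-free in the abstract's sense.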
