A Non-Parametric Box-Cox Approach to Robustifying High-Dimensional Linear Hypothesis Testing (2405.12816v1)

Published 21 May 2024 in stat.ME

Abstract: The mainstream theory of hypothesis testing in high-dimensional regression typically assumes that the underlying true model is a low-dimensional linear regression model, yet the Box-Cox transformation is commonly used in practice to mitigate anomalies such as non-additivity and heteroscedasticity. This paper introduces a more flexible framework, the non-parametric Box-Cox model with an unspecified transformation, to address model mis-specification in high-dimensional linear hypothesis testing while preserving the interpretation of the regression coefficients. Model estimation and computation in high dimensions pose challenges beyond traditional sparse penalization methods. We propose the constrained partial penalized composite probit regression method for sparse estimation and investigate its statistical properties. Additionally, we present a computationally efficient algorithm, based on the augmented Lagrangian method and coordinate majorization descent, for solving regularization problems with folded concave penalization and linear constraints. For testing linear hypotheses, we propose the partial penalized composite likelihood ratio test, score test, and Wald test, and show that their limiting distributions under the null and local alternatives are generalized chi-squared distributions with the same degrees of freedom and noncentrality parameter. Extensive simulation studies examine the finite-sample performance of the proposed tests. Our analysis of supermarket data illustrates potential discrepancies between our testing procedures and standard high-dimensional methods, highlighting the importance of our robustified approach.
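
For context, a minimal sketch of the two model forms the abstract contrasts. The notation (lambda, h, beta, epsilon) is illustrative rather than taken from the paper, and the standard-normal error assumption below is one common identifiability convention consistent with a probit-based estimator; the paper's exact conditions may differ.

% Classical (parametric) Box-Cox regression: the transformation is indexed
% by a single scalar parameter lambda.
\[
Y^{(\lambda)} =
\begin{cases}
\dfrac{Y^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[4pt]
\log Y, & \lambda = 0,
\end{cases}
\qquad
Y^{(\lambda)} = X^{\top}\beta + \varepsilon .
\]

% Non-parametric Box-Cox model: the transformation h is left unspecified,
% constrained only to be strictly increasing, so beta retains its usual
% regression-coefficient interpretation on the transformed scale.
\[
h(Y) = X^{\top}\beta + \varepsilon, \qquad h \ \text{unknown and strictly increasing},
\quad \varepsilon \sim N(0,1)\ \text{(assumed here for identifiability)}.
\]

Replacing the scalar lambda with an unknown monotone h is what makes the framework robust to mis-specified transformations while keeping linear hypotheses about beta meaningful.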
