Asymptotics of constrained $M$-estimation under convexity (2511.04612v1)
Abstract: M-estimation, also known as empirical risk minimization, is at the heart of statistics and machine learning: classification, regression, location estimation, etc. The asymptotic theory is well understood when the loss satisfies certain smoothness assumptions and its derivatives are locally dominated. However, these conditions are typically technical and can be too restrictive or cumbersome to verify. Here, we consider the case of a convex loss function, which need not even be differentiable, and establish an asymptotic theory for M-estimation under convex constraints. We show that the asymptotic distributions of the corresponding M-estimators depend on an interplay between the loss function and the boundary structure of the constraint set. We extend our results to U-estimators, building on the asymptotic theory of U-statistics. Applications of our work include, among others, robust location/scatter estimation and estimation of deepest points relative to depth functions such as Oja's depth.
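To make the setting concrete, the following minimal sketch computes a constrained M-estimate with a convex, non-differentiable loss: the absolute loss (whose unconstrained minimizer is the sample median) restricted to the convex set [0, ∞). This is an illustrative assumption, not the paper's construction; the data, the constraint set, the step-size rule, and the projected-subgradient solver are all hypothetical choices made for the example.

```python
# Minimal sketch of constrained M-estimation with a convex loss.
# Assumptions (not from the paper): absolute loss, constraint set [0, inf),
# projected-subgradient solver with a diminishing step size.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500) + 0.3  # sample with true location 0.3

def constrained_m_estimate(x, n_iter=5000):
    """Minimize the empirical risk mean_i |x_i - theta| over theta >= 0.

    The absolute loss is convex but not differentiable at theta = x_i,
    so each iteration takes a subgradient step and then projects onto
    the constraint set.
    """
    theta = 0.0
    for t in range(1, n_iter + 1):
        g = np.mean(np.sign(theta - x))  # a subgradient of the empirical risk
        theta = theta - g / np.sqrt(t)   # diminishing step size ~ 1/sqrt(t)
        theta = max(theta, 0.0)          # projection onto [0, inf)
    return theta

print(constrained_m_estimate(x))  # close to max(median(x), 0)
print(np.median(x))               # unconstrained minimizer, for comparison
```

Because the loss is not differentiable at the data points, classical Taylor-expansion arguments do not apply directly; and when the constraint is active (the unconstrained minimizer lies outside the set), the limiting distribution is shaped by the boundary of the constraint set. Both phenomena are exactly what the asymptotic theory in the abstract addresses.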