Large Language Models for Statistical Inference: Context Augmentation with Applications to the Two-Sample Problem and Regression (2506.23862v1)

Published 30 Jun 2025 in stat.ME

Abstract: We introduce context augmentation, a data-augmentation approach that uses LLMs to generate contexts around observed strings as a means of facilitating valid frequentist inference. These generated contexts serve to reintroduce uncertainty, incorporate auxiliary information, and facilitate interpretability. For example, in the two-sample test, we compare each string's log-probability under contexts generated from its own group versus the other group. On synthetic data, we show that the method's t-statistics exhibit the expected null behaviour while maintaining power; through a replication study, we show that the method is powerful and interpretable. We next introduce text-on-text regression. Contexts generated around the predictor string are treated as mediating variables between the predictor and outcome strings. Using negative controls, we then distinguish between semantic and syntactic dimensions of prediction. Analysis of real-world dialogic data illustrates behaviour predicted from a psycholinguistic framework. Theoretically, we provide identification conditions, derive an influence-function decomposition, and show that repeated cross-fitting of a pivotal statistic yields higher-order efficiency. We derive bounds linking estimation error, context count, and number of cross-fits. Taken together, context augmentation offers a way to connect LLMs with longstanding statistical practice.
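
The two-sample construction in the abstract is concrete enough to sketch in code. The snippet below is a minimal illustration, not the paper's estimator: `generate_contexts` (an LLM call producing m contexts around a string) and `log_prob` (the log-probability of a string under a given context) are hypothetical interfaces, and the pooled one-sample t-test on per-string contrasts is one plausible reading of the described comparison.

```python
import numpy as np
from scipy import stats


def context_two_sample_test(group_a, group_b, generate_contexts, log_prob, m=20):
    """Two-sample test via context augmentation (sketch).

    For each string, compare its mean log-probability under contexts
    generated around strings from its own group versus the other group.
    Under the null of no group difference, the contrasts centre at zero.
    """

    def contrast(s, own, other):
        # Contexts from the string's own group (a leave-one-out variant
        # would avoid using s to generate its own contexts; omitted here).
        own_ctx = [c for t in own for c in generate_contexts(t, m)]
        oth_ctx = [c for t in other for c in generate_contexts(t, m)]
        own_lp = np.mean([log_prob(s, c) for c in own_ctx])
        oth_lp = np.mean([log_prob(s, c) for c in oth_ctx])
        return own_lp - oth_lp

    contrasts = [contrast(s, group_a, group_b) for s in group_a]
    contrasts += [contrast(s, group_b, group_a) for s in group_b]
    # One-sample t-test of the pooled per-string contrasts against zero.
    return stats.ttest_1samp(contrasts, popmean=0.0)
```

The abstract's synthetic-data check corresponds to running this with two groups drawn from the same distribution and verifying that the t-statistic behaves as a standard null t would; the paper's repeated cross-fitting refinement is not reflected in this sketch.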
