
High-dimensional Linear Contextual Bandits

Updated 11 October 2025
  • High-dimensional linear contextual bandits are sequential decision-making problems where rewards rely on unknown linear models in high-dimensional feature spaces, challenging classical methods.
  • The adaptive pointwise estimator (PWE) integrates parameter and spectral sparsity to accurately predict rewards and overcome the curse of dimensionality.
  • The HOPE algorithm employs an explore-then-commit framework with PWE to achieve robust regret guarantees across homogeneous, mixed, and heterogeneous settings.

High-dimensional linear contextual bandit problems refer to sequential decision-making settings where, in each round, a learner observes high-dimensional context vectors for a set of actions (“arms”) and must select one to maximize the cumulative reward, with the expected reward determined by an unknown linear model of the context. The high-dimensional regime—where the number of features rivals or exceeds the number of interactions—fundamentally challenges classical exploration–exploitation strategies, as standard estimation and regret bounds become impractical without further structure or algorithmic adaptation.

1. Statistical and Structural Challenges in High Dimensions

The high-dimensional linear contextual bandit setting is characterized by the context dimensionality $p$ being of the same order as, or greater than, the number of rounds $T$; this presents several key challenges:

  • Curse of Dimensionality: When $p \gg T$, naïve least-squares estimation in the linear reward model $\mu_t^{(i)} = \langle \theta^{(i)}, x_t^{(i)} \rangle$ is severely underdetermined, precluding consistent parameter estimation or meaningful confidence intervals.
  • Structural Assumptions: To render the problem tractable, prior work has assumed either:
    • Parameter sparsity, where each unknown coefficient vector $\theta^{(i)}$ is sparse; or
    • Spectral sparsity, where the context covariance matrices $\Sigma^{(i)}$ have only a few large eigenvalues.
  • Homogeneity vs. Heterogeneity: Existing methods predominantly address homogeneous settings: either all arms are sparse (for example, Lasso-ETC, [1907.11362]), or all context covariance matrices are low-rank (e.g., ridgeless least-squares estimators, [2306.11017]). In practice, however, these structures can co-occur or be mixed, leading to heterogeneous regimes not handled by earlier estimators.

The high-dimensionality challenge is thus compounded when arms exhibit different forms of sparsity, or when both parameter and spectral structure are present within a single problem instance.
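
To make the two structural assumptions concrete, the following minimal numpy sketch draws a context from each regime; the dimensions, decay profile, and sparsity level are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 500  # ambient context dimension (illustrative)

# Parameter sparsity: theta has only s0 nonzero coordinates.
s0 = 5
theta_sparse = np.zeros(p)
theta_sparse[rng.choice(p, size=s0, replace=False)] = rng.normal(size=s0)

# Spectral sparsity: the context covariance has fast-decaying eigenvalues,
# so contexts concentrate near a low-dimensional subspace.
eigvals = np.arange(1, p + 1, dtype=float) ** -2.0    # polynomial decay
Q, _ = np.linalg.qr(rng.normal(size=(p, p)))          # random orthonormal basis
Sigma = Q @ np.diag(eigvals) @ Q.T

x = rng.multivariate_normal(np.zeros(p), Sigma)       # one context draw
reward_mean = theta_sparse @ x                        # linear reward model
```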

2. Adaptive Pointwise Estimation and the HOPE Algorithm

To address the limitations of rigid, single-structure estimators, a pointwise estimator (PWE) is introduced that adaptively incorporates both parameter and spectral sparsity, crucially accommodating mixed and heterogeneous sparsity regimes (Zhao et al., 9 Oct 2025):

  • Support Estimation: Determine a candidate support $S_1$ (containing the true support $S_0$ if $\theta^{(i)}$ is sparse) using variable selection (e.g., Lasso or sure independence screening).
  • Dimension Reduction: Restrict or truncate both the context $x$ and any initial parameter estimate to $S_1$, effectively reducing the dimension of the estimation problem.
  • Model Transformation: Decompose the context via projection and augment the model with an invertible transformation $\Gamma_t^{(i)}$ (constructed from the spectral information of the contexts $X$) to sparsify the resulting nuisance term in the reward model.
  • Low-dimensional Estimation: Solve a penalized (e.g., Lasso) regression on the $(N+1)$-dimensional transformed model to estimate a scaling parameter $\alpha_t^{(i)}$, yielding the pointwise reward estimate

$$\widehat{\mu}_t^{(i)} = \widehat{\alpha}_t^{(i)} \cdot \frac{\sqrt{N}\,\|x\|_2^2}{\|X x\|_2}.$$

This PWE thus generalizes classical reward prediction: if the model is parameter-sparse, it delivers the same regret scaling as Lasso-ETC; if the eigenvalues decay quickly, it achieves the regret rates of spectral methods (ridgeless least squares); in mixed or heterogeneous cases, it adapts to the most favorable structure available.
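
As a sanity check on the final formula, the displayed estimate transcribes directly into numpy; the function name and argument layout below are ours, not the paper's:

```python
import numpy as np

def pwe_estimate(alpha_hat: float, x: np.ndarray, X: np.ndarray) -> float:
    """Pointwise reward prediction: alpha_hat * sqrt(N) * ||x||^2 / ||X x||.

    alpha_hat: estimated scaling parameter for this arm and round.
    x: the arm's current context vector, shape (p,).
    X: exploration-phase context matrix for this arm, shape (N, p).
    """
    N = X.shape[0]
    return alpha_hat * np.sqrt(N) * np.dot(x, x) / np.linalg.norm(X @ x)
```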

The HOPE algorithm (“High-dimensional linear cOntextual bandits with Pointwise Estimator”) leverages this strategy in an Explore-Then-Commit (ETC) framework:

  • In the exploration phase, each arm is selected in round-robin fashion to gather independent samples.
  • In the exploitation phase, the PWE is instantiated (with either Lasso or RDL as initialization, chosen per-arm according to empirical suitability), allowing heterogeneous treatment across arms.
  • At each exploitation round, PWE is used to estimate each arm's reward, and the arm with the highest estimate is selected.
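
The control flow can be summarized in a short Python schematic. Everything environment-facing (`pull`, `observe_contexts`), the per-arm initializer, and the estimator are injected callables, and the exploration length is a tuning choice, so this is a sketch of the ETC skeleton rather than the paper's pseudocode:

```python
def hope_etc(arms, T, n_explore, pull, observe_contexts,
             select_initializer, pwe_estimate_for_arm):
    """Schematic Explore-Then-Commit loop in the spirit of HOPE.

    pull(arm) -> (context, reward); observe_contexts(arms) -> {arm: context};
    select_initializer(history) -> 'lasso' or 'rdl';
    pwe_estimate_for_arm(history, context, init) -> scalar reward estimate.
    """
    history = {a: [] for a in arms}          # (context, reward) pairs per arm
    t = 0
    # Exploration: round-robin pulls to gather independent samples per arm.
    while t < n_explore * len(arms):
        arm = arms[t % len(arms)]
        history[arm].append(pull(arm))
        t += 1
    # Per-arm initializer (Lasso vs. RDL) chosen from exploration data.
    init = {a: select_initializer(history[a]) for a in arms}
    # Exploitation: commit to the greedy arm under the PWE each round.
    while t < T:
        contexts = observe_contexts(arms)
        estimates = {a: pwe_estimate_for_arm(history[a], contexts[a], init[a])
                     for a in arms}
        pull(max(estimates, key=estimates.get))
        t += 1
```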

3. Regret Bounds Across Structural Regimes

By combining variable selection and spectral reduction, HOPE achieves regret guarantees that generalize and improve upon prior methods:

  • Parameter-Sparse Settings: For arms with $s_0$-sparse parameters, the cumulative regret scales as

$$R(T) = O\!\left(K^{1/3} s_0^{1/3} T^{2/3} \cdot \mathrm{polylog}(T)\right),$$

which matches the best known results from Lasso-based ETC methods in the high-dimensional sparse regime (Kim et al., 2019).

  • Spectral (Covariance-Sparse) Settings: When context covariance matrices have rapidly decaying eigenvalues (approximate low-rank), using RDL as the initial estimator provides a bound of order

$$R(T) = \widetilde{O}\!\left(\max\left\{K^{1/2}\, p^{1/(2T^a)}\, T^{(a+2)/4},\; K^{1/3}\, p^{2/(3T^a)}\, T^{(2-a)/3}\right\}\right),$$

with $a$ determined by the rate of eigenvalue decay.

  • Mixed Sparsity: In regimes where both forms of sparsity coexist (e.g., $\theta^{(i)}$ is $s_0$-sparse and $\Sigma^{(i)}$ has fast-decaying eigenvalues), the regret improves to

$$R(T) = \widetilde{O}\!\left(K^{1/3} M^{2/3} T^{2/3}\right),$$

with $M$ capturing the effective rank over the estimated support.

  • Heterogeneous Settings: With arms divided into different structural classes (e.g., some sparse, some low-rank), HOPE applies the appropriate estimator per-arm; the overall regret is then determined by the worse of the two class-specific rates. This is the first method to provide regret guarantees in such mixed-structure regimes.
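
Since the three bounds hide constants and polylog factors, a quick numerical comparison of the bare rate expressions can build intuition; the parameter values below are arbitrary illustrations, and the printed numbers indicate scaling only, not actual regret:

```python
# Illustrative comparison of the constant-free rate expressions above.
K, s0, p, T, a, M = 10, 5, 1000, 10_000, 0.5, 2

sparse_rate = K**(1/3) * s0**(1/3) * T**(2/3)                     # parameter-sparse
spectral_rate = max(K**(1/2) * p**(1/(2 * T**a)) * T**((a+2)/4),
                    K**(1/3) * p**(2/(3 * T**a)) * T**((2-a)/3))  # covariance-sparse
mixed_rate = K**(1/3) * M**(2/3) * T**(2/3)                       # both structures

for name, r in [("sparse", sparse_rate), ("spectral", spectral_rate),
                ("mixed", mixed_rate)]:
    # All exponents on T are below 1, so each rate is sublinear in T.
    print(f"{name:>8}: rate ~ {r:,.0f}  ({r / T:.3f} per round)")
```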

The regret analysis is supported by nonasymptotic error bounds for the PWE, incorporating both estimation (support recovery) and transformation errors.

4. Practical Adaptivity and Experimental Results

Empirical studies robustly support the theoretical claims:

  • In homogeneous settings (either all model- or covariance-sparse), HOPE matches the performance of state-of-the-art specialized algorithms (e.g., Lasso-ETC, RDL-ETC).
  • In mixed or heterogeneous cases, where previous methods fail due to model mismatch, HOPE consistently outperforms alternatives by selecting the appropriate estimator for each arm.
  • Experiments test scenarios with varying sparsity ratios, context spectra, and noise levels, confirming that HOPE delivers lower mean regret and reduced variance. Key performance cases assessed include: sparse $\theta$ with identity covariances, dense $\theta$ with decaying spectral covariances, and mixed sparsity across arms.

5. Mathematical Formulation and Pointwise Estimation Procedure

The central estimator operates as follows (a schematic implementation follows the list):

  • Given arm $i$ and context $x_t^{(i)}$ at round $t$, after support estimation and model transformation:

    • Form the model

    $$y = \sqrt{N}\,\alpha_t^{(i)} z_t^{(i)} + \sqrt{N}\,\xi_t^{(i)} + \varepsilon,$$

    where $\alpha_t^{(i)}$ scales the reward, $z_t^{(i)}$ is the transformed data, and $\xi_t^{(i)}$ is a transformed nuisance term.
    • Run Lasso on the $(N+1)$-dimensional data to solve

    $$\widehat{\beta}_t^{(i)} = \arg\min_{\beta \in \mathbb{R}^{N+1}} \frac{1}{N}\,\|y - \beta\|_2^2 + \lambda_t^{(i)}\,\|\beta\|_1,$$

    and obtain the final estimate as

    $$\widehat{\mu}_t^{(i)} = \widehat{\alpha}_t^{(i)} \, \frac{\sqrt{N}\,\|x\|_2^2}{\|X x\|_2}.$$

  • Regret analysis leverages model-dependent quantities such as

$$M_{S_1} = \max_{i} \frac{\operatorname{tr}\!\big(\Sigma_{S_1}^{(i)}\big)}{\big\|\Sigma_{S_1}^{(i)}\big\|_F},$$

leading to bounds on the prediction error of the PWE and the resultant cumulative regret.
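
A structural sketch of this pipeline in Python appears below. sklearn's Lasso stands in for both penalized regressions, and the transformation $\Gamma_t^{(i)}$ with its $(N+1)$-dimensional augmented design is collapsed to a single projection feature, so this illustrates the shape of the data flow under our simplifying assumptions, not the paper's exact estimator:

```python
import numpy as np
from sklearn.linear_model import Lasso

def pwe_reward_estimate(X, y, x, lam_support=0.1, lam_alpha=0.01):
    """Structural sketch of the PWE pipeline for a single arm.

    X: (N, p) exploration contexts; y: (N,) observed rewards;
    x: (p,) query context. The model-transformation step is simplified
    to a scalar projection feature -- an illustrative stand-in, not the
    paper's Gamma-based construction.
    """
    N, p = X.shape

    # Step 1: support estimation via Lasso (candidate support S1).
    S1 = np.flatnonzero(Lasso(alpha=lam_support).fit(X, y).coef_)
    if S1.size == 0:
        S1 = np.arange(p)            # fall back to all coordinates

    # Step 2: dimension reduction -- truncate contexts to S1.
    X_S, x_S = X[:, S1], x[S1]

    # Step 3 (simplified transformation): one projection feature per sample.
    z = (X_S @ x_S) / np.linalg.norm(x_S)

    # Step 4: penalized regression for the scaling parameter alpha.
    alpha_hat = Lasso(alpha=lam_alpha).fit(z[:, None], y).coef_[0]

    # Final pointwise estimate, following the displayed formula.
    return alpha_hat * np.sqrt(N) * np.dot(x_S, x_S) / np.linalg.norm(X_S @ x_S)

def effective_rank(Sigma_S1):
    """Per-arm ingredient of M_{S_1}: trace over Frobenius norm of the
    covariance restricted to the estimated support."""
    return np.trace(Sigma_S1) / np.linalg.norm(Sigma_S1, "fro")
```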

6. Implications and Future Directions

The HOPE algorithm's adaptivity to both parameter and spectral sparsity substantially broadens the applicability of high-dimensional contextual bandit algorithms, including:

  • Real-world settings with unknown or heterogeneous structure, such as recommendation, personalized medicine, or online advertising, where some arms or contexts are governed by sparse signals while others are best described by low-rank or spectrally sparse representations.
  • A pathway for further generalizations, including extensions to nonlinear models (e.g., kernel methods, neural representations), development of new exploration strategies (not limited to ETC), and applications to full reinforcement learning scenarios where context/state spaces are high-dimensional and partial feedback is present.

A plausible implication is that the PWE-based approach can be integrated with UCB or Thompson Sampling frameworks if appropriate uncertainty quantification is extended to the transformed models. Future work is anticipated in adaptive exploration regimes and generalized context–reward structures.


This synthesis captures the key innovations of adapting pointwise estimation for mixed sparsity in high-dimensional linear contextual bandits. By supporting both homogeneous and heterogeneous structural assumptions, and providing matching regret guarantees across settings, this framework aligns with current trends in arXiv literature and recent advances in high-dimensional online learning (Zhao et al., 9 Oct 2025).
