Feel-Good Thompson Sampling (FGTS)
- Feel-Good Thompson Sampling (FGTS) is a posterior-sampling method that adds an optimism bonus to classical TS, promoting aggressive exploration in decision-making problems.
- It achieves minimax-optimal regret in linear bandits, contextual dueling bandits, and reinforcement learning by biasing the posterior toward high-reward outcomes.
- FGTS employs advanced sampling techniques like LMC, MALA, and HMC, with extensions for smoothing and variance awareness to balance exploration and computational efficiency.
Feel-Good Thompson Sampling (FGTS) is a posterior-sampling-based methodology for sequential decision making that augments classical Thompson Sampling (TS) with an explicit optimism bonus. This construction incentivizes exploration more aggressively than standard TS by biasing the posterior toward high-reward explanations, achieving minimax-optimal regret guarantees in both contextual bandit and reinforcement learning settings under appropriate conditions. FGTS and its extensions have been systematically studied in both exact and approximate posterior regimes, with substantive implications for scalability, sampling algorithms, and empirical performance on bandit and reinforcement learning benchmarks.
1. Core Algorithmic Principle and Mathematical Formulation
Feel-Good Thompson Sampling operates within the contextual bandit framework. At each round $t = 1, \dots, T$:
- The agent observes a context $x_t$, with a (possibly finite) action set $\mathcal{A}_t$;
- For each arm $a \in \mathcal{A}_t$, the associated feature is $\phi(x_t, a) \in \mathbb{R}^d$;
- Pulling arm $a_t$ yields a reward $r_t = \phi(x_t, a_t)^\top \theta^* + \varepsilon_t$, where $\theta^* \in \mathbb{R}^d$ is unknown and $\varepsilon_t$ is zero-mean noise.
Standard TS maintains a Bayesian posterior $p(\theta \mid \mathcal{D}_{t-1}) \propto p_0(\theta)\prod_{s<t}\exp\!\big(-\eta\,(r_s - \phi(x_s,a_s)^\top\theta)^2\big)$ and draws $\theta_t \sim p(\cdot \mid \mathcal{D}_{t-1})$. The action choice is $a_t = \arg\max_{a \in \mathcal{A}_t}\phi(x_t,a)^\top\theta_t$.
FGTS modifies the likelihood through the inclusion of a feel-good bonus, with $\lambda > 0$ (bonus scale), $b > 0$ (bonus cap), and $\eta$ the inverse noise variance. The posterior becomes
$$p_{\mathrm{FG}}(\theta \mid \mathcal{D}_{t-1}) \propto p_0(\theta)\prod_{s<t}\exp\!\Big(-\eta\,(r_s - \phi(x_s,a_s)^\top\theta)^2 + \lambda \min\!\big(b,\ \max_{a \in \mathcal{A}_s}\phi(x_s,a)^\top\theta\big)\Big),$$
where the per-round bonus $\lambda \min(b, \max_{a}\phi(x_s,a)^\top\theta)$ rewards parameters that predict a high achievable reward. The action is selected via $a_t = \arg\max_{a \in \mathcal{A}_t}\phi(x_t,a)^\top\theta_t$ with $\theta_t \sim p_{\mathrm{FG}}(\cdot \mid \mathcal{D}_{t-1})$.
A smoothed variant, SFG-TS, replaces the hard minimum $\min(b,\cdot)$ with a differentiable soft surrogate, enabling gradient-based sampling in nonconvex neural bandits by smoothing the non-differentiability in the bonus.
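The minimal NumPy sketch below illustrates the construction above for a linear model. The function names (`feel_good_bonus`, `fg_neg_log_posterior`, `select_action`) are illustrative rather than taken from any reference implementation, and the sampler used to draw from the modified posterior is left abstract (see Section 3).

```python
import numpy as np

def feel_good_bonus(theta, Phi, lam, b):
    """Feel-good term lambda * min(b, max_a phi(x, a)^T theta) for one round.

    Phi: (K, d) array of per-arm features for the observed context."""
    return lam * min(b, np.max(Phi @ theta))

def fg_neg_log_posterior(theta, history, eta, lam, b, prior_prec=1.0):
    """Negative log of the FGTS posterior, up to an additive constant.

    history: list of (Phi_s, a_s, r_s) with Phi_s the (K, d) feature matrix,
    a_s the pulled arm index, and r_s the observed reward."""
    loss = 0.5 * prior_prec * theta @ theta            # Gaussian prior, -log p_0
    for Phi_s, a_s, r_s in history:
        loss += eta * (r_s - Phi_s[a_s] @ theta) ** 2  # squared-error likelihood
        loss -= feel_good_bonus(theta, Phi_s, lam, b)  # optimism bonus
    return loss

def select_action(theta_sample, Phi_t):
    """Greedy action with respect to the sampled parameter, as in standard TS."""
    return int(np.argmax(Phi_t @ theta_sample))
```

With `lam = 0` the potential reduces to that of standard Thompson Sampling; any approximate sampler from Section 3 can be used to draw `theta_sample` from the density proportional to `exp(-fg_neg_log_posterior)`.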
2. Theoretical Regret Guarantees
FGTS admits minimax-optimal regret bounds in several regimes:
- Linear Bandits (Exact Posterior): For a $d$-dimensional linear reward model with exact Gaussian posterior inference and a well-chosen bonus scale $\lambda$, the regret is
$$\mathrm{Reg}(T) = \tilde{O}\big(d\sqrt{T}\big),$$
matching the information-theoretic lower bound (Zhang, 2021).
- Frequentist Regret with Finite Actions: With sub-Gaussian rewards, a finite model class $\mathcal{F}$, and action sets of size at most $K$, applying FGTS with appropriate choices of $\lambda$, $b$, and $\eta$ yields
$$\mathrm{Reg}(T) = \tilde{O}\big(\sqrt{K\,T\log|\mathcal{F}|}\big)$$
regret, attaining the minimax rate.
- Infinite/Linearly Embeddable Actions: Assuming a bilinear structure between features and parameters, the regret is
$$\mathrm{Reg}(T) = \tilde{O}\big(d\sqrt{T}\big)$$
in a $d$-dimensional linear parametric setting.
These results are underpinned by a decoupling-coefficient analysis, which connects the regret decomposition to the analysis of online least squares, leveraging tools from the theory of online prediction and aggregation (Zhang, 2021, Li et al., 3 Nov 2025).
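As a schematic illustration of the decoupling argument (notation simplified relative to the cited papers, with $f_*$ the true mean-reward function, $f_{\theta_t}$ the sampled model, and $\mathrm{dc}(\mathcal{F})$ the decoupling coefficient of the class), the per-round regret splits into a feel-good term handled by online aggregation and a decoupled least-squares estimation term:

```latex
% Schematic regret decomposition behind FGTS analyses (simplified; constants omitted).
\begin{aligned}
\mathrm{Reg}(T)
  &= \sum_{t=1}^{T}\mathbb{E}\bigl[f_*(x_t,a_t^*) - f_*(x_t,a_t)\bigr]\\
  &\lesssim \underbrace{\sum_{t=1}^{T}\mathbb{E}\Bigl[f_*(x_t,a_t^*) - \max_{a}f_{\theta_t}(x_t,a)\Bigr]}_{\text{controlled by the feel-good bonus}}
   \;+\; \underbrace{\sqrt{\mathrm{dc}(\mathcal{F})\,\sum_{t=1}^{T}\mathbb{E}\bigl[(f_{\theta_t}(x_t,a_t) - f_*(x_t,a_t))^2\bigr]}}_{\text{decoupled estimation error (online least squares)}}
\end{aligned}
```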
3. Practical Implementation and Sampling Algorithms
Implementation of FGTS for large-scale or non-Gaussian models (e.g., neural bandits) requires approximate sampling from the modified posterior. The main strategies are:
- Langevin Monte Carlo (LMC): For the potential $U_t(\theta) = \sum_{s<t}\big[\eta\,(r_s - \phi(x_s,a_s)^\top\theta)^2 - \lambda\min(b, \max_{a}\phi(x_s,a)^\top\theta)\big] - \log p_0(\theta)$, LMC iterates
$$\theta^{(k+1)} = \theta^{(k)} - \epsilon\,\nabla U_t(\theta^{(k)}) + \sqrt{2\epsilon/\beta}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I),$$
with step-size $\epsilon$ and inverse temperature $\beta$ (see the sketch after this list).
- Metropolis-Adjusted Langevin Algorithm (MALA): Applies an MH-corrected LMC step to improve mixing in ill-conditioned posteriors.
- Hamiltonian Monte Carlo (HMC): Especially effective for well-conditioned linear posteriors.
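The following unadjusted LMC sketch targets the FGTS potential defined earlier; the step size, temperature, and iteration count are illustrative defaults rather than recommended settings, and `grad_neg_log_post` is assumed to be supplied (e.g., an autodiff gradient of `fg_neg_log_posterior` from Section 1).

```python
import numpy as np

def lmc_sample(grad_neg_log_post, theta0, step_size=1e-3, beta=1.0,
               n_steps=200, rng=None):
    """Unadjusted Langevin Monte Carlo targeting exp(-beta * U(theta)).

    grad_neg_log_post: callable returning the gradient of the FGTS potential U.
    Adding a Metropolis-Hastings accept/reject step to each iterate yields MALA."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0.copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(theta.shape)
        theta = (theta
                 - step_size * grad_neg_log_post(theta)
                 + np.sqrt(2.0 * step_size / beta) * noise)
    return theta
```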
Computational cost depends on the model class and sampler:
- Exact linear FGTS: $O(d^3)$ per round for posterior covariance factorization and sampling (reducible with incremental rank-one updates).
- LMC/MALA: $O(t\,d)$ per full-data gradient step, with tens to hundreds of steps typically required for practical mixing.
- HMC: $O(t\,d)$ per leapfrog step, with several leapfrog steps per proposal.
Memory usage is dominated by the posterior structure and is especially significant when backpropagating through unrolled MCMC steps for neural networks.
Hyperparameter tuning for $\lambda$, $b$, and $\eta$ is typically not prohibitive; defaults such as a small bonus scale and mild smoothing are sufficient in most settings (Anand et al., 21 Jul 2025).
4. Empirical Performance and Trade-offs
Comprehensive benchmarking across synthetic and real datasets highlights the following patterns (Anand et al., 21 Jul 2025):
- Exact Posterior Regimes (Linear/Logistic Bandits): FG-TS (MALA or HMC) yields 10–20% lower cumulative regret vs. TS or LinUCB; SFG-TS matches or slightly improves over closed-form LinTS in logistic bandits.
- Approximate Posterior Regimes (Neural Bandits, Stochastic-Gradient MCMC): FG-TS can degrade regret due to amplification of sampling noise (especially with large bonuses), leading to instability in neural settings. Vanilla stochastic-gradient LMC-TS is typically more reliable for these cases.
- Bonus Scale Sensitivity: A small bonus scale $\lambda$ is generally optimal; larger values harm regret unless the sampler approximates the posterior very accurately.
- Preconditioning and Prior Strength: HMC benefits from preconditioning for ill-conditioned problems, but aggressive preconditioning can be detrimental in LMC without MH filtering. Mild prior regularization stabilizes exploration, but excessively tight priors suppress effective exploration.
Empirical ablations confirm that the exploration benefit of the FGTS bonus is meaningful only when the posterior approximation is reliable—otherwise, the bonus exacerbates estimation noise and leads to erratic exploration.
5. Extensions: Smoothed, Variance-Aware, Dueling, and RL Regimes
FGTS forms the basis for multiple variants and extensions:
- Smoothed FGTS (SFG-TS): Substitutes the hard $\min(b,\cdot)$ and $\max_a$ operations in the bonus with differentiable soft counterparts to facilitate MCMC in models with nondifferentiable activations.
- Variance-Aware FGTS (FGTS-VA): Weights exploration and loss contributions by observed noise variances, achieving a variance-dependent regret bound in the finite model class case that scales with the cumulative noise variance $\sum_{t=1}^{T}\sigma_t^2$ rather than with $T$, matching state-of-the-art UCB-based rates for weighted linear bandits (Li et al., 3 Nov 2025).
- FGTS for Contextual Dueling Bandits: Adapts the FG bonus to the dueling bandit formulation, leverages conditional independence of sampled arms, and achieves minimax-optimal regret (Li et al., 9 Apr 2024).
- Reinforcement Learning (RL): The FGTS principle extends to linear MDPs and general RL by adding a feel-good bonus to the loss at the initial stage of each episode and using squared Bellman error losses at subsequent stages. Empirical results with approximate sampling (LMC/ULMC) integrated into DQN architectures demonstrate superior deep exploration and performance on RL benchmarks (e.g., N-chain, Atari hard-exploration games) (Ishfaq et al., 18 Jun 2024).
The table summarizes key variants and their regret guarantees:
| FGTS Variant | Setting | Regret Bound |
|---|---|---|
| FG-TS (base) | Linear bandit (exact posterior) | $\tilde{O}(d\sqrt{T})$, minimax-optimal |
| SFG-TS | Logistic/neural bandit | Matches FG-TS (with accurate sampler) |
| FGTS-VA | Linear bandit, variance-aware | Variance-dependent, scales with $\sum_t \sigma_t^2$ |
| FGTS.CDB | Contextual dueling bandit | $\tilde{O}(d\sqrt{T})$, minimax-optimal |
| FGTS-RL | Linear MDP | $\tilde{O}(\mathrm{poly}(d, H)\sqrt{T})$ |
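To make the smoothed and variance-aware modifications concrete, the sketch below shows one way the per-round loss terms could be written. The log-sum-exp smoothing, the temperature, and the precision-weighting scheme are generic illustrations under assumed forms, not the exact constructions of the cited papers.

```python
import numpy as np

def smoothed_feel_good_bonus(theta, Phi, lam, b, temp=10.0):
    """SFG-TS-style bonus: min(b, max_a phi_a^T theta) replaced by differentiable
    log-sum-exp surrogates so the potential admits gradient-based MCMC.
    The soft-min/soft-max forms and the temperature are illustrative choices."""
    soft_max = np.log(np.sum(np.exp(temp * (Phi @ theta)))) / temp           # soft max over arms
    soft_min = -np.log(np.exp(-temp * b) + np.exp(-temp * soft_max)) / temp  # soft min with cap b
    return lam * soft_min

def variance_weighted_loss(theta, Phi_s, a_s, r_s, sigma2_s):
    """Variance-aware squared-error term: each observation is weighted by its
    (estimated) inverse noise variance, as in weighted least squares."""
    residual = r_s - Phi_s[a_s] @ theta
    return residual ** 2 / (2.0 * sigma2_s)
```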
6. Implementation Guidance and Recommendations
Best practices for applying FGTS and its variants are well-characterized (Anand et al., 21 Jul 2025, Ishfaq et al., 18 Jun 2024):
- Linear/Logistic Bandits (Exact): Prefer FG-TS or SFG-TS with a small bonus scale $\lambda$ and a moderate cap $b$; use MALA or HMC sampling.
- Neural/Approximate Settings: Default to LMC-TS or neural-specific methods; small or zero FG bonuses are safer.
- Smoothing: Use SFG-TS with a moderate smoothing temperature for nondifferentiable models.
- Parameter Tuning: Small grid search over $\lambda$, $b$, and the sampler step size, with moderate regularization; tuning can usually be limited to a narrow parameter regime.
- Computational Cost: Consider sampler selection based on mixing efficiency versus implementation complexity; ULMC is preferable in strongly log-concave posterior landscapes for accelerated mixing.
- Empirical Reliability: Aggressive optimism bonuses are only beneficial when the posterior sampling is accurate; otherwise, bonus-driven exploration should be moderated.
- Open-source Reference: Code and experimental framework are available at https://github.com/SarahLiaw/ctx-bandits-mcmc-showdown, facilitating reproducibility in bandit experiments.
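Putting the earlier sketches together, a single FG-TS round might look like the following. The function `fgts_round`, the `reward_fn` callback, and the finite-difference gradient are illustrative stand-ins built on the sketches from Sections 1 and 3; they are not taken from the linked repository.

```python
import numpy as np

def fgts_round(history, Phi_t, eta, lam, b, reward_fn, rng):
    """One FG-TS round: sample from the feel-good posterior via LMC,
    act greedily on the sample, observe a reward, and extend the history."""
    d = Phi_t.shape[1]

    def grad(theta, eps=1e-5):
        # Finite-difference gradient of the FGTS potential; an autodiff
        # gradient would normally be used instead (illustrative shortcut).
        g = np.zeros(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = eps
            g[i] = (fg_neg_log_posterior(theta + e, history, eta, lam, b)
                    - fg_neg_log_posterior(theta - e, history, eta, lam, b)) / (2 * eps)
        return g

    theta_t = lmc_sample(grad, np.zeros(d), rng=rng)   # approximate posterior draw
    a_t = select_action(theta_t, Phi_t)                # greedy action on the sample
    r_t = reward_fn(a_t)                               # environment feedback
    history.append((Phi_t, a_t, r_t))
    return a_t, r_t
```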
7. Connections, Limitations, and Perspective
FGTS provides a unifying framework for optimism-driven exploration in posterior sampling. The decoupling coefficient theory undergirds minimax-optimal regret in both bandit and RL contexts. However, empirical investigations reveal that the core advantage of FGTS—bonus-driven optimism—is contingent upon high-fidelity sampling. In high-noise or large-scale neural regimes, excessive optimism can degrade performance. A plausible implication is that scalability to deep models is fundamentally limited by sampler fidelity and the stability of the modified posterior. Thus, FGTS serves as a robust, theoretically grounded baseline in medium-scale linear/logistic environments with exact or accurate approximate posteriors, but should be employed with caution in high-dimensional, low-sample accuracy scenarios.
FGTS and its smoothed/variance-aware extensions represent significant theoretical and empirical milestones in posterior sampling and exploration research. They establish a rigorous bridge between optimism-in-the-face-of-uncertainty and randomized exploration, highlight the necessity of sampling accuracy, and provide practical algorithmic pathways for a wide range of contextual decision-making problems.