Instruction Following by Boosting Attention of Large Language Models (2506.13734v2)

Published 16 Jun 2025 in cs.CL, cs.AI, and cs.LG

Abstract: Controlling the generation of LLMs remains a central challenge to ensure their safe and reliable deployment. While prompt engineering and finetuning are common approaches, recent work has explored latent steering, a lightweight technique that alters LLM internal activations to guide generation. However, subsequent studies revealed latent steering's effectiveness to be limited, often underperforming simple instruction prompting. To address this limitation, we first establish a benchmark across diverse behaviors for standardized evaluation of steering techniques. Building on insights from this benchmark, we introduce Instruction Attention Boosting (InstABoost), a latent steering method that boosts the strength of instruction prompting by altering the model's attention during generation. InstABoost combines the strengths of existing approaches and is theoretically supported by prior work that suggests that in-context rule following in transformer-based models can be controlled by manipulating attention on instructions. Empirically, InstABoost demonstrates superior control success compared to both traditional prompting and latent steering.

Summary

  • The paper introduces InstABoost, a method that boosts LLM attention on instructions to improve control and task performance.
  • The method manipulates internal attention distributions to enhance in-context rule following with minimal computational cost.
  • Experimental results show InstABoost outperforms standard prompting and other latent steering techniques across diverse tasks.

Instruction Following via Attention Boosting in LLMs

Controlling the behavior of LLMs is crucial for their safe and reliable deployment. While prompt engineering and fine-tuning are common strategies, latent steering offers a lightweight alternative by manipulating internal activations. This paper introduces Instruction Attention Boosting (InstABoost), a novel latent steering method that amplifies the effect of instruction prompting by modulating the model's attention mechanism during generation. InstABoost leverages theoretical insights suggesting that attention on instructions governs in-context rule following in transformer models. Empirical results demonstrate that InstABoost achieves superior control compared to traditional prompting and existing latent steering techniques.

Background on Steering Methods

Steering methods aim to guide the behavior of generative models by encouraging desirable outputs and suppressing undesirable ones. These methods broadly fall into two categories: prompt-based steering and latent space steering. Prompt-based steering uses natural language instructions within the input prompt, while latent space steering directly intervenes on the model's internal representations during generation. Despite advancements in both categories, challenges remain in understanding their efficacy and limitations. The paper argues that the effectiveness of steering methods is closely tied to the task itself, and that simple prompt-based methods can be remarkably effective, especially when augmented with targeted adjustments to the model's internal processing.
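To ground the distinction, here is a toy sketch of the two families (our illustration, not any specific published method): prompt-based steering rewrites the input text, while latent steering shifts hidden activations along a derived steering vector during the forward pass.

```python
# Toy contrast between the two steering families (our illustration, not a
# specific published method). Prompt-based steering edits the input text;
# latent steering shifts hidden activations during the forward pass.
import torch

def prompt_steer(query: str, instruction: str) -> str:
    # Prompt-based: prepend a natural-language rule to the model input.
    return instruction + "\n" + query

def latent_steer(hidden: torch.Tensor, direction: torch.Tensor,
                 factor: float) -> torch.Tensor:
    # Latent: add a scaled steering vector to an internal activation.
    return hidden + factor * direction

# Usage: steer a single toy hidden state of dimension 8.
h = torch.randn(8)
v = torch.randn(8)  # in practice, a direction derived from contrastive data
print(prompt_steer("Write a review of this film.", "Respond positively."))
print(latent_steer(h, v, factor=4.0))
```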

Instruction Attention Boosting (InstABoost)

Figure 1: Illustration of InstABoost, which steers LLM behavior by increasing the attention mass on the tokens of a prepended instruction.

InstABoost is motivated by the observation that in-context rule following in transformer-based models can be controlled by manipulating attention on instructions. The approach treats instructions as in-context rules and boosts the LLM's attention to these rules to steer generations toward a target behavior. Given a tokenized instruction prompt $p = (p_1, \dots, p_K)$ of length $K$ and an input query $x = (x_1, \dots, x_L)$ of length $L$, the method first forms the combined input sequence $x' = p \oplus x = (p_1, \dots, p_K, x_1, \dots, x_L)$ of total length $N = K + L$. Within each Transformer layer $\ell$, InstABoost modifies the attention distribution $\alpha$ to increase the weights assigned to the prompt tokens, defining unnormalized boosted attention scores:

$$\beta_{ij} = \begin{cases} \alpha_{ij} \cdot M & \text{if } 0 \le j < K \\ \alpha_{ij} & \text{if } K \le j < N, \end{cases}$$

where $M > 1$ is a scalar boost factor tuned as a hyperparameter.

These scores are then re-normalized into a valid probability distribution, yielding the steered attention weights $\beta'$. The output of the attention mechanism at layer $\ell$ is computed from these steered weights and the unmodified value vectors $V$ as $a^\ell = \beta' V$. This amplifies the influence of the prepended prompt, effectively steering the model's output.
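To make the intervention concrete, here is a minimal sketch of the boost-and-renormalize step in PyTorch. The function name `boost_attention` and the toy tensors are our own illustration, not the authors' code; in the paper, this rescaling is applied to the attention weights inside every layer of an actual LLM.

```python
# Minimal sketch of the InstABoost intervention (not the authors' code).
# `boost_attention` and the toy tensors are ours; in the paper, this
# rescaling is applied to the attention weights inside every layer.
import torch

def boost_attention(alpha: torch.Tensor, K: int, M: float) -> torch.Tensor:
    """Boost attention mass on the first K (instruction) tokens.

    alpha: attention weights of shape (..., N_query, N_key), rows summing to 1.
    K:     number of prepended instruction tokens.
    M:     scalar boost factor (M > 1 increases attention on the instruction).
    """
    beta = alpha.clone()
    beta[..., :K] = beta[..., :K] * M             # scale instruction columns
    return beta / beta.sum(dim=-1, keepdim=True)  # renormalize each row

# Toy usage: one query position attending over K=3 instruction tokens
# followed by 2 query tokens.
alpha = torch.tensor([[0.1, 0.1, 0.1, 0.4, 0.3]])
beta_prime = boost_attention(alpha, K=3, M=5.0)
print(beta_prime)  # the instruction tokens now carry most of the mass
# The layer output would then be a = beta_prime @ V with unmodified values V.
```

Because only the attention weights are rescaled and renormalized, the intervention is a constrained re-weighting of information flow rather than an arbitrary edit to hidden states.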

Experimental Results

The paper presents a systematic evaluation of InstABoost, comparing it against instruction-only prompting and various latent steering methods across a suite of diverse tasks. The tasks range from generating less toxic completions to changing the sentiment of open-ended generations. The experiments were conducted using the Meta-Llama-3-8B-Instruct model, with hyperparameters selected via held-out validation to maximize task accuracy while maintaining high generation fluency (Figure 2).

Figure 2: InstABoost outperforms or matches all competing interventions. For each task, we show the accuracy of the model without intervention (red), the best-performing latent steering method (green), the instruction-only intervention (orange), and InstABoost (blue). Error bars show a standard deviation above and below the mean, computed by bootstrapping.
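As a rough illustration of the validation protocol described above, the following hypothetical sketch selects the boost factor by maximizing held-out accuracy subject to a fluency floor. The function `evaluate`, the candidate grid, and the threshold are all assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch of the held-out validation protocol: choose the boost
# factor that maximizes accuracy while keeping fluency above a floor.
# `evaluate`, the candidate grid, and the threshold are illustrative
# assumptions, not the paper's exact setup.
from typing import Callable, Tuple

def select_boost_factor(
    evaluate: Callable[[float], Tuple[float, float]],  # M -> (accuracy, fluency)
    candidates: Tuple[float, ...] = (1.5, 2.0, 4.0, 8.0, 16.0),
    min_fluency: float = 0.8,
) -> float:
    best_m, best_acc = 1.0, float("-inf")  # M = 1.0 leaves attention unchanged
    for m in candidates:
        accuracy, fluency = evaluate(m)    # validation run with boost factor m
        if fluency >= min_fluency and accuracy > best_acc:
            best_m, best_acc = m, accuracy
    return best_m
```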

The results demonstrate that InstABoost either outperforms or matches the strongest competing method across all tasks. In tasks where instruction prompting and latent steering performed similarly, InstABoost consistently performed well; in tasks where instruction prompting was superior to latent steering, InstABoost preserved and often enhanced this advantage. Notably, on jailbreaking tasks, where the default model and the instruction-only baseline had nearly zero accuracy, InstABoost achieved significantly higher accuracy than standard latent steering methods (Figure 3).

Figure 3: Unlike other latent steering methods, InstABoost maintains high generation fluency while increasing task accuracy. The figure shows the fluency score (left) and accuracy (right) versus varying steering factors for the latent steering methods on AdvBench. For the latent steering methods, we show the effect of varying the steering factor in the best-performing layer.

Comparison to Existing Steering Methods

The paper highlights a significant drawback of latent-only methods: their performance fluctuates considerably across tasks. In contrast, InstABoost achieves consistently strong performance across all task types, offering a more robust and reliable approach to model steering. The results also indicate that InstABoost maintains high generation fluency, unlike other latent steering methods, where increasing the steering factor to raise task accuracy often causes a sharp decline in fluency. This is because InstABoost intervenes on the attention mechanism, applying a more constrained re-weighting of information flow that better preserves the model's generative capabilities.

The paper discusses prior work on latent steering, including methods that apply a derived steering vector to model activations. It also addresses attention steering methods, such as those of Todd et al. (2024) and Zhang et al. (2024), which leverage attention mechanisms to steer model behavior. The paper notes that these approaches often involve a grid search over all attention heads across all layers, incurring substantial computational cost. In contrast, InstABoost's hyperparameter tuning cost is minimal and constant, regardless of model size (Figure 4).

Figure 4: Unlike head-based attention steering methods, InstABoost maintains a minimal and constant cost for hyperparameter tuning, regardless of model size. The plot displays the number of sample evaluations (y-axis) required for hyperparameter selection versus the total number of attention heads in a model (x-axis), assuming 100 validation samples.
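The asymmetry in Figure 4 follows from simple counting, sketched below. The candidate-grid size and head counts are our assumptions; the 100 validation samples follow the figure.

```python
# Back-of-the-envelope arithmetic behind Figure 4 (our assumptions: a grid
# of 5 candidate boost factors; head counts from public model configs).
VALIDATION_SAMPLES = 100  # as assumed in Figure 4

def head_based_cost(num_heads: int) -> int:
    # Head-based steering: one validation pass per attention head searched.
    return num_heads * VALIDATION_SAMPLES

def instaboost_cost(num_candidates: int = 5) -> int:
    # InstABoost: one validation pass per candidate boost factor,
    # independent of model size.
    return num_candidates * VALIDATION_SAMPLES

# Llama-3-8B has 32 layers x 32 attention heads = 1024 heads.
print(head_based_cost(32 * 32))  # 102400 sample evaluations
print(instaboost_cost())         # 500 sample evaluations
```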

Conclusion

The paper introduces InstABoost, a novel attention-based latent steering method that boosts attention on task instructions. The method is evaluated on a diverse benchmark with 6 tasks and is shown to outperform or match other latent steering methods and prompting on the tasks considered. InstABoost offers improved consistency across diverse task types and maintains high generation fluency. These findings suggest that guiding a model's attention can be an effective and efficient method for achieving more predictable LLM behavior, offering a promising direction for developing safer and more controllable AI systems.
