Supervisory Prompt Training (SPT)

Updated 28 November 2025
  • Supervisory Prompt Training (SPT) is an automated strategy for optimizing prompts using dual-agent collaboration and soft prompt tuning.
  • It integrates discrete textual and embedding-based prompts to improve generalization, reduce hallucinations, and address low-resource scenarios.
  • Empirical evaluations show SPT delivers significant accuracy gains on benchmarks like GSM8K and TruthfulQA with enhanced parameter efficiency.

Supervisory Prompt Training (SPT) constitutes a family of automated strategies for prompt optimization in large neural language and speech models. Originating in response to the limitations of manual prompt engineering and parameter-intensive fine-tuning, SPT leverages both discrete (textual) and soft (embedding-based) prompt formulations to enhance model generalization, reduce hallucination, and address low-resource or code-switching scenarios with parameter efficiency. Current instantiations of SPT operate in both black-box settings—utilizing dual LLM agents for text prompt evolution—and white-box model access regimes, employing trainable soft prompts integrated with Transformer architectures for multilingual speech recognition.

1. Dual-Agent Supervisory Prompt Training for LLMs

SPT as proposed in "Supervisory Prompt Training" introduces a collaborative system comprising a generator LLM $G$ and a corrector LLM $C$. At each iteration, $G$ generates task responses using its current prompt $p_i$, identifying a set of errors $m_i$. $C$, initialized with its own meta-prompt $c_i$, observes $p_i$ and $m_i$ and synthesizes new prompt candidates $\{p_i^1, \dotsc, p_i^n\}$. These candidates are scored by re-applying $G$ on $m_i$, and the best candidate (maximizing accuracy on $m_i$) is selected as $p_{i+1}$. Optionally, $c_{i+1}$ is refined using feedback on the differences between $p_i$, $p_{i+1}$, and the associated errors.

This cycle continues until a stopping criterion (e.g., no further accuracy improvement, prompt convergence) is met. SPT operates over explicit, fully-interpretable prompts for LLMs and does not require gradient access, enabling compatibility with proprietary black-box APIs (Billa et al., 26 Mar 2024).
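
The loop can be sketched as follows. This is a minimal illustration in which the `llm` callable, the helper names, and the prompt templates are assumptions for exposition, not the authors' implementation.

```python
# Sketch of the dual-agent loop (SPT-pc variant). `llm` is any text-completion
# callable, e.g. a thin wrapper around a chat API; all helper names and prompt
# templates below are illustrative, not the paper's code.

def accuracy(prompt, examples, llm):
    """Fraction of (question, answer) pairs the generator G answers correctly."""
    correct = sum(llm(f"{prompt}\n\n{q}").strip() == a for q, a in examples)
    return correct / len(examples)

def mistakes(prompt, examples, llm):
    """Subset of examples that G currently gets wrong under `prompt`."""
    return [(q, a) for q, a in examples if llm(f"{prompt}\n\n{q}").strip() != a]

def supervisory_prompt_training(p, c, train, llm, n_candidates=4, max_iters=10):
    """Jointly evolve the generator prompt p and corrector meta-prompt c (SPT-pc)."""
    for _ in range(max_iters):
        m = mistakes(p, train, llm)                   # error set m_i of generator G
        if not m:
            break                                     # stop: no remaining errors
        # Corrector C synthesizes candidate prompts from its meta-prompt, p, and m.
        candidates = [llm(f"{c}\n\nCurrent prompt:\n{p}\n\nMistakes:\n{m}")
                      for _ in range(n_candidates)]
        # Score candidates by re-running G on the mistake set; keep the best one.
        best = max(candidates, key=lambda cand: accuracy(cand, m, llm))
        if accuracy(best, m, llm) <= accuracy(p, m, llm):
            break                                     # stop: no accuracy improvement
        # SPT-pc only: refine the meta-prompt using the observed prompt delta.
        c = llm(f"{c}\n\nThe prompt improved when changed from:\n{p}\nto:\n{best}\n"
                "Rewrite this meta-prompt so it proposes such improvements directly.")
        p = best
    return p, c
```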

Algorithmic Variants

Two main algorithmic regimes are formalized:

  • SPT-p: Only the generator's prompt $p$ is updated.
  • SPT-pc: Both the generator prompt $p$ and the corrector meta-prompt $c$ are jointly refined.

Extended variants include SPT-cot (where $C$ outputs chain-of-thought rationales) and SPT-imp (impact-score-guided editing).

2. Impact Scores and Sentence-Level Prompt Attribution

A central feature of SPT is the formal impact score $\Delta(s)$ for each sentence $s$ in a candidate prompt, defined as:

$\Delta(s) = \mathrm{Acc}\bigl(D_\text{train};\, p_k \oplus \{s\}\bigr) - \mathrm{Acc}\bigl(D_\text{train};\, p_k\bigr)$

where $p_k \oplus \{s\}$ denotes prompt $p_k$ with sentence $s$ appended. These scores quantify the incremental training-accuracy gain attributable to individual prompt components. In SPT-imp, $C$ incorporates the pairs $\{(s, \Delta(s))\}$, biasing the prompt generation process toward high-leverage instructions and demoting low- or negative-impact content (Billa et al., 26 Mar 2024).
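
A minimal sketch of this computation, reusing the hypothetical `accuracy` helper from the loop sketch above and assuming candidate sentences are supplied as a list:

```python
def impact_scores(p_k, candidate_sentences, train, llm):
    """Delta(s) = Acc(train; p_k ⊕ {s}) - Acc(train; p_k) for each sentence s.
    Reuses the hypothetical `accuracy` helper from the loop sketch above."""
    base = accuracy(p_k, train, llm)
    return {s: accuracy(f"{p_k} {s}", train, llm) - base
            for s in candidate_sentences}

# In SPT-imp, the (sentence, score) pairs are appended to the corrector's input
# so high-leverage sentences are favored and negative-impact ones are demoted.
```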

3. Methodological Comparison to Prior Prompt and Fine-Tuning Approaches

SPT distinguishes itself from gradient-based soft prompt methods (which require internal model access and produce non-interpretable embeddings) and prior LLM-based prompt engineering techniques (which lack dual-agent feedback and meta-prompt evolution).

Key differentiators:

  • Interpretability: SPT develops textual prompts amenable to direct human analysis.
  • Mutual Improvement: Both generator and corrector can improve, fostering stronger prompt search.
  • No Gradient Requirement: SPT functions in API-only settings.
  • Sentence-Level Attribution: Impact scores enable fine-grained, quantitative prompt assessment.

Tradeoffs include higher computational cost (due to repeated LLM calls) and risks of prompt overfitting with long or highly specific instructions (Billa et al., 26 Mar 2024).

4. Quantitative Evaluation and Experimental Findings

SPT was evaluated on multiple LLMs (GPT-3.5, GPT-4, Llama2-70b) and datasets (TruthfulQA, GSM8K, MMLU, MedQA-US). Notable results include a 28.3% absolute accuracy increase on GSM8K for GPT-4 (65.8% baseline to 94.1% for SPT-pc), and consistent outperformance over Automatic Prompt Optimization (APO), which achieved 68.8% on the same benchmark. On TruthfulQA, SPT-p yielded 89.6% against APO's 87.1%. Similar relative trends were observed across other benchmarks.

| Model (G) | Dataset    | Baseline | APO   | SPT-p | SPT-pc |
|-----------|------------|----------|-------|-------|--------|
| GPT-4     | GSM8K      | 65.8%    | 68.8% | 89.6% | 94.1%  |
| GPT-4     | TruthfulQA | 81.7%    | 87.1% | 89.6% | 87.1%  |
| GPT-4     | MMLU       | 79.7%    |       | 79.6% |        |
| GPT-4     | MedQA      | 78.4%    | 79.3% | 78.7% | 77.3%  |

The architecture enables a substantial reduction in LLM hallucination and improved generalization across tasks without traditional model fine-tuning (Billa et al., 26 Mar 2024).

5. Soft Prompt Tuning (SPT) in Multilingual Speech Recognition

In Whisper-based multilingual ASR, SPT refers to a parameter-efficient class of methods in which small blocks of learnable prompt embeddings are prepended to the input representations of frozen or partially tuned models (Yang et al., 16 Jun 2025, Yang et al., 16 Jun 2025).

Variants

  • Vanilla SPT: Single prompt matrix prepended to encoder and/or decoder inputs.
  • Deep Prompt Tuning (DPT): Prompts inserted at each Transformer block throughout both encoder and decoder (recommended for maximal gains); see the sketch after this list.
  • Residual Prompt Tuning (ResPT): Prompt banks parameterized via shared MLPs, improving convergence and parameter sharing.
  • Language Prompt Tuning (LPT): Employs pre-trained language embeddings as auxiliary prompts to disambiguate code-switching.
  • SPT4ASR (Hybrid): Concatenates DPT, ResPT, and LPT for further error reduction.
  • Entire SPT: Simultaneous prompting of both encoder and decoder, shown to outperform decoder-only SPT in language expansion tasks (Yang et al., 16 Jun 2025).
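
A minimal PyTorch sketch of the DPT idea referenced above, assuming a generic stack of Transformer blocks operating on (batch, sequence, embedding) tensors; the class name, prompt length, and initialization scale are illustrative and not the SPT-Whisper API.

```python
import torch
import torch.nn as nn

class DeepPromptEncoder(nn.Module):
    """Deep prompt tuning sketch: a separate block of learnable prompt vectors
    is injected at every layer of a frozen encoder stack (P-Tuning v2 style)."""

    def __init__(self, layers, n_prompt=16, embed_dim=512):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        for p in self.layers.parameters():
            p.requires_grad = False                       # backbone stays frozen
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(n_prompt, embed_dim) * 0.02) for _ in layers])
        self.n_prompt = n_prompt

    def forward(self, x):                                 # x: (batch, seq, embed)
        batch = x.size(0)
        for i, (layer, prompt) in enumerate(zip(self.layers, self.prompts)):
            p = prompt.expand(batch, -1, -1)
            if i == 0:
                x = torch.cat([p, x], dim=1)              # prepend prompt at the input
            else:
                x = torch.cat([p, x[:, self.n_prompt:]], dim=1)  # swap in this layer's prompt
            x = layer(x)
        return x[:, self.n_prompt:]                       # strip prompt positions

# Example with standard PyTorch encoder layers standing in for a Whisper encoder.
blocks = [nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
          for _ in range(4)]
model = DeepPromptEncoder(blocks, n_prompt=16, embed_dim=512)
out = model(torch.randn(2, 100, 512))                     # -> (2, 100, 512)
```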

Formulation

If $X \in \mathbb{R}^{l \times e}$ is the encoder input, SPT prepends a prompt $P \in \mathbb{R}^{n \times e}$, giving $X' = [P; X]$. In the decoder, prompts $P'$ are prepended analogously.

The training objective (autoregressive) is:

$L(P) = -\sum_{(X,y) \in D} \sum_{t=1}^{T} \log p\bigl(y_t \mid y_{<t}, [P; X], \theta\bigr)$

where in vanilla SPT $\theta$ is frozen and only $P$ is updated; in full fine-tuning, both are trained (Yang et al., 16 Jun 2025, Yang et al., 16 Jun 2025).
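
A minimal PyTorch sketch of the vanilla formulation, in which only the prompt parameters receive gradients; the `backbone(enc_embeds, dec_embeds) -> logits` interface is an assumed stand-in for a Whisper-like seq2seq model, not the actual Whisper or SPT-Whisper API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaSoftPrompt(nn.Module):
    """Vanilla SPT sketch: learnable prompts P (encoder) and P' (decoder) are
    prepended to the embedded inputs of a frozen seq2seq backbone. The backbone
    is assumed to take pre-computed embeddings, handle the causal shift
    internally, and return per-token logits."""

    def __init__(self, backbone, n_prompt=128, embed_dim=512):
        super().__init__()
        self.backbone = backbone
        for param in self.backbone.parameters():
            param.requires_grad = False                           # theta frozen
        self.enc_prompt = nn.Parameter(torch.randn(n_prompt, embed_dim) * 0.02)  # P
        self.dec_prompt = nn.Parameter(torch.randn(n_prompt, embed_dim) * 0.02)  # P'
        self.n_prompt = n_prompt

    def forward(self, enc_embeds, dec_embeds, labels):
        b = enc_embeds.size(0)
        enc_in = torch.cat([self.enc_prompt.expand(b, -1, -1), enc_embeds], dim=1)  # X' = [P; X]
        dec_in = torch.cat([self.dec_prompt.expand(b, -1, -1), dec_embeds], dim=1)
        logits = self.backbone(enc_in, dec_in)                    # (B, n_prompt + T, V)
        logits = logits[:, self.n_prompt:]                        # keep target positions only
        # L(P) = -sum_t log p(y_t | y_<t, [P; X], theta)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               labels.reshape(-1))
```

Only `enc_prompt` and `dec_prompt` would be handed to the optimizer (e.g. AdamW at the $10^{-3}$ prompt learning rate noted in Section 7), leaving the backbone untouched.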

6. Empirical Performance and Comparative Efficiency

On SEAME and ASRU2019 (code-switching ASR), SPT-augmented Whisper approaches the performance of LoRA and full fine-tuning at substantially lower parameter budgets. DPT and SPT4ASR achieve mix error rates (MER) within 1–2% of full-model fine-tuning while adding only 0.2–3.7M parameters versus 240M for FFT. Catastrophic forgetting on high-resource monolingual tasks is minimal for SPT and LoRA, while full fine-tuning yields pronounced degradation (Yang et al., 16 Jun 2025).

| Method      | Params Added (M) | SEAME MER (%) (dev_man / dev_sge) | ASRU2019 MER (%) |
|-------------|------------------|-----------------------------------|------------------|
| FFT         | 240.6            | 13.37 / 19.42                     | 12.92            |
| LoRA        | 1.85             | 13.96 / 20.59                     | 13.00            |
| SPT4ASR     | 3.74             | 15.48 / 21.98                     | 13.12            |
| Vanilla SPT | 0.20             | 21.95 / 27.39                     | 15.60            |

Whole-model SPT and LoRA both preserve near-baseline performance on monolingual datasets, contrasting with FFT's catastrophic forgetting (Yang et al., 16 Jun 2025).

7. Practical Implementation and Guidelines

  • Prompt Length: $n = 128$ is optimal for combined encoder/decoder prompts; shorter lengths underfit, while longer ones exceed the context window.
  • Insertion Depth: DPT mandates prompt injection at every Transformer block.
  • Learning Rate: $10^{-3}$ for prompts; $10^{-6}$ if jointly fine-tuning backbone parameters.
  • Training Protocols: 10 epochs, batch size 8, for datasets spanning 100–200 hours.
  • Parameter and Bandwidth Efficiency: SPT uses ≈1/50–1/100 of the FFT parameter count, with slightly higher computational cost than LoRA but marked gains in generalization (see the configuration sketch below).
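
A sketch collecting these settings into a single configuration object; the field names are illustrative and not taken from SPT-Whisper.

```python
from dataclasses import dataclass

@dataclass
class SPTConfig:
    """Hyperparameters mirroring the guidelines above (illustrative field names)."""
    prompt_length: int = 128     # combined encoder/decoder prompt tokens (n = 128)
    deep_prompting: bool = True  # DPT: inject prompts at every Transformer block
    prompt_lr: float = 1e-3      # learning rate for prompt embeddings
    backbone_lr: float = 1e-6    # only relevant when jointly tuning the backbone
    epochs: int = 10
    batch_size: int = 8          # for datasets spanning roughly 100-200 hours
```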

SPT-Whisper, an open-source PyTorch library, supplies hooks for all SPT variants, P-Tuning v2, LAPT, LoPT, and ResMLP modules, supporting continual learning with minimal memory and latency overhead (≈0.07% increment per new language; +1–2% inference time; +50 MB RAM per three new languages) (Yang et al., 16 Jun 2025).


Supervisory Prompt Training thus unifies adaptive, automated prompt optimization (for LLMs) and parameter-efficient soft prompt specialization (for ASR) under a rigorous, scalable paradigm that addresses both interpretability and resource constraints in modern multilingual and multi-task foundation models (Billa et al., 26 Mar 2024, Yang et al., 16 Jun 2025, Yang et al., 16 Jun 2025).
