
Two-Pass Lexical Correction

Updated 26 December 2025
  • Two-pass lexical correction is defined as a cascaded system where an initial module generates hypotheses and a secondary module refines them using rich linguistic context.
  • It improves error localization and correction in applications like ASR, speaker diarization, transliteration, and grammatical error correction by decoupling generation and correction tasks.
  • Key methodologies include speaker-feature fusion, hypothesis rescoring with LLMs, and span detection techniques, which together boost computational efficiency and correction precision.

Two-pass lexical correction refers to a family of cascaded architectures for detecting and correcting lexical errors, typically through a modular sequence of specialized subsystems. In its canonical form, the first pass identifies or generates hypothesized outputs (with potential errors) from noisy text or speech; the second pass applies a dedicated lexical correction model—often conditioned on first-pass outputs and rich linguistic context—to produce a refined final result. This paradigm enables error localization and modular error correction, and leverages explicit linguistic or contextual signals unavailable to the initial system, yielding state-of-the-art performance in domains such as speech recognition, speaker diarization, transliteration, and grammatical error correction.

1. General Principles and Canonical Frameworks

Two-pass lexical correction exploits architectural separation between an upstream generator (e.g., ASR, transliteration, or span tagging) and a dedicated lexical corrector. The prototypical design adopts the following workflow:

  1. First pass: Generate a hypothesis (ASR transcript, speaker-labeled transcript, transliteration, or span-annotated text) using a base model.
  2. Second pass: Apply a lexical correction module, conditioned on both the original hypothesis and auxiliary signals (e.g., speaker IDs, system scores, multiple hypotheses), to produce corrected output.
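The two-step workflow above can be sketched as a generic pipeline; the function names and stub components below are illustrative, not taken from any of the cited systems:

```python
from typing import Callable, List

def two_pass_correct(
    source: str,
    first_pass: Callable[[str], List[str]],        # base model: source -> N-best hypotheses
    second_pass: Callable[[str, List[str]], str],  # corrector: (top hyp, auxiliary context) -> output
) -> str:
    """Generic two-pass skeleton: generate hypotheses, then refine the top one
    conditioned on the full hypothesis list as auxiliary context."""
    hypotheses = first_pass(source)                 # pass 1: hypothesis generation
    return second_pass(hypotheses[0], hypotheses)   # pass 2: context-aware correction

# Toy instantiation with stand-in components:
gen = lambda s: [s.replace("teh", "the"), s]   # stand-in "base model"
fix = lambda best, ctx: best.strip()           # stand-in "lexical corrector"
print(two_pass_correct(" teh cat ", gen, fix)) # -> "the cat"
```

The point of the skeleton is that the two callables can be trained, evaluated, and swapped independently, which is exactly the modularity argument made below.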

Key instantiations, each detailed in the following section, include lexical speaker correction, multi-pass generative error correction for ASR, two-pass transliteration with post-correction, and span-based grammatical error correction.

This modularity enforces clear subgoals—localize vs. correct—which can be optimized and evaluated independently, enhancing interpretability and computational efficiency.

2. Architectures and Model Variants

Lexical Speaker Correction

The "Speaker Error Corrector" (SEC) (Paturi et al., 2023) operates over the output of a traditional ASR system paired with acoustics-only speaker diarization. Its architecture involves:

  • Input: Sequence of ASR words W_1…W_N with speaker IDs S_1…S_N.
  • Embedding: Tokenization into subwords; contextual embeddings via RoBERTa-base.
  • Speaker-feature fusion: Word-level speaker one-hots are mapped onto token-aligned vectors, which are concatenated to LM outputs.
  • Transformer front-end: A shallow encoder produces per-token logits over speaker assignments.
  • Inference: A sliding 30-word window corrects speaker labels when exactly two speakers are present, preserving robustness to local ambiguity.
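The speaker-feature fusion step can be illustrated with a minimal pure-Python sketch; real systems operate on RoBERTa tensor embeddings, and the function and variable names here are hypothetical:

```python
def fuse_speaker_features(token_embeddings, token_speakers, num_speakers=2):
    """Concatenate each token's contextual embedding (e.g. from RoBERTa-base)
    with a one-hot encoding of its first-pass speaker label, producing the
    token-aligned vectors fed to the shallow transformer encoder."""
    fused = []
    for emb, spk in zip(token_embeddings, token_speakers):
        one_hot = [1.0 if i == spk else 0.0 for i in range(num_speakers)]
        fused.append(list(emb) + one_hot)   # token-aligned concatenation
    return fused

# Two tokens with 2-dim embeddings and first-pass speakers 0 and 1:
fused = fuse_speaker_features([[0.1, 0.2], [0.3, 0.4]], [0, 1])
# each fused vector has (embedding dims + num_speakers) dimensions
```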

Multi-Pass Generative Error Correction for ASR

The "Multi-Pass Augmented Generative Error Correction" (MPA GER) pipeline for Japanese ASR (Ko et al., 2024) features:

  • First pass: ASR generates N-best hypotheses; each is independently corrected by an LLM, producing candidate outputs scored by both ASR and LLM log-probabilities.
  • Second pass: Candidates are pooled with originals, rescored using an additional LLM and ROVER voting, and a weighted scoring function combines all sources.
  • Final selection: The candidate maximizing the aggregate score is selected as the final transcription.
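The final selection step can be sketched as a weighted combination of the evidence sources listed above, assuming log-probability scores and a ROVER-style vote count per candidate; the weighting scheme and all names are illustrative, not the paper's exact formulation:

```python
import math

def aggregate_score(asr_lp, llm_lp, rescore_lp, votes,
                    weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of ASR log-prob, correcting-LLM log-prob, second-pass
    rescoring log-prob, and (log-compressed) ROVER vote count.
    Weights here are placeholders; in practice they are tuned on dev data."""
    w_asr, w_llm, w_res, w_vote = weights
    return (w_asr * asr_lp + w_llm * llm_lp
            + w_res * rescore_lp + w_vote * math.log1p(votes))

def select_best(candidates):
    """candidates: list of (text, asr_lp, llm_lp, rescore_lp, votes)."""
    return max(candidates, key=lambda c: aggregate_score(*c[1:]))[0]

best = select_best([("hyp_a", -5.0, -4.0, -3.0, 1),
                    ("hyp_b", -1.0, -1.0, -1.0, 3)])
# "hyp_b" wins on both higher log-probabilities and more votes
```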

Two-Pass Transliteration and Post-Correction

For Judeo-Arabic, transliteration to Arabic script proceeds in two stages (Gonzalez et al., 7 Jul 2025):

  • First pass: Deterministic character-level mapping based on script-specific correspondences and diacritic handling.
  • Second pass: Post-correction treats output as "noisy Arabic," applying pretrained GEC systems (either seq2seq Transformers or the SWEET edit tagger) to clean orthographic and lexical errors.
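The deterministic first pass can be sketched as a character-table lookup; the abbreviated Hebrew-to-Arabic mapping below is illustrative and far smaller than the actual CharMapper table:

```python
# Hypothetical (abbreviated) character table; the real system covers the
# full Judeo-Arabic orthography, including diacritic handling.
CHAR_MAP = {"א": "ا", "ב": "ب", "ד": "د", "ר": "ر"}

def first_pass_transliterate(text: str) -> str:
    """Pass 1: deterministic character-level mapping. Unmapped characters
    pass through unchanged; the second-pass GEC model is responsible for
    cleaning whatever 'noisy Arabic' this produces."""
    return "".join(CHAR_MAP.get(ch, ch) for ch in text)
```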

Erroneous Span Detection and Correction for GEC

The span-based GEC system (Chen et al., 2020) decomposes correction:

  • Pass 1: Sequence tagging to identify erroneous spans via B/I/E/S/O schema.
  • Pass 2: Seq2seq models correct only the marked spans, with final output assembled by splicing corrections into the original.
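The splice-and-assemble step of Pass 2 can be sketched as follows (function and variable names are illustrative):

```python
def splice_corrections(tokens, spans, corrections):
    """Pass-2 assembly: replace each detected erroneous span [start, end)
    with its seq2seq correction, copying all other tokens verbatim.
    `spans` are non-overlapping, sorted (start, end) index pairs from the
    pass-1 tagger; `corrections` is the matching list of corrected token lists."""
    out, cursor = [], 0
    for (start, end), fix in zip(spans, corrections):
        out.extend(tokens[cursor:start])  # copy the untouched region
        out.extend(fix)                   # splice in the correction
        cursor = end
    out.extend(tokens[cursor:])           # copy the tail
    return out

splice_corrections(["He", "go", "to", "school"], [(1, 2)], [["goes"]])
# -> ["He", "goes", "to", "school"]
```

Because only the marked spans pass through the seq2seq model, inference cost scales with the amount of detected error rather than sentence length, which underlies the speedups reported below.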

3. Training Objectives and Data Strategies

Training objectives are tightly coupled to each pass's function:

  • Speaker correction: Cross-entropy loss over per-token speaker assignments, with label corruption simulating ASR and diarization noise for robustness (error-simulation curriculum) (Paturi et al., 2023).
  • ASR error correction: LLM fine-tuning via next-token prediction, possibly using LoRA adaptation, with scoring terms for ASR, LLM, secondary LM, and voting (Ko et al., 2024).
  • Transliteration post-correction: Sequence-to-sequence loss on aligned "noisy→gold" pairs; label smoothing and synthetic data augmentation enforce generalizability (Gonzalez et al., 7 Jul 2025).
  • Erroneous span detection/correction: Cross-entropy over span tags; seq2seq GEC with label-smoothed cross-entropy over only the detected error regions (Chen et al., 2020).

Pretraining and data augmentation strategies—such as simulated corruptions, synthetic noise injection, and back-translation—are extensively applied to improve model robustness beyond the core training data.
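The label-corruption idea behind the error-simulation curriculum can be sketched as follows; the flip probability and helper name are hypothetical, not values from the paper:

```python
import random

def corrupt_speaker_labels(labels, flip_prob=0.1, num_speakers=2, seed=0):
    """Randomly flip a fraction of gold speaker labels so the corrector
    learns to undo diarization-like noise during training."""
    rng = random.Random(seed)
    corrupted = []
    for lab in labels:
        if rng.random() < flip_prob:
            # flip to a different (wrong) speaker, simulating a diarization error
            corrupted.append(rng.choice([s for s in range(num_speakers) if s != lab]))
        else:
            corrupted.append(lab)
    return corrupted
```

The model is then trained to map the corrupted sequence back to the gold labels, which is the cross-entropy objective described for speaker correction above.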

4. Evaluation Metrics and Empirical Performance

Evaluation metrics are tailored to the lexical nature of corrections and system goals.

| Task | Metric | Baseline | Post-Correction | Typical Gains |
|---|---|---|---|---|
| Speaker diarization (Paturi et al., 2023) | Word-level DER (WDER) | 2.26% (Fisher) | 1.53% (SimSEC_v2→RealSEC) | 15–32% rel. reduction |
| Japanese ASR (Ko et al., 2024) | Character Error Rate (CER) | 12.91% (SPREDS) | 7.07% (MPA GER, SPREDS) | 40–50% rel. reduction |
| Judeo-Arabic transliteration (Gonzalez et al., 7 Jul 2025) | MaxMatch F0.5 | 40.4% (CharMapper, dotted) | 62.3% (SWEET); 63.1% (GPT-4o) | +20 F0.5 points |
| English GEC (Chen et al., 2020) | F0.5 (CoNLL-14) | 64–66 (seq2seq) | 63–65 (span-based, ESD+ESC) | ≈ equal, 2× faster |

Metrics such as WDER (jointly penalizing word and speaker errors), CER, and F0.5 (precision-weighted edit accuracy) dominate in these studies. Error reductions of 15–50% relative are typical with two-pass architectures, particularly in challenging regimes (e.g., high ASR error, ambiguous scripts) (Paturi et al., 2023, Ko et al., 2024, Gonzalez et al., 7 Jul 2025).
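The relative reductions reported in the table follow directly from the baseline and post-correction rates; a quick check:

```python
def relative_reduction(baseline, corrected):
    """Relative error-rate reduction, in percent."""
    return 100.0 * (baseline - corrected) / baseline

relative_reduction(12.91, 7.07)  # ≈ 45.2% for the Japanese ASR CER row
relative_reduction(2.26, 1.53)   # ≈ 32.3% for the speaker-diarization WDER row
```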

5. Ablations, Limitations, and Domain Adaptation

Ablation studies isolate contributions of each pass and architectural feature:

  • Speaker error correction: Fine-tuning LM on conversational text yields 0.1–0.2 point WDER improvements; correction saturates quickly with limited paired data, indicating robustness and sample efficiency (Paturi et al., 2023).
  • ASR MPA GER: ROVER voting plus LM-based rescoring is critical for suppressing LLM hallucinations; benefit persists even when one component system is weak or noisy (Ko et al., 2024).
  • Transliteration: Availability of script-specific diacritics (e.g., Hebrew upper-dot) yields significant accuracy gains. Each pipeline stage adds ~20 F0.5 points; dotless input leads to a marked drop (Gonzalez et al., 7 Jul 2025).
  • GEC: Two-pass span systems achieve comparable F0.5 to monolithic seq2seq baselines with <50% inference time. Modularity enables explicit localization and efficient correction (Chen et al., 2020).

Limitations include error propagation across passes (especially undetected spans in GEC), suboptimal handling of global edits, and reliance on closed-source or non-reproducible LLMs in some domains. Downstream evaluations are sometimes performed on silver-standard or automatically-aligned references, and code-switching or domain adaptation beyond the core datasets remains an open challenge.

6. Theoretical and Practical Implications

Two-pass lexical correction architectures unify several disparate traditions in NLP: multi-stage rule-based systems, reranking, error detection/correction cascades, and LLM-based editing. By decoupling localization from correction, these systems exploit model specialization and open opportunities for statistical or neural voting. They demonstrate that:

  • Lexical context and explicit auxiliary signals (speaker IDs, scores, N-best lists) enhance error resolution beyond what is available to a monolithic system.
  • Post-correction dampens the impact of upstream errors, resolving acoustic or script ambiguities via contextualized language modeling.
  • Modular correction streams align trivially with downstream pipelines (e.g., morphosyntactic tagging, machine translation, further post-editing).

A plausible implication is that future low-resource and morphologically-rich domains will benefit disproportionately from two-pass correction schemes, especially when downstream tasks require high-fidelity lexical normalization or disambiguation. Further, the modular nature of these pipelines allows rapid adaptation, replacement, or fine-tuning of only the error-prone module rather than costly retraining of all system components.
