Enriched Stack Traces for Advanced Debugging

Updated 20 September 2025
  • Enriched stack traces are extended runtime call chains that integrate additional contextual data such as invocation details, error messages, and user interactions.
  • They utilize methodologies like program instrumentation, probabilistic modeling, and deep learning to improve error categorization, duplicate detection, and trace comparison.
  • Applications include faster fault localization, enhanced crash report deduplication, and automated fault diagnosis, thereby boosting software maintenance and testing efficiency.

Enriched stack traces are extensions of traditional stack traces that combine runtime call-chain information with additional contextual data, structural enhancements, or statistical modeling to improve software debugging, regression testing, crash deduplication, and automated fault diagnosis. By augmenting each trace element with auxiliary data such as the invocation context, weighted frame importance, user interaction signals, or deep model representations, enriched stack traces facilitate deeper error categorization, more rapid root cause localization, and increased automation in large-scale systems. Their deployment spans methodologies from program instrumentation to deep learning, information retrieval, and probabilistic topic modeling, and is foundational in modern software quality assurance.

1. Conceptual Foundations of Enriched Stack Traces

The extension of the stack trace paradigm centers on embedding supplementary information within each trace element. In classical regression testing frameworks—such as POI (Point of Interest) testing (Pérez et al., 2018)—a trace element (TE) is typically a tuple (POI, value). Enrichment transforms this into a triplet (POI, value, ai), where ai is an additional-information mapping, most notably a stack trace captured via runtime mechanisms (e.g., Erlang’s erlang:get_stacktrace/0). Beyond this, ai may store call arguments, error messages, execution contexts, or dynamic data from production traces.

This enrichment allows developers to directly associate each observed value with its complete invocation context. The resulting trace is then exploited through custom functions, such as the value-extraction function VEF({POI, V, AI}) = {V, dict:fetch(st, AI)}, and through trace-element comparisons that distinguish errors arising from identical versus divergent stack traces.
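
To make the shape of an enriched trace element concrete, the following minimal Python sketch models an element as a (poi, value, ai) triplet and classifies a value discrepancy by whether the captured stack traces also diverge. It is an illustrative analogue of the Erlang machinery described above, not the POI-testing implementation; the tuple layout and the ai["st"] key are assumptions.

```python
# Minimal sketch of enriched trace elements (hypothetical representation):
# each element is (poi, value, ai), with ai["st"] holding the captured stack trace.

def vef(trace_element):
    """Value-extraction function: keep the observed value and its stack trace."""
    poi, value, ai = trace_element
    return value, ai["st"]

def compare(expected, observed):
    """Classify a discrepancy by whether the stack traces also diverge."""
    (v_old, st_old), (v_new, st_new) = vef(expected), vef(observed)
    if v_old == v_new:
        return "equal"
    return ("diff_value_same_stack_trace" if st_old == st_new
            else "diff_value_diff_stack_trace")

# Same call chain but a different value points at the callee's logic rather
# than at a divergent execution path.
old = ("mod:f/1", 42, {"st": ["mod:f/1", "mod:main/0"]})
new = ("mod:f/1", 41, {"st": ["mod:f/1", "mod:main/0"]})
print(compare(old, new))  # diff_value_same_stack_trace
```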

2. Methodologies for Trace Enrichment and Context Capture

Approaches to stack trace enrichment are diverse, bridging static and dynamic analysis, probabilistic topic modeling, and representation learning:

  • Instrumentation and Transformation: Code is instrumented so that, at every POI, both value and stack trace are captured and transmitted (e.g., tracer ! {add, POI, fvref, fv, ST}). Manual stack trace construction, including insertion of markers (“begin”/“end” around function calls), ensures hierarchical fidelity even in the presence of last-call optimizations.
  • Probabilistic Modeling: Hierarchical topic models, such as the Nested Hierarchical Dirichlet Process (NHDP) (Chen et al., 2019), treat enriched stack traces and user interactions as “words” within a tree-structured usage context model. This enables mining of co-occurring behaviors and exception patterns in large-scale telemetry.
  • Similarity Metrics and Machine Learning: TF-IDF–based weighting schemes and modified Levenshtein distances, as in TraceSim (Vasiliev et al., 2020), demote frequent frames, weight frames by position, and yield normalized similarity scores for near-duplicate trace detection in crash triage (a weighted-distance sketch follows this list).
  • Deep Learning and Embedding Models: Neural architectures including Siamese biLSTM encoders (S3M (Khvorov et al., 2021)), transformer-adapted networks (dedupT (Mamun et al., 26 Aug 2025)), and ensemble HMMs (EnHMM (Islam et al., 2021)) compute representations that capture sequential, contextual, and semantic interdependencies across stack frames. Byte Pair Encoding (BPE) and cross-encoder reranking further improve accuracy on large industrial datasets (SlowOps (Shibaev et al., 19 Dec 2024)).
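
As an illustration of the frame-weighting idea behind this family of similarity metrics, the sketch below computes a normalized, weight-sensitive edit distance between two stack traces in Python. It is a simplified stand-in rather than TraceSim itself: the decay exponent alpha, the doc_freq corpus statistics, and the exact weight formula are assumptions made for illustration.

```python
import math

# Hedged sketch of a TraceSim-style similarity: per-frame weights combine an
# IDF term (rare frames matter more) and a positional decay (frames near the
# top of the stack matter more); a weighted edit distance is normalized to [0, 1].

def frame_weight(frame, pos, doc_freq, n_traces, alpha=0.5):
    idf = max(math.log(n_traces / (doc_freq.get(frame, 0) + 1)), 0.0)
    positional = 1.0 / (pos + 1) ** alpha  # pos 0 = top of the stack
    return idf * positional

def similarity(trace_a, trace_b, doc_freq, n_traces):
    wa = [frame_weight(f, i, doc_freq, n_traces) for i, f in enumerate(trace_a)]
    wb = [frame_weight(f, i, doc_freq, n_traces) for i, f in enumerate(trace_b)]
    n, m = len(trace_a), len(trace_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + wa[i - 1]
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + wb[j - 1]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if trace_a[i - 1] == trace_b[j - 1] else wa[i - 1] + wb[j - 1]
            d[i][j] = min(d[i - 1][j] + wa[i - 1],   # drop a frame of trace_a
                          d[i][j - 1] + wb[j - 1],   # drop a frame of trace_b
                          d[i - 1][j - 1] + sub)     # match or substitute
    total = sum(wa) + sum(wb)
    return 1.0 - d[n][m] / total if total else 1.0

# Hypothetical corpus statistics: "parse_json" is rare, so disagreement near it
# is penalized more than disagreement on ubiquitous frames.
doc_freq = {"main": 100, "dispatch": 80, "parse_json": 3}
print(similarity(["parse_json", "dispatch", "main"],
                 ["parse_json", "handle", "main"], doc_freq, n_traces=100))  # ≈ 0.65
```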

3. Applications in Debugging, Deduplication, and Regression Testing

Enriched stack traces enable:

  • Earlier and More Precise Error Localization: By categorizing discrepancies as “diff_value_same_stack_trace” or “diff_value_diff_stack_trace” and including invocation context, developers quickly identify whether bugs originate at the call site, in arguments, or within called functions (Pérez et al., 2018).
  • Crash Report Deduplication: Deduplication systems such as TraceSim, S3M, the Aggregation Model (Karasov et al., 2022), and dedupT leverage enriched stack traces to cluster crash reports, combining advanced similarity metrics, temporal features (e.g., w(S_q, S_g) = 1/[\log(|T_q - T_g| + 1) + 1]), and deep neural models to increase recall and ranking accuracy by up to 15 percentage points over the state of the art (a temporal-weighting sketch follows this list).
  • Automated Fault Diagnosis and Repair: LLM fine-tuning on mutation-generated stack traces (Jambigi et al., 29 Jan 2025) allows direct inference of fault locations from crash logs without explicit test failures or source code context, achieving up to 74% localization accuracy on various codebases.
  • Hierarchical Exception Modeling: NHDP-based models (Chen et al., 2019) extract hierarchies of usage contexts, enriching stack traces by mapping exceptions to their triggering user actions—revealing systemic behavior patterns.
  • Ensemble Predictive Systems: EnHMM exploits the sequential nature of stack trace call order to predict bug report field reassignments, outperforming classic ML approaches on F-measure and recall.
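
The temporal term cited above can be sketched directly. The following Python snippet weights a candidate duplicate group by the time gap between the query report and the group, then damps a content-similarity score with it; how dedupT and the Aggregation Model actually combine these signals is more involved, so the combination rule here is an assumption.

```python
import math

# Temporal proximity weight w(S_q, S_g) = 1 / (log(|T_q - T_g| + 1) + 1):
# groups seen close in time to the query report keep weights near 1,
# stale groups decay toward 0.

def temporal_weight(t_query: float, t_group: float) -> float:
    return 1.0 / (math.log(abs(t_query - t_group) + 1.0) + 1.0)

def score(content_similarity: float, t_query: float, t_group: float) -> float:
    """Illustrative ranking score: similarity damped by temporal distance."""
    return content_similarity * temporal_weight(t_query, t_group)

# Two equally similar groups: the recently active one ranks higher.
print(score(0.9, t_query=1_000.0, t_group=990.0))  # recent group, higher score
print(score(0.9, t_query=1_000.0, t_group=10.0))   # stale group, lower score
```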

4. Impact on Software Maintenance, Testing, and Developer Productivity

Enriched stack trace methodologies have been empirically demonstrated to:

  • Reduce the manual overhead in crash report triage (dedupT (Mamun et al., 26 Aug 2025), S3M (Khvorov et al., 2021), Aggregation Model (Karasov et al., 2022)).
  • Accelerate debugging cycles by highlighting rare, diagnostic frames using IDF-based heuristics (Khvorov et al., 12 Jan 2025); frames with \mathrm{IDF}(f) = \log[N / (n_f + 1)] surface contextually relevant clues robust to corpus scale (an IDF sketch follows this list).
  • Allow regression testers and maintainers to iterate on error sources more efficiently by categorizing unexpected behaviors and tracing divergent execution paths (Pérez et al., 2018).
  • Enable domain-specific error analysis in ML applications (Ghadesi et al., 2023), where recurring trace patterns yield actionable taxonomies for resolving exceptions in sensitive domains such as data transformation or parallelization.
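
A minimal sketch of the IDF heuristic referenced above, in Python: frames that occur in few traces of a corpus receive a high IDF and are surfaced as likely diagnostic clues. The corpus layout, the frame names, and the top-k cutoff are illustrative assumptions.

```python
import math
from collections import Counter

def frame_idf(corpus):
    """IDF(f) = log(N / (n_f + 1)) over a corpus of stack traces (lists of frames)."""
    n = len(corpus)
    doc_freq = Counter(frame for trace in corpus for frame in set(trace))
    return {frame: math.log(n / (df + 1)) for frame, df in doc_freq.items()}

def highlight(trace, idf, top_k=3):
    """Return the top_k rarest (most informative) frames of a single trace;
    frames never seen in the corpus are treated as maximally informative."""
    return sorted(trace, key=lambda f: idf.get(f, math.inf), reverse=True)[:top_k]

corpus = [
    ["app.handler", "lib.json.parse", "std.io.read"],
    ["app.handler", "lib.http.get", "std.io.read"],
    ["app.handler", "billing.tax.compute", "std.io.read"],
]
idf = frame_idf(corpus)
print(highlight(corpus[2], idf))  # rare frame 'billing.tax.compute' ranks first
```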

5. Technical Trade-offs and Performance Significance

Enrichment introduces computational and methodological trade-offs:

  • The enhancement of trace elements increases storage and processing costs, but yields substantial gains in recall, mean reciprocal rank (MRR), and precision (e.g., SBEST reports MAP and MRR improvements of 32.22% and 17.43%, respectively, over conventional rankings (Pacheco et al., 1 May 2024)).
  • Deep learning approaches, while accurate, may require substantial infrastructure (embedding-model precomputation, reranking latency) but achieve real-time performance at scale (e.g., 8.7 ms per report for embedding models and 144.5 ms with a reranker (Shibaev et al., 19 Dec 2024)).
  • The complexity of temporal and semantic aggregation (e.g., “Parametric Max-Mean” in dedupT) is justified by significant gains in unique crash detection, duplicate ranking, and overall triage quality (a generic aggregation sketch follows this list).
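
For the aggregation point above, the snippet below shows a generic max/mean interpolation over pairwise query-to-group similarities in Python. It only conveys the general idea of parametric aggregation; the actual “Parametric Max-Mean” scheme in dedupT is not reproduced here, and alpha is an illustrative knob.

```python
def aggregate(pairwise_sims, alpha=0.7):
    """Interpolate between the best single match (max) and the overall fit (mean)
    of a query report against an existing crash-report group."""
    if not pairwise_sims:
        return 0.0
    best = max(pairwise_sims)
    mean = sum(pairwise_sims) / len(pairwise_sims)
    return alpha * best + (1.0 - alpha) * mean

# A group containing one near-identical report still ranks high even if its
# other members differ.
print(aggregate([0.95, 0.40, 0.35]))  # ≈ 0.84
```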

6. Future Prospects, Data Release, and Research Directions

Emerging directions in enriched stack trace research build on a broad consensus of strong impact for contemporary software engineering, with enriched techniques improving accuracy, speed, and developer assistance across debugging, maintenance, and reliability engineering. The sustained release of code, data, and open frameworks is expected to catalyze continued advances in trace-based error diagnosis and automation.
