Applying ReLog to Anomaly Detection and Performance Monitoring

Determine how to apply ReLog, a runtime-feedback-driven framework for iterative logging-statement generation, to anomaly detection and performance monitoring tasks, and identify and evaluate any adaptations these settings require.

Background

The paper evaluates ReLog primarily on automated debugging tasks (defect localization and program repair) to measure whether generated logs improve downstream LLM-based debugging. Although this focus provides a rigorous benchmark for diagnostic utility, the authors note that other important log-driven applications include anomaly detection and performance monitoring.

Because ReLog’s iterative refinement loop and sufficiency evaluation are tailored to debugging, its generalizability to other operational tasks is not established in the current experiments. The authors explicitly state that extending ReLog to anomaly detection and performance monitoring remains an unresolved challenge, suggesting that further adaptations and validation are needed.
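One plausible adaptation is to keep the iterative loop (generate logging statements, execute the instrumented program, check whether the collected runtime output is sufficient, refeed the output as feedback) while swapping in a task-specific sufficiency evaluator, e.g. one driven by anomaly-detection utility rather than debugging utility. The sketch below is purely illustrative: none of these function names come from the paper, and the callables stand in for LLM generation, instrumentation, and evaluation components that would need real implementations.

```python
# Hypothetical sketch of a ReLog-style refinement loop with a pluggable,
# task-specific sufficiency evaluator. All names here are illustrative
# assumptions, not APIs from the ReLog paper.
from typing import Callable, List


def iterative_logging(
    generate_logs: Callable[[List[str]], List[str]],    # proposes logging statements from feedback
    run_and_collect: Callable[[List[str]], List[str]],  # runs instrumented code, returns log output
    is_sufficient: Callable[[List[str]], bool],         # task-specific check (debugging, anomaly
                                                        # detection, performance monitoring, ...)
    max_iters: int = 3,
) -> List[str]:
    feedback: List[str] = []
    statements: List[str] = []
    for _ in range(max_iters):
        statements = generate_logs(feedback)
        output = run_and_collect(statements)
        if is_sufficient(output):
            break
        feedback = output  # runtime output becomes feedback for the next round
    return statements
```

Under this decomposition, only `is_sufficient` would need to change per task: for anomaly detection it might score whether a downstream detector can separate normal from anomalous runs given the logs, instead of whether an LLM debugger can localize a defect.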

References

"While application to anomaly detection or performance monitoring remains an open challenge, researchers can readily adapt our framework to these tasks or use it to automatically curate high quality logging datasets."

Logging Like Humans for LLMs: Rethinking Logging via Execution and Runtime Feedback (2603.29122, Wang et al., 31 Mar 2026), Section 7, Threats to Validity — External Validity