
Validate MetricX for ASR by establishing correlation with human judgments

Determine the correlation between MetricX (metricx-23-xxl-v2p0) scores and human evaluations of automatic speech recognition outputs, in order to assess whether MetricX is suitable as a quality metric for ASR.


Background

The paper uses MetricX, a machine translation evaluation metric trained on human MQM judgments, to assess overall output quality in ASR experiments. While MetricX has strong validation in machine translation, its applicability to ASR has not been established.

The authors explicitly note that MetricX’s correlation with human evaluation for ASR is unknown. Establishing this correlation would clarify whether MetricX can reliably serve as a general-purpose ASR quality metric beyond error-rate measures such as WER/CER and semantic metrics like SemDist.
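Such a validation study would typically compute segment-level correlation between metric scores and human ratings. The sketch below illustrates this with NumPy only, using Pearson and Spearman coefficients; all scores and ratings are hypothetical placeholders, not data from the paper. Note that MetricX is conventionally an error score (lower is better), while human quality ratings are typically higher-is-better, so a strong *negative* correlation is the desired outcome.

```python
import numpy as np

# Hypothetical per-utterance scores; the values are illustrative only.
# MetricX convention: lower = better output; human rating: higher = better.
metricx_scores = np.array([0.5, 1.2, 2.8, 0.9, 3.5, 1.7])
human_ratings  = np.array([4.8, 4.0, 2.5, 4.3, 1.9, 3.6])

def pearson(x, y):
    """Pearson correlation coefficient via the sample correlation matrix."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman rank correlation: Pearson on rank-transformed data
    (no tie correction; the illustrative data contains no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

r = pearson(metricx_scores, human_ratings)
rho = spearman(metricx_scores, human_ratings)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```

A strongly negative correlation on real human judgments (e.g. MQM-style ratings of ASR transcripts) would support using MetricX as an ASR quality metric alongside WER/CER and SemDist; a weak correlation would argue against it.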

References

Its correlation with human evaluation for ASR task is unknown, yet given its splendid accuracy in machine translation, it would be a useful metric for ASR.

Re-evaluating Minimum Bayes Risk Decoding for Automatic Speech Recognition (2510.19471 - Jinnai, 22 Oct 2025) in Section 4.1 (Automatic Speech Recognition), Evaluation metrics