HEVAL: Yet Another Human Evaluation Metric (1311.3961v1)

Published 15 Nov 2013 in cs.CL

Abstract: Machine translation evaluation is a very important activity in machine translation development. Automatic evaluation metrics proposed in the literature are inadequate, as they require one or more human reference translations against which machine translation output is compared. This does not always give accurate results, since a text can have several valid translations. Human evaluation metrics, on the other hand, lack inter-annotator agreement and repeatability. In this paper we propose a new human evaluation metric that addresses these issues. Moreover, this metric also provides solid grounds for making sound assumptions about the quality of the text produced by a machine translation system.
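The abstract's central complaint about existing human evaluation is low inter-annotator agreement. A standard way to quantify that agreement is Cohen's kappa over two annotators' categorical judgments; the sketch below is a minimal, generic illustration of that measurement (the exact scoring scheme HEVAL uses is not given in the abstract, and the annotator data here is hypothetical).

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa between two annotators' categorical ratings.

    kappa = (P_observed - P_expected) / (1 - P_expected), where
    P_expected is the chance agreement implied by each annotator's
    marginal label distribution.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical 5-point adequacy judgments from two annotators
# over six machine-translated sentences.
annotator_1 = [5, 4, 2, 3, 4, 1]
annotator_2 = [5, 3, 2, 3, 5, 1]
print(f"kappa = {cohens_kappa(annotator_1, annotator_2):.3f}")  # kappa = 0.600
```

A kappa near 1 indicates reliable, repeatable judgments; values in the 0.4 to 0.6 range, common for holistic MT adequacy and fluency scoring, are the kind of weak agreement the paper's metric is designed to improve on.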

Authors (4)
  1. Nisheeth Joshi
  2. Iti Mathur
  3. Hemant Darbari
  4. Ajai Kumar
Citations (15)