SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References (2309.12250v1)

Published 21 Sep 2023 in cs.CL and cs.LG

Abstract: Evaluation of QA systems is very challenging and expensive, with the most reliable approach being human annotations of correctness of answers for questions. Recent works (AVA, BEM) have shown that transformer LM encoder based similarity metrics transfer well for QA evaluation, but they are limited by the usage of a single correct reference answer. We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation), using multiple reference answers (combining multiple correct and incorrect references) for sentence-form QA. We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems, across multiple academic and industrial datasets, and show that it outperforms previous baselines and obtains the highest correlation with human annotations.
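The core idea of scoring a candidate answer against several correct and incorrect references can be illustrated with a toy sketch. This is not the paper's actual SQuArE model (which uses transformer LM encoders); here a simple string-similarity function stands in for the learned metric, and the aggregation rule (max over positives minus max over negatives) is an assumption for illustration only.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Stand-in for a learned transformer-based similarity metric
    # (the paper uses LM encoders; this is only a toy proxy).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_answer(candidate: str, positive_refs: list[str],
                 negative_refs: list[str]) -> float:
    # Hypothetical aggregation: reward closeness to any correct
    # reference, penalize closeness to any incorrect reference.
    pos = max(similarity(candidate, r) for r in positive_refs)
    neg = max((similarity(candidate, r) for r in negative_refs),
              default=0.0)
    return pos - neg

good = score_answer("Paris is the capital of France",
                    ["The capital of France is Paris",
                     "Paris is the capital of France"],
                    ["Lyon is the capital of France"])
bad = score_answer("Lyon is the capital of France",
                   ["The capital of France is Paris"],
                   ["Lyon is the capital of France"])
```

A candidate matching a correct reference should score higher than one matching an incorrect reference, which is the intuition behind combining positive and negative references rather than relying on a single gold answer.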

Authors (4)
  1. Matteo Gabburo (7 papers)
  2. Siddhant Garg (23 papers)
  3. Rik Koncel-Kedziorski (1 paper)
  4. Alessandro Moschitti (48 papers)
Citations (1)
