SemEval-2015 Task 3: Answer Selection in Community Question Answering (1911.11403v1)

Published 26 Nov 2019 in cs.CL, cs.AI, and cs.IR

Abstract: Community Question Answering (cQA) opens new and interesting research directions for the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval-2015 Task 3 on "Answer Selection in cQA", which included two subtasks: (a) classifying answers as "good", "bad", or "potentially relevant" with respect to the question, and (b) answering a YES/NO question with "yes", "no", or "unsure", based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.
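
The official score reported above is macro-averaged F1 over the answer classes. As a minimal sketch of how such a score is computed (using scikit-learn; the label names and the gold/predicted sequences below are illustrative, not taken from the task data):

```python
# Minimal sketch of the task's official metric: macro-averaged F1 over the
# three answer classes. The labels and examples here are hypothetical.
from sklearn.metrics import f1_score

LABELS = ["Good", "Bad", "Potential"]

# Hypothetical gold labels and system predictions for a handful of answers.
gold = ["Good", "Bad", "Potential", "Good", "Bad", "Good"]
pred = ["Good", "Bad", "Bad", "Good", "Potential", "Good"]

# Macro-averaging computes F1 per class and takes the unweighted mean,
# so rare classes count as much as frequent ones.
macro_f1 = f1_score(gold, pred, labels=LABELS, average="macro")
print(f"macro-averaged F1: {100 * macro_f1:.2f}")
```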

Authors (6)
  1. Preslav Nakov (253 papers)
  2. Lluís Màrquez (31 papers)
  3. Walid Magdy (41 papers)
  4. Alessandro Moschitti (48 papers)
  5. James Glass (173 papers)
  6. Bilal Randeree (2 papers)
Citations (144)
