
Global Thread-Level Inference for Comment Classification in Community Question Answering (1911.08755v1)

Published 20 Nov 2019 in cs.CL, cs.AI, cs.IR, and cs.LO

Abstract: Community question answering, a recent evolution of question answering in the Web context, allows a user to quickly consult the opinions of a number of people on a particular topic, thus taking advantage of the wisdom of the crowd. Here we try to help the user by automatically deciding which answers are good and which are bad for a given question. In particular, we focus on exploiting the output structure at the thread level in order to make more consistent global decisions. More specifically, we exploit the relations between pairs of comments at any distance in the thread, which we incorporate into graph-cut and ILP frameworks. We evaluated our approach on the benchmark dataset of SemEval-2015 Task 3. Results improved over the state of the art, confirming the importance of using thread-level information.
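The core idea of the thread-level inference described in the abstract can be sketched as joint energy minimization: each comment carries a unary cost for being labeled Good or Bad (from a local classifier), and pairwise terms penalize linked comments that receive different labels. The sketch below is a minimal illustration, not the paper's implementation; it solves the problem exactly by brute-force enumeration (feasible for short threads), which for submodular pairwise terms yields the same optimum as a graph cut or an ILP. All scores and the link weights are hypothetical.

```python
from itertools import product

def thread_inference(unary, pairwise, pair_weight=1.0):
    """Exact minimization of a thread-level energy.

    unary[i][l]   -- cost of assigning label l to comment i
                     (e.g., negative log-probability from a local classifier)
    pairwise[(i,j)] -- weight of the "same label" link between comments i and j
    Labels: 1 = Good, 0 = Bad. Brute force, so small threads only.
    """
    n = len(unary)
    best_labels, best_energy = None, float("inf")
    for labels in product([0, 1], repeat=n):
        # unary cost of the chosen labels
        e = sum(unary[i][labels[i]] for i in range(n))
        # pairwise penalty when linked comments disagree
        for (i, j), w in pairwise.items():
            if labels[i] != labels[j]:
                e += pair_weight * w
        if e < best_energy:
            best_energy, best_labels = e, labels
    return list(best_labels), best_energy

# Hypothetical 3-comment thread: comment 1 looks weakly Bad in isolation,
# but strong links to two Good comments flip it at the thread level.
unary = [
    {0: 2.0, 1: 0.5},   # comment 0: locally Good
    {0: 0.9, 1: 1.1},   # comment 1: locally (weakly) Bad
    {0: 2.2, 1: 0.4},   # comment 2: locally Good
]
pairwise = {(0, 1): 1.5, (1, 2): 1.5}

labels, energy = thread_inference(unary, pairwise)
print(labels)  # -> [1, 1, 1]: the links flip comment 1 to Good
```

Dropping the pairwise terms (or setting `pair_weight=0`) reduces this to independent per-comment decisions, which would label comment 1 as Bad; the global formulation is what makes the decisions consistent across the thread.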

Authors (7)
  1. Shafiq Joty (187 papers)
  2. Giovanni Da San Martino (43 papers)
  3. Simone Filice (9 papers)
  4. Lluís Màrquez (31 papers)
  5. Alessandro Moschitti (48 papers)
  6. Preslav Nakov (253 papers)
  7. Alberto Barrón-Cedeño (25 papers)
Citations (50)
