
Recurrent Neural Network Encoder with Attention for Community Question Answering (1603.07044v1)

Published 23 Mar 2016 in cs.CL, cs.LG, and cs.NE

Abstract: We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show 10% improvement on a MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method.
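The core idea in the abstract, encoding a token sequence with an RNN and then pooling the hidden states with a learned attention weighting instead of taking only the final state, can be sketched as follows. This is an illustrative toy in plain Python with random weights and hypothetical function names, not the paper's implementation (which also uses transfer and multitask learning):

```python
import math
import random

def rnn_encode(embeddings, W_h, W_x, dim):
    """Simple Elman RNN: h_t = tanh(W_h h_{t-1} + W_x x_t); returns all hidden states."""
    h = [0.0] * dim
    states = []
    for x in embeddings:
        h = [math.tanh(sum(W_h[i][j] * h[j] for j in range(dim)) +
                       sum(W_x[i][j] * x[j] for j in range(dim)))
             for i in range(dim)]
        states.append(h)
    return states

def attention_pool(states, w):
    """Score each hidden state against a learned vector w, softmax the scores,
    and return the attention-weighted sum of states (plus the weights)."""
    scores = [sum(s[i] * w[i] for i in range(len(w))) for s in states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]       # softmax weights over time steps
    dim = len(states[0])
    pooled = [sum(a * s[i] for a, s in zip(alphas, states)) for i in range(dim)]
    return pooled, alphas

# Toy usage with random parameters (stand-ins for trained weights).
random.seed(0)
dim, seq_len = 4, 5
W_h = [[random.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
W_x = [[random.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
w = [random.uniform(-1, 1) for _ in range(dim)]
seq = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(seq_len)]

states = rnn_encode(seq, W_h, W_x, dim)
pooled, alphas = attention_pool(states, w)
```

A sentence-pair model for cQA would encode both the question and a candidate answer this way and score the pair from the two pooled vectors; the attention weights let every time step contribute to the representation rather than just the last hidden state.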

Authors (3)
  1. Wei-Ning Hsu (76 papers)
  2. Yu Zhang (1400 papers)
  3. James Glass (173 papers)
Citations (13)
