Analyzing Language Learned by an Active Question Answering Agent (1801.07537v1)

Published 23 Jan 2018 in cs.CL and cs.AI

Abstract: We analyze the language learned by an agent trained with reinforcement learning as a component of the ActiveQA system [Buck et al., 2017]. In ActiveQA, question answering is framed as a reinforcement learning task in which an agent sits between the user and a black-box question-answering system. The agent learns to reformulate the user's questions to elicit the best answers. It probes the system with many versions of a question, generated by a sequence-to-sequence question reformulation model, then aggregates the returned evidence to find the best answer. This process is an instance of \emph{machine-machine} communication: the question reformulation model must adapt its language to the question answering system in order to increase the quality of the answers returned. We find that the agent does not learn transformations that align with semantic intuitions, but instead rediscovers classical information retrieval techniques such as tf-idf re-weighting and stemming.
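The tf-idf re-weighting that the agent is found to rediscover can be illustrated with a minimal sketch. This is a generic textbook tf-idf computation, not the paper's trained model; the corpus, function name, and weighting variant (raw term frequency times log inverse document frequency) are illustrative assumptions.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute tf-idf weights for each term in each tokenized document.

    `corpus` is a list of documents, each a list of term strings.
    Plain variant for illustration: tf = count / doc length,
    idf = log(N / document frequency).
    """
    n_docs = len(corpus)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights
```

A term that occurs in every document (e.g. a stopword) gets an idf of log(1) = 0 and is suppressed, while rarer, more discriminative terms are up-weighted; this is the kind of re-weighting the learned reformulations are observed to approximate.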

Authors (7)
  1. Christian Buck
  2. Jannis Bulian
  3. Massimiliano Ciaramita
  4. Wojciech Gajewski
  5. Andrea Gesmundo
  6. Neil Houlsby
  7. Wei Wang
Citations (5)