Learning Beam Search Policies via Imitation Learning (1811.00512v2)

Published 1 Nov 2018 in stat.ML, cs.AI, and cs.LG

Abstract: Beam search is widely used for approximate decoding in structured prediction problems. Models often use a beam at test time but ignore its existence at train time, and therefore do not explicitly learn how to use the beam. We develop a unifying meta-algorithm for learning beam search policies using imitation learning. In our setting, the beam is part of the model, and not just an artifact of approximate decoding. Our meta-algorithm captures existing learning algorithms and suggests new ones. It also lets us show novel no-regret guarantees for learning beam search policies.
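For context, the beam search decoding procedure the abstract refers to can be sketched as follows. This is a generic illustration, not the paper's learned-policy variant: the `expand`, `score`, and `is_final` callbacks are hypothetical placeholders standing in for a model's successor function and scoring function.

```python
import heapq
from math import log

def beam_search(initial, expand, score, is_final, beam_width=3, max_steps=10):
    """Generic beam search: at each step, keep only the beam_width
    highest-scoring hypotheses among all successors.

    expand(hyp)   -> iterable of successor hypotheses
    score(hyp)    -> float, higher is better
    is_final(hyp) -> bool, True when hyp is a complete output
    """
    beam = [initial]
    for _ in range(max_steps):
        candidates = []
        for hyp in beam:
            if is_final(hyp):
                candidates.append(hyp)  # carry finished hypotheses forward
            else:
                candidates.extend(expand(hyp))
        # Prune: this greedy truncation is the approximation beam search makes.
        beam = heapq.nlargest(beam_width, candidates, key=score)
        if all(is_final(h) for h in beam):
            break
    return max(beam, key=score)

# Toy usage: find the highest-probability length-3 token sequence.
vocab = {"a": 0.6, "b": 0.3, "c": 0.1}
best = beam_search(
    initial=(),
    expand=lambda hyp: [hyp + (t,) for t in vocab],
    score=lambda hyp: sum(log(vocab[t]) for t in hyp),
    is_final=lambda hyp: len(hyp) == 3,
    beam_width=2,
)
# best == ('a', 'a', 'a')
```

The paper's point of departure is that training typically ignores this pruning step, whereas here the beam transitions themselves become part of the learned model.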

Authors (3)
  1. Renato Negrinho (8 papers)
  2. Matthew R. Gormley (22 papers)
  3. Geoffrey J. Gordon (30 papers)
Citations (27)