Scalable Syntax-Aware Language Models Using Knowledge Distillation (1906.06438v1)

Published 14 Jun 2019 in cs.CL and cs.LG

Abstract: Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models. However, their computational complexity renders scaling difficult, and it remains an open question whether structural biases are still necessary when sequential models have access to ever larger amounts of training data. To answer this question, we introduce an efficient knowledge distillation (KD) technique that transfers knowledge from a syntactic language model trained on a small corpus to an LSTM language model, hence enabling the LSTM to develop a more structurally sensitive representation of the larger training data it learns from. On targeted syntactic evaluations, we find that, while sequential LSTMs perform much better than previously reported, our proposed technique substantially improves on this baseline, yielding a new state of the art. Our findings and analysis affirm the importance of structural biases, even in models that learn from large amounts of data.
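
The abstract describes distilling a syntactic language model (the teacher) into a sequential LSTM language model (the student). Below is a minimal PyTorch sketch of one common word-level distillation formulation: the student is trained against a mixture of the hard one-hot next-word targets and the teacher's soft next-word distribution. The toy model, vocabulary size, interpolation weight `alpha`, and the random stand-in for the teacher's distribution are illustrative assumptions, not the paper's implementation.

```python
# Word-level knowledge distillation sketch for language modelling.
# Assumptions: a toy LSTM student, a precomputed per-word teacher
# distribution, and an interpolation weight alpha (all hypothetical).

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim, hidden_dim = 1000, 64, 128
alpha = 0.5  # weight on the hard (one-hot) targets vs. teacher soft targets


class LSTMLanguageModel(nn.Module):
    """Toy sequential LSTM language model (the distillation student)."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        hidden, _ = self.lstm(self.embed(tokens))
        return self.proj(hidden)                    # logits: (batch, seq_len, vocab)


def distillation_loss(student_logits, teacher_probs, targets, alpha=alpha):
    """Cross-entropy against a mixture of hard targets and teacher soft targets.

    student_logits: (batch, seq_len, vocab) raw scores from the student
    teacher_probs:  (batch, seq_len, vocab) per-word distribution from the
                    frozen teacher, assumed precomputed on the same data
    targets:        (batch, seq_len) gold next-word ids
    """
    log_probs = F.log_softmax(student_logits, dim=-1)
    # Hard-target term: standard next-word cross-entropy.
    nll = F.nll_loss(log_probs.transpose(1, 2), targets, reduction="mean")
    # Soft-target term: cross-entropy w.r.t. the teacher's distribution.
    soft = -(teacher_probs * log_probs).sum(-1).mean()
    return alpha * nll + (1.0 - alpha) * soft


if __name__ == "__main__":
    student = LSTMLanguageModel()
    tokens = torch.randint(0, vocab_size, (8, 20))
    targets = torch.randint(0, vocab_size, (8, 20))
    # Random stand-in for the syntactic teacher's predictive distribution.
    teacher_probs = F.softmax(torch.randn(8, 20, vocab_size), dim=-1)
    loss = distillation_loss(student(tokens), teacher_probs, targets)
    loss.backward()
    print(float(loss))
```

In this sketch the teacher only supplies soft targets at training time, so inference cost is that of the LSTM alone, which is the scalability point the abstract makes.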

Authors (5)
  1. Adhiguna Kuncoro (18 papers)
  2. Chris Dyer (91 papers)
  3. Laura Rimell (13 papers)
  4. Stephen Clark (38 papers)
  5. Phil Blunsom (87 papers)
Citations (26)
