Self-Attentive Model for Headline Generation (1901.07786v1)

Published 23 Jan 2019 in cs.CL and cs.AI

Abstract: Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it remains challenging, as learning to generate headlines for news articles requires the model to reason effectively about natural language. To overcome this, we applied the recent Universal Transformer architecture paired with a byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated Corpus, with a ROUGE-L F1-score of 24.84 and a ROUGE-2 F1-score of 13.48. We also present the new RIA corpus, on which we reach a ROUGE-L F1-score of 36.81 and a ROUGE-2 F1-score of 22.15.
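The reported metrics are ROUGE-L and ROUGE-2 F1-scores. As a rough illustration of what ROUGE-L measures (not the paper's evaluation code), here is a minimal sketch that computes ROUGE-L F1 from the longest common subsequence, assuming simple whitespace tokenization; the function names and example headlines are hypothetical:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 between a candidate headline and a reference headline.

    Sketch only: assumes whitespace tokenization; the paper's exact
    evaluation setup (tokenizer, casing, stemming) may differ.
    """
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Illustrative headlines, not from the paper's corpora.
print(rouge_l_f1("new model generates news headlines",
                 "model generates headlines for news"))  # 0.6
```

Because ROUGE-L scores the longest common subsequence rather than contiguous n-grams, it rewards headlines that preserve the reference's word order even when intervening words differ, which is why it is paired here with ROUGE-2 (bigram overlap) as a stricter companion metric.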

Authors (3)
  1. Daniil Gavrilov (18 papers)
  2. Pavel Kalaidin (4 papers)
  3. Valentin Malykh (24 papers)
Citations (49)
