Applying a Generic Sequence-to-Sequence Model for Simple and Effective Keyphrase Generation (2201.05302v1)

Published 14 Jan 2022 in cs.CL and cs.AI

Abstract: In recent years, a number of keyphrase generation (KPG) approaches were proposed consisting of complex model architectures, dedicated training paradigms and decoding strategies. In this work, we opt for simplicity and show how a commonly used seq2seq language model, BART, can be easily adapted to generate keyphrases from the text in a single batch computation using a simple training procedure. Empirical results on five benchmarks show that our approach is as good as the existing state-of-the-art KPG systems, but using a much simpler and easy-to-deploy framework.
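The abstract frames KPG as plain sequence-to-sequence learning: the document text is the source and the gold keyphrases, concatenated into a single target sequence, are the output. Below is a minimal sketch of that setup using Hugging Face Transformers; it is not the authors' released code, and the separator token, checkpoint, sequence lengths, and decoding settings are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: fine-tuning BART for keyphrase generation as vanilla seq2seq.
# Source = document text, target = all keyphrases joined by a separator.
from transformers import BartTokenizerFast, BartForConditionalGeneration

SEP = " ; "  # assumed keyphrase separator; the paper's exact format may differ

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def make_example(document: str, keyphrases: list[str]):
    """Encode one training pair: document -> concatenated keyphrases."""
    inputs = tokenizer(document, truncation=True, max_length=512,
                       return_tensors="pt")
    targets = tokenizer(SEP.join(keyphrases), truncation=True, max_length=64,
                        return_tensors="pt")
    inputs["labels"] = targets["input_ids"]
    return inputs

# One forward/backward pass on a single example
# (in practice: batching, an optimizer, and a Trainer loop).
batch = make_example(
    "We present a simple seq2seq approach to keyphrase generation ...",
    ["keyphrase generation", "seq2seq", "BART"],
)
loss = model(**batch).loss
loss.backward()

# At inference time, all keyphrases for a document come out of a single
# generate() call and are recovered by splitting on the separator.
generated = model.generate(batch["input_ids"], max_length=64, num_beams=4)
predicted = tokenizer.decode(generated[0], skip_special_tokens=True)
keyphrases = [p.strip() for p in predicted.split(SEP.strip()) if p.strip()]
```

Because both present and absent keyphrases are emitted in one decoded sequence, no dedicated copy mechanism, exhaustive decoding strategy, or multi-stage training paradigm is required, which is the simplicity argument the abstract makes.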

Authors (5)
  1. Md Faisal Mahbub Chowdhury (11 papers)
  2. Gaetano Rossiello (21 papers)
  3. Michael Glass (21 papers)
  4. Nandana Mihindukulasooriya (26 papers)
  5. Alfio Gliozzo (28 papers)
Citations (12)
