Learning Rich Representation of Keyphrases from Text (2112.08547v2)

Published 16 Dec 2021 in cs.CL, cs.IR, and cs.LG

Abstract: In this work, we explore how to train task-specific language models aimed at learning rich representations of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 8.16 points in F1) over SOTA when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART - that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), and abstractive summarization, achieving performance comparable to SOTA and showing that learning rich representations of keyphrases is indeed beneficial for many other fundamental NLP tasks.
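
In the discriminative setting, the KBIR-pre-trained encoder is fine-tuned for keyphrase extraction, typically framed as token classification with BIO tags. Below is a minimal sketch of that setup using the Hugging Face transformers library; it assumes the released checkpoint is hosted on the Hub as "bloomberg/KBIR", and the label set and example text are illustrative. Note the classification head is freshly initialized here, so predictions are meaningless until the model is fine-tuned on an annotated keyphrase corpus.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed Hub checkpoint name for the paper's released KBIR weights.
MODEL = "bloomberg/KBIR"

# BIO scheme for keyphrase extraction: B-KP starts a keyphrase,
# I-KP continues one, O marks all other tokens.
labels = ["O", "B-KP", "I-KP"]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
# The token-classification head is randomly initialized on top of the
# pre-trained encoder; fine-tune before using the predictions.
model = AutoModelForTokenClassification.from_pretrained(
    MODEL, num_labels=len(labels)
)

text = "We study masking strategies for pre-training transformer language models."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(f"{token}\t{labels[int(pred)]}")
```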

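In the generative setting, KeyBART is pre-trained so that the decoder emits the document's keyphrases as a single CatSeq string (present and absent keyphrases concatenated with a separator) rather than reconstructing the denoised input. A minimal generation sketch follows, assuming the released checkpoint is hosted on the Hub as "bloomberg/KeyBART"; the input document and decoding parameters are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed Hub checkpoint name for the paper's released KeyBART weights.
MODEL = "bloomberg/KeyBART"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

doc = (
    "In this work, we explore how to train task-specific language models "
    "aimed at learning rich representations of keyphrases from text documents."
)
inputs = tokenizer(doc, return_tensors="pt", truncation=True, max_length=512)

# Decode the keyphrases as one CatSeq-formatted sequence.
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the pre-training target is already the CatSeq sequence, this checkpoint can generate keyphrases zero-shot, though the paper's reported F1@M gains come from fine-tuning on keyphrase generation benchmarks.
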
Authors (4)
  1. Mayank Kulkarni (7 papers)
  2. Debanjan Mahata (25 papers)
  3. Ravneet Arora (2 papers)
  4. Rajarshi Bhowmik (7 papers)
Citations (60)