A Joint Learning Approach based on Self-Distillation for Keyphrase Extraction from Scientific Documents (2010.11980v1)

Published 22 Oct 2020 in cs.CL and cs.LG

Abstract: Keyphrase extraction is the task of extracting a small set of phrases that best describe a document. Most existing benchmark datasets for the task have limited numbers of annotated documents, making it challenging to train increasingly complex neural networks. In contrast, digital libraries store millions of scientific articles online, covering a wide range of topics. While a significant portion of these articles contain keyphrases provided by their authors, most other articles lack such annotations. Therefore, to effectively utilize this large number of unlabeled articles, we propose a simple and efficient joint learning approach based on the idea of self-distillation. Experimental results show that our approach consistently improves the performance of baseline models for keyphrase extraction. Furthermore, our best models outperform previous methods for the task, achieving new state-of-the-art results on two public benchmarks: Inspec and SemEval-2017.
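
To make the joint learning idea concrete, below is a minimal PyTorch sketch of self-distillation for a keyphrase tagger: a supervised cross-entropy loss on author-annotated articles is combined with a distillation loss that matches a frozen snapshot of the model (the "teacher") on unlabeled articles. The BIO tagging scheme, the toy `Tagger` model, and the `alpha`/`temperature` hyperparameters are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of joint training with self-distillation for keyphrase
# extraction. Model, data shapes, and loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TAGS = 3  # assumed BIO scheme: B-keyphrase, I-keyphrase, O


class Tagger(nn.Module):
    """Toy sequence tagger standing in for the paper's extractor."""

    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * dim, NUM_TAGS)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)  # (batch, seq_len, NUM_TAGS) logits


student = Tagger()
teacher = Tagger()  # assumed: frozen snapshot of the student acts as teacher
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha, temperature = 0.5, 2.0  # assumed mixing weight and softmax temperature


def joint_step(labeled_ids, gold_tags, unlabeled_ids):
    """One joint update: supervised CE on labeled docs + KL on unlabeled."""
    # Supervised loss on articles with author-provided keyphrases.
    logits = student(labeled_ids)
    sup_loss = F.cross_entropy(logits.view(-1, NUM_TAGS), gold_tags.view(-1))

    # Self-distillation: match the teacher's soft tag distribution
    # on unlabeled articles.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(unlabeled_ids) / temperature, dim=-1)
    log_probs = F.log_softmax(student(unlabeled_ids) / temperature, dim=-1)
    distill_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")

    loss = sup_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage with random data of assumed shapes.
labeled = torch.randint(0, 30522, (4, 32))
gold = torch.randint(0, NUM_TAGS, (4, 32))
unlabeled = torch.randint(0, 30522, (8, 32))
print(joint_step(labeled, gold, unlabeled))
```

In this sketch the teacher would periodically be refreshed from the student as training progresses; how often to refresh, and how to weight the two losses, are design choices the sketch leaves as the assumed constants above.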

Authors (4)
  1. Tuan Manh Lai
  2. Trung Bui
  3. Doo Soon Kim
  4. Quan Hung Tran
Citations (11)
