
Capturing Global Informativeness in Open Domain Keyphrase Extraction (2004.13639v2)

Published 28 Apr 2020 in cs.CL

Abstract: Open-domain KeyPhrase Extraction (KPE) aims to extract keyphrases from documents without domain or quality restrictions, e.g., web pages with variant domains and qualities. Recently, neural methods have shown promising results in many KPE tasks due to their powerful capacity for modeling contextual semantics of the given documents. However, we empirically show that most neural KPE methods prefer to extract keyphrases with good phraseness, such as short and entity-style n-grams, instead of globally informative keyphrases from open-domain documents. This paper presents JointKPE, an open-domain KPE architecture built on pre-trained language models, which can capture both local phraseness and global informativeness when extracting keyphrases. JointKPE learns to rank keyphrases by estimating their informativeness in the entire document and is jointly trained on the keyphrase chunking task to guarantee the phraseness of keyphrase candidates. Experiments on two large KPE datasets with diverse domains, OpenKP and KP20k, demonstrate the effectiveness of JointKPE on different pre-trained variants in open-domain scenarios. Further analyses reveal the significant advantages of JointKPE in predicting long and non-entity keyphrases, which are challenging for previous neural KPE methods. Our code is publicly available at https://github.com/thunlp/BERT-KPE.

An Analysis of "Capturing Global Informativeness in Open Domain Keyphrase Extraction"

This paper addresses a prevalent challenge in NLP, particularly the task of KeyPhrase Extraction (KPE) in open-domain scenarios. The authors show empirically that prior neural KPE methods tend to favor locally well-formed phrases (phraseness) over globally informative keyphrases. They propose JointKPE, a model built on pre-trained language models that captures both local phraseness and global informativeness when extracting keyphrases.

Key Contributions

JointKPE introduces an architecture that concurrently targets two critical aspects of keyphrase quality: phraseness and informativeness. Whereas existing neural models often emphasize local context alone, JointKPE adopts a multi-task learning framework to balance the two objectives; a hedged sketch of the joint loss follows this paragraph. The paper highlights JointKPE's ability to handle diverse domains, as showcased by experiments on the OpenKP and KP20k datasets.
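
Based on the description above, the joint objective can be sketched as follows. The max-pooled global score, the pairwise hinge form of the ranking loss, and the unweighted sum of the two losses are assumptions consistent with the paper's description, not verbatim equations from it:

```latex
% Global informativeness of phrase p: max over the localized scores s(o)
% of its occurrences occ(p) in the document (assumed max-pooling)
f(p) = \max_{o \in \mathrm{occ}(p)} s(o)

% Pairwise margin ranking loss: keyphrases p^+ should outrank non-keyphrases p^-
\mathcal{L}_{\mathrm{rank}} = \sum_{(p^+,\, p^-)} \max\bigl(0,\; 1 - f(p^+) + f(p^-)\bigr)

% Joint objective: informative ranking plus keyphrase chunking (cross-entropy)
\mathcal{L} = \mathcal{L}_{\mathrm{rank}} + \mathcal{L}_{\mathrm{chunk}}
```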

Methodology

The JointKPE framework uses a pre-trained language model, such as BERT, to generate contextual word embeddings and applies Convolutional Neural Networks (CNNs) over them to construct n-gram representations, ensuring that keyphrase candidates retain contextual richness. Each occurrence of a candidate phrase receives a localized informativeness score; since the same phrase may occur in varied contexts within a document, JointKPE estimates its global informativeness by pooling over these occurrence-level scores and learns to rank candidates by that global estimate. Training jointly optimizes the informative ranking loss and the keyphrase chunking task, aligning the objectives to promote both phraseness and global document relevance; a minimal sketch of this architecture appears below.
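
The following PyTorch sketch illustrates the shape of this design. It is not the authors' released code (see the linked repository); the hidden size, n-gram range, and max-pooling are assumptions drawn from the description above:

```python
# Minimal sketch of a JointKPE-style model: CNN n-gram representations over
# contextual embeddings, with a ranking head and a chunking head.
import torch
import torch.nn as nn

class JointKPESketch(nn.Module):
    def __init__(self, hidden=768, max_ngram=5):  # sizes are assumptions
        super().__init__()
        # One 1-D convolution per n-gram length builds n-gram representations
        # from contextual token embeddings (e.g., BERT's final layer).
        self.cnns = nn.ModuleList(
            nn.Conv1d(hidden, hidden, kernel_size=n)
            for n in range(1, max_ngram + 1)
        )
        self.rank_head = nn.Linear(hidden, 1)   # localized informativeness score
        self.chunk_head = nn.Linear(hidden, 2)  # keyphrase chunking (yes/no)

    def forward(self, token_emb):
        # token_emb: (batch, seq_len, hidden) from a pre-trained encoder
        x = token_emb.transpose(1, 2)            # (batch, hidden, seq_len)
        scores, chunk_logits = [], []
        for cnn in self.cnns:
            ngram = cnn(x).transpose(1, 2)       # (batch, seq_len - n + 1, hidden)
            scores.append(self.rank_head(ngram).squeeze(-1))
            chunk_logits.append(self.chunk_head(ngram))
        return scores, chunk_logits

def global_score(occurrence_scores):
    # Global informativeness of one phrase: max over the localized scores of
    # all its occurrences in the document (assumed max-pooling).
    return torch.stack(occurrence_scores).max()
```

Training would then pair the ranking head with a pairwise margin loss (e.g., torch.nn.MarginRankingLoss) over keyphrase/non-keyphrase score pairs and the chunking head with cross-entropy, summing the two losses per the multi-task setup.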

Experimental Results

Experiments indicate that JointKPE outperforms both traditional and neural KPE baselines, including state-of-the-art models such as BLING-KPE and CDKGEN. The model improves precision and recall and particularly excels at extracting long and non-entity keyphrases, areas where previous methods struggled. These results are supported by analyses on both the open-domain OpenKP dataset and the domain-specific KP20k dataset, and they hold across different pre-trained variants such as SpanBERT and RoBERTa.

Implications and Future Work

The implications of JointKPE are significant for advancing open-domain KPE methodologies. By effectively integrating global informativeness into keyphrase extraction, it bridges a crucial gap left by existing approaches that overly focus on local phraseness. This capability potentially enhances downstream NLP tasks such as document summarization and information retrieval, where capturing a broader informational context is critical.

Future research could explore the integration of additional contextual features beyond text, such as multi-modal data from web documents, to further refine informativeness estimates. Additionally, expanding this approach to other languages and domains could provide valuable insights into the model's adaptability and efficacy.

In summary, the paper contributes a methodologically sound and empirically validated approach to enhancing KPE tasks in open-domain scenarios. JointKPE's architecture, focusing on both localized semantic integrity and broader document informativeness, represents a noteworthy advancement in the field of information extraction.

Authors (5)
  1. Si Sun (9 papers)
  2. Zhenghao Liu (77 papers)
  3. Chenyan Xiong (95 papers)
  4. Zhiyuan Liu (433 papers)
  5. Jie Bao (40 papers)
Citations (28)