
AutoSUM: Automating Feature Extraction and Multi-user Preference Simulation for Entity Summarization (2005.11888v1)

Published 25 May 2020 in cs.IR and cs.CL

Abstract: With the growth of knowledge graphs, entity descriptions are becoming extremely lengthy. The entity summarization task, which aims to generate diverse, comprehensive, and representative summaries for entities, has received increasing interest recently. In most previous methods, features are extracted using handcrafted templates; feature selection and multi-user preference simulation then follow, depending heavily on human expertise. In this paper, a novel integrated method called AutoSUM is proposed for automatic feature extraction and multi-user preference simulation, overcoming the drawbacks of previous methods. AutoSUM consists of two modules: an extractor and a simulator. The extractor module performs automatic feature extraction using a BiLSTM over a combined input representation of word embeddings and graph embeddings. Meanwhile, the simulator module automates multi-user preference simulation via a well-designed two-phase attention mechanism (i.e., entity-phase attention and user-phase attention). Experimental results demonstrate that AutoSUM achieves state-of-the-art performance on two widely used datasets (i.e., DBpedia and LinkedMDB) in both F-measure and MAP.
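The abstract's pipeline — concatenating word and graph embeddings per triple, then applying entity-phase and user-phase attention to rank triples for a summary — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the random vectors standing in for BiLSTM hidden states and learned query vectors, and the top-k selection size are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 5 triples describing an entity,
# word embeddings of size 8, graph embeddings of size 4.
n_triples, d_word, d_graph = 5, 8, 4

# Combined input representation: concatenate word and graph embeddings
# for each triple, as fed to the extractor's BiLSTM in the paper.
word_emb = rng.normal(size=(n_triples, d_word))
graph_emb = rng.normal(size=(n_triples, d_graph))
features = np.concatenate([word_emb, graph_emb], axis=1)  # shape (5, 12)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Entity-phase attention: score each triple against an entity-level query
# vector (random here, standing in for learned parameters / hidden states).
entity_query = rng.normal(size=features.shape[1])
entity_attn = softmax(features @ entity_query)  # one weight per triple

# User-phase attention: simulate K users, each with a preference vector,
# normalizing scores over triples separately for each user.
n_users = 3
user_queries = rng.normal(size=(n_users, features.shape[1]))
scores = features @ user_queries.T                    # shape (5, 3)
user_attn = np.apply_along_axis(softmax, 0, scores)   # per-user weights

# Combine the two phases: average the per-user weights, modulate them by
# entity-level relevance, and renormalize.
combined = entity_attn * user_attn.mean(axis=1)
combined /= combined.sum()

# Select the top-k triples as the entity summary (k=2 chosen arbitrarily).
top_k = np.argsort(combined)[::-1][:2]
print("attention weights:", np.round(combined, 3))
print("summary triples:", top_k.tolist())
```

The key design point the sketch mirrors is that the two attention phases factor the problem: entity-phase attention captures what is salient about the entity itself, while user-phase attention captures divergent per-user preferences, and the summary is drawn from their combination.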

Authors (7)
  1. Dongjun Wei
  2. Yaxin Liu
  3. Fuqing Zhu
  4. Liangjun Zang
  5. Wei Zhou
  6. Yijun Lu
  7. Songlin Hu
Citations (4)
