Improving Contextualized Topic Models with Negative Sampling (2303.14951v1)

Published 27 Mar 2023 in cs.CL and cs.LG

Abstract: Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity.
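
To make the mechanism concrete, here is a minimal PyTorch sketch of the idea described in the abstract: reconstruct the document from the correct document-topic vector (positive), reconstruct it again from a perturbed vector (negative), and apply a triplet loss so the input stays closer to the positive reconstruction. The function name, the Gaussian perturbation scheme, and the hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def negative_sampling_triplet_loss(bow, theta, decoder, margin=1.0, perturb_std=0.1):
    """Hedged sketch of the negative-sampling triplet objective.

    bow:     (batch, vocab) bag-of-words representation of the input documents
    theta:   (batch, n_topics) document-topic vectors from the VAE encoder
    decoder: maps a document-topic vector to a reconstructed word distribution
    """
    # Positive: reconstruction from the correct document-topic vector.
    pos_recon = decoder(theta)

    # Negative: reconstruction from a perturbed document-topic vector.
    # (Gaussian noise followed by re-normalization is an assumption here;
    # the paper's perturbation scheme may differ.)
    noise = torch.randn_like(theta) * perturb_std
    neg_theta = F.softmax(theta + noise, dim=-1)
    neg_recon = decoder(neg_theta)

    # Triplet loss: the input document should be closer to the positive
    # reconstruction than to the negative one, by at least `margin`.
    d_pos = F.pairwise_distance(bow, pos_recon)
    d_neg = F.pairwise_distance(bow, neg_recon)
    return F.relu(d_pos - d_neg + margin).mean()
```

In training, this term would be added to the usual VAE objective (reconstruction loss plus KL divergence), so the perturbed vector acts as an in-batch negative sample without requiring any extra documents.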

Authors (4)
  1. Suman Adhya (6 papers)
  2. Avishek Lahiri (4 papers)
  3. Debarshi Kumar Sanyal (21 papers)
  4. Partha Pratim Das (22 papers)
Citations (5)
