
Representing Mixtures of Word Embeddings with Mixtures of Topic Embeddings (2203.01570v2)

Published 3 Mar 2022 in cs.LG, stat.ME, and stat.ML

Abstract: A topic model is often formulated as a generative model that explains how each word of a document is generated given a set of topics and document-specific topic proportions. Because it focuses on capturing word co-occurrences within a document, it often performs poorly on short documents. In addition, its parameter estimation often relies on approximate posterior inference that is either not scalable or suffers from large approximation error. This paper introduces a new topic-modeling framework in which each document is viewed as a set of word embedding vectors and each topic is modeled as an embedding vector in the same embedding space. Embedding the words and topics in the same vector space, we define a method to measure the semantic difference between the embedding vectors of the words of a document and those of the topics, and optimize the topic embeddings to minimize the expected difference over all documents. Experiments on text analysis demonstrate that the proposed method, which is amenable to mini-batch stochastic gradient descent based optimization and hence scalable to large corpora, provides competitive performance in discovering more coherent and diverse topics and extracting better document representations.
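
To make the setup concrete, here is a minimal PyTorch sketch of the idea stated in the abstract: topic embeddings live in the same space as the word embeddings and are trained by mini-batch SGD to minimize an expected word-to-topic difference over documents. The specific difference measure used below (a softly weighted cosine distance from each word to the topics) and all names (`topic_embeddings`, `expected_difference`, `train_step`) are illustrative assumptions, not the paper's exact formulation, which defines its own semantic-difference measure.

```python
# Illustrative sketch only: documents are sets of word embedding vectors,
# topics are embeddings in the same space, and topic embeddings are trained
# by mini-batch SGD to minimize an expected semantic difference. The cosine
# cost and soft assignment below are assumptions, not the paper's method.
import torch
import torch.nn.functional as F

num_topics, embed_dim = 50, 300
topic_embeddings = torch.nn.Parameter(torch.randn(num_topics, embed_dim) * 0.02)
optimizer = torch.optim.Adam([topic_embeddings], lr=1e-2)

def expected_difference(doc_word_embeddings):
    """doc_word_embeddings: (num_words, embed_dim) tensor for one document."""
    words = F.normalize(doc_word_embeddings, dim=-1)
    topics = F.normalize(topic_embeddings, dim=-1)
    # Cosine distance between every word and every topic: (num_words, num_topics).
    cost = 1.0 - words @ topics.t()
    # Soft assignment of each word to topics (lower cost -> higher weight),
    # then the expected cost of the document under that assignment.
    weights = torch.softmax(-cost, dim=-1)
    return (weights * cost).sum(dim=-1).mean()

def train_step(batch_of_docs):
    # One mini-batch SGD step over a batch of documents (list of tensors),
    # approximating the expected difference over the corpus.
    optimizer.zero_grad()
    loss = torch.stack([expected_difference(d) for d in batch_of_docs]).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the loss decomposes over documents, each step touches only a mini-batch, which is what makes the approach scalable to large corpora in the way the abstract describes.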

Authors (7)
  1. Dongsheng Wang (47 papers)
  2. Dandan Guo (19 papers)
  3. He Zhao (117 papers)
  4. Huangjie Zheng (34 papers)
  5. Korawat Tanwisuth (7 papers)
  6. Bo Chen (309 papers)
  7. Mingyuan Zhou (161 papers)
Citations (35)
