
Generative Topic Embedding: a Continuous Representation of Documents (Extended Version with Proofs) (1606.02979v2)

Published 9 Jun 2016 in cs.CL, cs.AI, cs.IR, cs.LG, and stat.ML

Abstract: Word embedding maps words into a low-dimensional continuous embedding space by exploiting the local word collocation patterns in a small context window. On the other hand, topic modeling maps documents onto a low-dimensional topic space, by utilizing the global word collocation patterns in the same document. These two types of patterns are complementary. In this paper, we propose a generative topic embedding model to combine the two types of patterns. In our model, topics are represented by embedding vectors, and are shared across documents. The probability of each word is influenced by both its local context and its topic. A variational inference method yields the topic embeddings as well as the topic mixing proportions for each document. Jointly they represent the document in a low-dimensional continuous space. In two document classification tasks, our method performs better than eight existing methods, with fewer features. In addition, we illustrate with an example that our method can generate coherent topics even based on only one document.
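
The abstract's generative story can be made concrete with a small sketch. The following is a minimal illustration, not the paper's exact parameterization: a log-linear word model whose score for each candidate word adds a local-context term and a topic-embedding term, with topic vectors shared across documents as the abstract describes. All sizes and names here (V, dim, K, word_vecs, topic_vecs, word_probs) are assumptions made for the example; the paper's full model additionally infers per-document topic mixing proportions via variational inference, which this sketch fixes by hand.

```python
import numpy as np

rng = np.random.default_rng(0)
V, dim, K = 1000, 50, 10                    # vocabulary size, embedding dim, topic count (assumed)
word_vecs = rng.normal(0.0, 0.1, (V, dim))  # word embedding v_w for each vocabulary word
topic_vecs = rng.normal(0.0, 0.1, (K, dim)) # topic embedding t_k, shared across documents

def word_probs(context_ids, topic_id):
    """P(w | local context, topic): softmax over the vocabulary of a score
    that sums a local-collocation term and a topic term."""
    context = word_vecs[context_ids].sum(axis=0)            # local context window signal
    scores = word_vecs @ (context + topic_vecs[topic_id])   # both influences enter the logit
    scores -= scores.max()                                  # numerical stability
    p = np.exp(scores)
    return p / p.sum()

# Example: distribution over the next word given a 3-word context and topic 2.
probs = word_probs(np.array([5, 17, 42]), topic_id=2)
print(probs.shape, round(probs.sum(), 6))   # (1000,) 1.0
```

In the full model, a document would be represented jointly by the learned topic embeddings and its inferred topic mixing proportions, giving the low-dimensional continuous features used in the classification experiments.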

Authors (4)
  1. Shaohua Li (43 papers)
  2. Tat-Seng Chua (360 papers)
  3. Jun Zhu (424 papers)
  4. Chunyan Miao (145 papers)
Citations (106)
