
Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis (2401.16348v2)

Published 29 Jan 2024 in cs.CL, cs.CY, and cs.HC

Abstract: Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used; however, their validity has been questioned for neural topic models (NTMs), and they can overlook a model's benefits in real-world applications. To this end, we conduct the first evaluation of neural, supervised, and classical topic models in an interactive, task-based setting. We combine topic models with a classifier and test their ability to help humans conduct content analysis and document annotation. From simulated, real user, and expert pilot studies, the Contextual Neural Topic Model does the best on cluster evaluation metrics and human evaluations; however, LDA is competitive with two other NTMs under our simulated experiment and user study results, contrary to what coherence scores suggest. We show that current automated metrics do not provide a complete picture of topic modeling capabilities, but the right choice of NTM can be better than classical models on practical tasks.
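The setup the abstract describes (a topic model combined with a classifier to assist annotation) can be sketched minimally as follows. This is an illustrative stand-in, not the paper's actual pipeline: it uses classical LDA from scikit-learn rather than a contextual NTM, and the toy corpus and labels are invented for demonstration.

```python
# Sketch of the topic-model-plus-classifier setup from the abstract.
# Assumptions: classical LDA stands in for the NTMs studied; the
# corpus, labels, and hyperparameters are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "stocks rallied as markets rose on strong earnings",
    "the team won the match in overtime",
    "investors fear inflation and rising interest rates",
    "the striker scored twice in the final game",
]
labels = ["finance", "sports", "finance", "sports"]

# Bag-of-words counts -> per-document topic proportions via LDA.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # shape: (n_docs, n_topics)

# A classifier over topic features, standing in for the component
# that helps human annotators label documents.
clf = LogisticRegression().fit(theta, labels)
preds = clf.predict(theta)
print(preds)
```

In the paper's evaluation the topic model's role is to produce interpretable document representations; the classifier and human studies then measure how useful those representations are for annotation, rather than relying on coherence scores alone.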

Authors (8)
  1. Zongxia Li (14 papers)
  2. Andrew Mao (10 papers)
  3. Daniel Stephens (1 paper)
  4. Pranav Goel (10 papers)
  5. Emily Walpole (1 paper)
  6. Alden Dima (5 papers)
  7. Juan Fung (1 paper)
  8. Jordan Boyd-Graber (68 papers)
Citations (6)
