Peacock: Learning Long-Tail Topic Features for Industrial Applications (1405.4402v3)
Abstract: Latent Dirichlet allocation (LDA) is a popular topic modeling technique in academia but less so in industry, especially in large-scale applications involving search engine and online advertising systems. A main underlying reason is that the topic models used have been too small in scale to be useful; for example, some of the largest LDA models reported in the literature have up to $10^3$ topics, which can hardly capture long-tail semantic word sets. In this paper, we show that the number of topics is a key factor that can significantly boost the utility of topic-modeling systems. In particular, we show that a "big" LDA model with at least $10^5$ topics inferred from $10^9$ search queries can achieve significant improvements on industrial search engine and online advertising systems, both of which serve hundreds of millions of users. We develop a novel distributed system called Peacock to learn big LDA models from big data. The main features of Peacock include hierarchical distributed architecture, real-time prediction and topic de-duplication. We empirically demonstrate that the Peacock system is capable of providing significant benefits via highly scalable LDA topic models for several industrial applications.
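For readers unfamiliar with LDA inference, the sketch below shows a plain collapsed Gibbs sampler, the standard sampling routine that large-scale LDA systems parallelize. It is a minimal single-machine illustration under stated assumptions, not the Peacock implementation; the function name `lda_gibbs`, the hyperparameter values, and the toy corpus are all assumptions made for this example.

```python
import random

def lda_gibbs(docs, vocab_size, num_topics, alpha=0.1, beta=0.01, iters=200):
    """Collapsed Gibbs sampling for LDA over docs given as lists of word ids.

    Illustrative sketch only; a production system shards these count
    tables across machines rather than keeping dense lists in memory.
    """
    K, V = num_topics, vocab_size
    n_dk = [[0] * K for _ in docs]      # document-topic counts
    n_kw = [[0] * V for _ in range(K)]  # topic-word counts
    n_k = [0] * K                       # total tokens assigned to each topic
    z = []                              # topic assignment per token

    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        z_d = []
        for w in doc:
            k = random.randrange(K)
            z_d.append(k)
            n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
        z.append(z_d)

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # Remove the token's current assignment from the counts.
                k = z[d][i]
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                # Conditional p(k) ∝ (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta);
                # the per-document denominator is constant in k and dropped.
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][w] + beta)
                           / (n_k[t] + V * beta) for t in range(K)]
                k = random.choices(range(K), weights=weights)[0]
                # Record the new assignment and restore the counts.
                z[d][i] = k
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return n_dk, n_kw

if __name__ == "__main__":
    # Toy corpus of word ids, purely for demonstration.
    docs = [[0, 1, 2, 1], [2, 3, 4], [0, 0, 1, 4, 3]]
    n_dk, n_kw = lda_gibbs(docs, vocab_size=5, num_topics=2, iters=100)
    print("doc-topic counts:", n_dk)
```

At the scale described in the abstract ($10^5$ topics inferred from $10^9$ queries), the dense count matrices above would far exceed a single machine's memory, which is precisely the problem Peacock's hierarchical distributed architecture is designed to address.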
- Yi Wang
- Xuemin Zhao
- Zhenlong Sun
- Hao Yan
- Lifeng Wang
- Zhihui Jin
- Liubin Wang
- Yang Gao
- Ching Law
- Jia Zeng