Incremental user embedding modeling for personalized text classification (2202.06369v1)

Published 13 Feb 2022 in cs.LG, cs.CL, eess.AS, and eess.SP

Abstract: Individual user profiles and interaction histories play a significant role in providing customized experiences in real-world applications such as chatbots, social media, retail, and education. Learning adaptive user representations from personalized information has become increasingly challenging due to ever-growing history data. In this work, we propose an incremental user embedding modeling approach in which embeddings of a user's recent interaction histories are dynamically integrated into the accumulated history vectors via a transformer encoder. This modeling paradigm allows us to create generalized user representations in a consecutive manner and also alleviates the challenges of data management. We demonstrate the effectiveness of this approach by applying it to a personalized multi-class classification task based on the Reddit dataset, achieving 9% and 30% relative improvements in prediction accuracy over a baseline system in two experimental settings through appropriate comment history encoding and task modeling.
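
The mechanism described in the abstract can be illustrated with a short sketch: a transformer encoder fuses the embeddings of a user's newest interactions into a running accumulated-history vector, which then feeds a multi-class classifier, so past raw data never needs to be re-encoded. The PyTorch code below is a minimal sketch of that idea; the class name, dimensions, and the prepend-a-summary-token fusion scheme are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of incremental user embedding modeling (illustrative only).
# A transformer encoder jointly encodes the accumulated history vector and
# the embeddings of recent interactions; the updated history vector is read
# back from the summary-token position and reused at the next step.
import torch
import torch.nn as nn


class IncrementalUserEmbedder(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(dim, num_classes)

    def update(self, history: torch.Tensor, recent: torch.Tensor) -> torch.Tensor:
        """Fuse the accumulated history vector with recent-interaction embeddings.

        history: (batch, dim)      running user representation
        recent:  (batch, seq, dim) embeddings of new interactions
        """
        # Prepend the accumulated vector as an extra token, encode jointly,
        # and take that position's output as the new accumulated vector.
        tokens = torch.cat([history.unsqueeze(1), recent], dim=1)
        encoded = self.encoder(tokens)
        return encoded[:, 0]

    def forward(self, history: torch.Tensor, recent: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.update(history, recent))


# Usage: carry the history vector forward instead of re-encoding all past data.
model = IncrementalUserEmbedder()
history = torch.zeros(1, 256)            # fresh user, empty history
for _ in range(3):                       # three batches of new comments arrive
    recent = torch.randn(1, 5, 256)      # e.g. 5 newly embedded comments
    logits = model(history, recent)      # personalized classification
    history = model.update(history, recent).detach()
```

The point of this pattern is the fixed-size state: each update consumes only the new interactions plus one vector, which is what lets the representation grow with a user's history without the data-management burden of storing and re-processing the full interaction log.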

Authors (6)
  1. Ruixue Lian
  2. Che-Wei Huang
  3. Yuqing Tang
  4. Qilong Gu
  5. Chengyuan Ma
  6. Chenlei Guo
Citations (4)