
ClickPrompt: CTR Models are Strong Prompt Generators for Adapting Language Models to CTR Prediction (2310.09234v5)

Published 13 Oct 2023 in cs.IR and cs.AI

Abstract: Click-through rate (CTR) prediction has become increasingly indispensable for various Internet applications. Traditional CTR models convert multi-field categorical data into ID features via one-hot encoding and extract the collaborative signals among features. Such a paradigm suffers from the problem of semantic information loss. Another line of research explores the potential of pretrained language models (PLMs) for CTR prediction by converting input data into textual sentences through hard prompt templates. Although semantic signals are preserved, these methods generally fail to capture the collaborative information (e.g., feature interactions, pure ID features), not to mention the unacceptable inference overhead brought by the huge model size. In this paper, we aim to model both the semantic knowledge and the collaborative knowledge for accurate CTR estimation, while also addressing the inference inefficiency issue. To benefit from both worlds and close their gap, we propose a novel model-agnostic framework, ClickPrompt, in which CTR models are incorporated to generate interaction-aware soft prompts for PLMs. We design a prompt-augmented masked language modeling (PA-MLM) pretraining task, where the PLM has to recover the masked tokens based on the language context as well as the soft prompts generated by the CTR model. The collaborative and semantic knowledge from ID and textual features is explicitly aligned and interacts via the prompt interface. Then, we can either tune the CTR model together with the PLM for superior performance, or tune the CTR model alone, without the PLM, for inference efficiency. Experiments on four real-world datasets validate the effectiveness of ClickPrompt compared with existing baselines.
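The core mechanism described above — a CTR model producing interaction-aware soft prompts that are prepended to the PLM's token embeddings — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the sum-pooled "CTR model", and the single linear projection are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration (not taken from the paper).
CTR_DIM = 8        # CTR model's output representation size
N_PROMPTS = 2      # number of soft prompt vectors per sample
PLM_DIM = 16       # PLM hidden/embedding size

def ctr_feature_vector(field_ids, embed_table):
    """Toy stand-in for a CTR model: sum of ID embeddings plays the
    role of the interaction-aware representation a real CTR model
    (e.g., with explicit feature-interaction layers) would produce."""
    return embed_table[field_ids].sum(axis=0)

def generate_soft_prompts(ctr_vec, proj):
    """Project the CTR representation into N_PROMPTS pseudo-token
    embeddings living in the PLM's embedding space."""
    return (proj @ ctr_vec).reshape(N_PROMPTS, PLM_DIM)

embed_table = rng.normal(size=(100, CTR_DIM))            # ID embedding table
proj = rng.normal(size=(N_PROMPTS * PLM_DIM, CTR_DIM))   # prompt generator

ctr_vec = ctr_feature_vector(np.array([3, 17, 42, 99]), embed_table)
prompts = generate_soft_prompts(ctr_vec, proj)

# Embedded (and partially masked, under PA-MLM) text tokens of length 10.
token_embeds = rng.normal(size=(10, PLM_DIM))

# The soft prompts are prepended to the token sequence fed to the PLM;
# during PA-MLM pretraining the PLM must recover masked tokens from both
# the language context and these CTR-derived prompts.
plm_input = np.concatenate([prompts, token_embeds], axis=0)
print(plm_input.shape)  # prompt vectors + 10 token embeddings
```

Because the prompt generator is the only bridge between the two models, the CTR model can later be tuned and deployed on its own, which is how the paper obtains its inference-efficiency variant.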

Authors (10)
  1. Jianghao Lin (47 papers)
  2. Bo Chen (309 papers)
  3. Hangyu Wang (6 papers)
  4. Yunjia Xi (21 papers)
  5. Yanru Qu (19 papers)
  6. Xinyi Dai (32 papers)
  7. Kangning Zhang (7 papers)
  8. Ruiming Tang (171 papers)
  9. Yong Yu (219 papers)
  10. Weinan Zhang (322 papers)
Citations (24)
