
DCMT: A Direct Entire-Space Causal Multi-Task Framework for Post-Click Conversion Estimation (2302.06141v1)

Published 13 Feb 2023 in cs.IR

Abstract: In recommendation scenarios, two long-standing challenges, selection bias and data sparsity, cause a significant drop in prediction accuracy for both Click-Through Rate (CTR) and post-click Conversion Rate (CVR) tasks. To cope with these issues, existing works either leverage Multi-Task Learning (MTL) frameworks (Category 1) to incorporate more auxiliary data from the entire exposure/inference space D, or apply causal debiasing frameworks (Category 2) to correct the selection bias in the click/training space O. However, neither kind of solution can effectively address the not-missing-at-random problem and debias the selection bias in O to fit the inference in D. To fill these research gaps, we propose a Direct entire-space Causal Multi-Task framework, namely DCMT, for post-click conversion prediction. Specifically, inspired by users' decision process of conversion, we propose a new counterfactual mechanism to debias the selection bias in D, which predicts the factual CVR and the counterfactual CVR under the soft constraint of a counterfactual prior. Extensive experiments demonstrate that DCMT improves the state-of-the-art methods by an average of 1.07% in CVR AUC on five offline datasets and by 0.75% in PV-CVR on an online A/B test (Alipay Search). Such improvements can translate into millions of additional conversions per week in real industrial applications, e.g., Alipay Search.
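The abstract describes a multi-task objective that combines a CTR task over the entire exposure space with a factual/counterfactual CVR pair tied together by a soft prior (factual and counterfactual conversion probabilities should sum to one). The sketch below is only an illustration of that idea under stated assumptions, not the authors' exact loss: DCMT's network architecture, propensity weighting, and exact constraint form are omitted, and the function names (`bce`, `dcmt_style_loss`) are placeholders.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Elementwise binary cross-entropy."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def dcmt_style_loss(p_ctr, p_cvr, p_cvr_cf, click, convert, lam=0.1):
    """Illustrative entire-space multi-task loss (not the paper's exact objective).

    p_ctr    : predicted click probabilities on all impressions (space D)
    p_cvr    : predicted factual post-click conversion probabilities
    p_cvr_cf : predicted counterfactual conversion probabilities (twin tower)
    click    : observed click labels over D
    convert  : observed conversion labels (meaningful only where click == 1)
    lam      : weight of the soft counterfactual prior p_cvr + p_cvr_cf ~= 1
    """
    # CTR task is supervised on the entire exposure space D.
    ctr_loss = bce(p_ctr, click).mean()
    # Factual CVR task is supervised only on the clicked space O.
    n_clicked = max(int(click.sum()), 1)
    cvr_loss = (click * bce(p_cvr, convert)).sum() / n_clicked
    # Soft counterfactual prior, applied over the entire space D.
    prior = np.abs(p_cvr + p_cvr_cf - 1.0).mean()
    return ctr_loss + cvr_loss + lam * prior

# Toy usage on three impressions (two clicked, one converted).
p_ctr = np.array([0.3, 0.7, 0.5])
p_cvr = np.array([0.2, 0.6, 0.4])
p_cvr_cf = np.array([0.8, 0.4, 0.6])
click = np.array([0.0, 1.0, 1.0])
convert = np.array([0.0, 1.0, 0.0])
loss = dcmt_style_loss(p_ctr, p_cvr, p_cvr_cf, click, convert)
```

The point of the sketch is the structure of the objective: the CVR tower never receives gradients from unclicked impressions directly, but the soft prior couples it to the counterfactual tower everywhere in D, which is the mechanism the abstract credits with debiasing in the entire space.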

Authors (11)
  1. Feng Zhu (140 papers)
  2. Mingjie Zhong (5 papers)
  3. Xinxing Yang (14 papers)
  4. Longfei Li (45 papers)
  5. Lu Yu (87 papers)
  6. Tiehua Zhang (27 papers)
  7. Jun Zhou (370 papers)
  8. Chaochao Chen (87 papers)
  9. Fei Wu (317 papers)
  10. Guanfeng Liu (28 papers)
  11. Yan Wang (734 papers)
Citations (6)
