
Deep Mutual Learning across Task Towers for Effective Multi-Task Recommender Learning (2309.10357v1)

Published 19 Sep 2023 in cs.IR

Abstract: Recommender systems usually leverage multi-task learning methods to simultaneously optimize several objectives arising from multi-faceted user behavior data. The typical way of conducting multi-task learning is to establish appropriate parameter sharing across tasks at the lower layers while reserving a separate task tower for each task at the upper layers. Since the task towers exert a direct impact on the prediction results, we argue that the architecture of standalone task towers is sub-optimal for promoting positive knowledge sharing. Accordingly, we propose the framework of Deep Mutual Learning across task towers, which is compatible with various backbone multi-task networks. Extensive offline experiments and online A/B tests are conducted to evaluate and verify the proposed approach's effectiveness.
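The abstract gives no implementation details, but the core idea of mutual learning across task towers can be sketched with a plain objective: each tower is trained on its own task loss plus a distillation term that pulls its prediction toward the other towers' predictions. The sketch below is a minimal, self-contained illustration assuming two binary tasks (e.g. click and conversion) with sigmoid tower outputs and a pairwise Bernoulli KL as the mutual-learning term; the loss form, the `alpha` weight, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y, eps=1e-7):
    # Per-task binary cross-entropy between predictions p and labels y.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

def kl_bernoulli(p, q, eps=1e-7):
    # KL(p || q) between Bernoulli distributions; used here as the
    # mutual-distillation term between two towers' output probabilities.
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return (p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))).mean()

def mutual_learning_loss(tower_logits, labels, alpha=0.5):
    """Sum of per-task losses plus alpha-weighted pairwise KL terms that
    encourage each tower to stay consistent with every other tower.
    (Illustrative objective; the paper's exact loss may differ.)"""
    probs = [sigmoid(z) for z in tower_logits]
    task_loss = sum(bce(p, y) for p, y in zip(probs, labels))
    mutual = 0.0
    for i, p in enumerate(probs):
        for j, q in enumerate(probs):
            if i != j:
                # Tower j mimics tower i's output (direction is an assumption;
                # in practice the teacher side is usually gradient-detached).
                mutual += kl_bernoulli(p, q)
    return task_loss + alpha * mutual
```

With `alpha = 0` the objective reduces to ordinary independent task towers, so the extra hyperparameter interpolates between standalone training and fully coupled mutual learning.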

Authors (4)
  1. Yi Ren (215 papers)
  2. Ying Du (10 papers)
  3. Bin Wang (750 papers)
  4. Shenzheng Zhang (4 papers)
