AdaTT: Adaptive Task-to-Task Fusion Network for Multitask Learning in Recommendations (2304.04959v2)

Published 11 Apr 2023 in cs.IR

Abstract: Multi-task learning (MTL) aims to enhance the performance and efficiency of machine learning models by simultaneously training them on multiple tasks. However, MTL research faces two challenges: 1) effectively modeling the relationships between tasks to enable knowledge sharing, and 2) jointly learning task-specific and shared knowledge. In this paper, we present a novel model called Adaptive Task-to-Task Fusion Network (AdaTT) to address both challenges. AdaTT is a deep fusion network built with task-specific and optional shared fusion units at multiple levels. By leveraging a residual mechanism and a gating mechanism for task-to-task fusion, these units adaptively learn both shared knowledge and task-specific knowledge. To evaluate AdaTT's performance, we conduct experiments on a public benchmark and an industrial recommendation dataset using various task groups. Results demonstrate AdaTT significantly outperforms existing state-of-the-art baselines. Furthermore, our end-to-end experiments show that the model outperforms alternative designs in practice.
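
The fusion units described in the abstract, per-task expert modules whose outputs are mixed by a gating mechanism, with a residual connection back to each task's own experts, can be sketched concretely. Below is a minimal PyTorch sketch of a single fusion level, assuming MLP experts, softmax gates, and a simple mean over a task's native experts standing in for the residual path; the class and parameter names (AdaTTFusionLevel, experts_per_task, etc.) are illustrative and not taken from the paper or any released code.

```python
import torch
import torch.nn as nn

class AdaTTFusionLevel(nn.Module):
    """One fusion level: per-task expert modules, gated task-to-task
    fusion, and a residual connection to each task's own experts.
    Hypothetical sketch; not the authors' implementation."""

    def __init__(self, num_tasks, input_dim, expert_dim, experts_per_task=1):
        super().__init__()
        self.num_tasks = num_tasks
        # One group ("fusion unit") of MLP experts per task.
        self.experts = nn.ModuleList([
            nn.ModuleList([
                nn.Sequential(nn.Linear(input_dim, expert_dim), nn.ReLU())
                for _ in range(experts_per_task)
            ])
            for _ in range(num_tasks)
        ])
        # Per-task gate producing weights over all experts of all units.
        total_experts = num_tasks * experts_per_task
        self.gates = nn.ModuleList([
            nn.Linear(input_dim, total_experts) for _ in range(num_tasks)
        ])

    def forward(self, task_inputs):
        # task_inputs: list of num_tasks tensors, each (batch, input_dim)
        expert_outs = []  # all expert outputs, flattened across units
        native = []       # per-task residual: mean of its own experts
        for t in range(self.num_tasks):
            outs = [expert(task_inputs[t]) for expert in self.experts[t]]
            expert_outs.extend(outs)
            native.append(torch.stack(outs, dim=0).mean(dim=0))
        # (batch, total_experts, expert_dim)
        all_experts = torch.stack(expert_outs, dim=1)

        fused = []
        for t in range(self.num_tasks):
            # Gate weights over every expert, computed from task t's input.
            w = torch.softmax(self.gates[t](task_inputs[t]), dim=-1)
            mix = torch.einsum('be,bed->bd', w, all_experts)
            # Residual mechanism: add back task t's native expert output.
            fused.append(mix + native[t])
        return fused  # list of num_tasks (batch, expert_dim) tensors

# Example: two tasks sharing one bottom representation at the first level.
# level = AdaTTFusionLevel(num_tasks=2, input_dim=64, expert_dim=32)
# outs = level([features, features])  # -> two (batch, 32) task tensors
```

In a full model, several such levels would be stacked, with each task's fused output at the final level feeding its own task tower; the optional shared fusion unit mentioned in the abstract could be added as an extra expert group visible to every gate.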

Authors (8)
  1. Danwei Li (2 papers)
  2. Zhengyu Zhang (26 papers)
  3. Siyang Yuan (9 papers)
  4. Mingze Gao (7 papers)
  5. Weilin Zhang (10 papers)
  6. Chaofei Yang (5 papers)
  7. Xi Liu (83 papers)
  8. Jiyan Yang (32 papers)
Citations (4)
