
HydaLearn: Highly Dynamic Task Weighting for Multi-task Learning with Auxiliary Tasks (2008.11643v1)

Published 26 Aug 2020 in cs.LG and stat.ML

Abstract: Multi-task learning (MTL) can improve performance on a task by sharing representations with one or more related auxiliary tasks. Usually, MTL networks are trained on a composite loss function formed by a constant-weighted combination of the separate task losses. In practice, constant loss weights lead to poor results for two reasons: (i) the relevance of the auxiliary tasks can gradually drift throughout the learning process; (ii) for mini-batch based optimisation, the optimal task weights vary significantly from one update to the next depending on mini-batch sample composition. We introduce HydaLearn, an intelligent weighting algorithm that connects main-task gain to the individual task gradients in order to inform dynamic loss weighting at the mini-batch level, addressing (i) and (ii). Using HydaLearn, we report performance increases on synthetic data, as well as on two supervised learning domains.
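The abstract describes re-weighting the composite loss on every mini-batch so that auxiliary tasks are weighted by how much they currently help the main task. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' exact algorithm: it uses the clipped inner product between each task's gradient and the main-task gradient as a stand-in for the paper's main-task gain, and the function and variable names are hypothetical.

```python
import torch

def hydalearn_style_weights(model, losses, main_idx=0, eps=1e-8):
    """Per-mini-batch task weights from gradient alignment with the main task.

    Illustrative proxy for HydaLearn's main-task gain (hypothetical, not the
    paper's exact formula): each task is weighted by the clipped inner product
    of its gradient with the main-task gradient, so auxiliary tasks whose
    gradients currently point in a helpful direction get larger weights.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    flat_grads = []
    for loss in losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True,
                                    allow_unused=True)
        # Flatten all parameter gradients into one vector; substitute zeros
        # for parameters a given task's loss does not touch.
        flat = torch.cat([
            (g if g is not None else torch.zeros_like(p)).reshape(-1)
            for g, p in zip(grads, params)
        ])
        flat_grads.append(flat)
    g_main = flat_grads[main_idx]
    # Clamp at zero so tasks whose gradients conflict with the main task
    # are down-weighted to zero rather than subtracted.
    scores = torch.stack([torch.dot(g, g_main).clamp(min=0.0)
                          for g in flat_grads])
    return (scores / (scores.sum() + eps)).detach()

# In the training loop, recompute the weights on every mini-batch:
#   w = hydalearn_style_weights(model, [main_loss, aux_loss])
#   total_loss = w[0] * main_loss + w[1] * aux_loss
#   total_loss.backward(); optimizer.step(); optimizer.zero_grad()
```

Because both the relevance drift (i) and the mini-batch variance (ii) manifest in the gradients themselves, recomputing the weights on every update, rather than fixing them per epoch or per run, is what makes the weighting "highly dynamic".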

Authors (4)
  1. Sam Verboven (9 papers)
  2. Muhammad Hafeez Chaudhary (1 paper)
  3. Jeroen Berrevoets (18 papers)
  4. Wouter Verbeke (32 papers)
Citations (5)