Learning Task Grouping and Overlap in Multi-task Learning (1206.6417v1)

Published 27 Jun 2012 in cs.LG and stat.ML

Abstract: In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear combination are sparse in nature and the overlap in the sparsity patterns of two tasks controls the amount of sharing across these. Our model is based on the assumption that task parameters within a group lie in a low dimensional subspace but allows the tasks in different groups to overlap with each other in one or more bases. Experimental results on four datasets show that our approach outperforms competing methods.

Authors (2)
  1. Abhishek Kumar (172 papers)
  2. Hal Daume III (164 papers)
Citations (517)

Summary

Overview of Learning Task Grouping and Overlap in Multi-Task Learning

This paper introduces a novel approach for multi-task learning (MTL) that addresses the challenges of task grouping and information sharing. The authors propose a framework allowing selective information sharing between tasks, based on their hypothesis that each task parameter vector can be expressed as a sparse linear combination of a finite number of underlying basis tasks. This method navigates the complexities of task relatedness by controlling the extent of sharing through the overlap in sparsity patterns across tasks.
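Concretely, with T tasks and d-dimensional parameter vectors, the decomposition the summary describes can be written as a matrix factorization. The notation below is a reconstruction for illustration; the symbol names are not taken from the paper.

```latex
% W \in R^{d x T}: task parameter matrix (one column w_t per task)
% L \in R^{d x k}: k latent basis tasks; S \in R^{k x T}: combination coefficients
% Each column s_t of S is sparse; shared nonzero positions mean shared bases.
W = L S, \qquad w_t = L\, s_t \quad (s_t \text{ sparse}),
\qquad \text{overlap}(t, t') = \operatorname{supp}(s_t) \cap \operatorname{supp}(s_{t'})
```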

Key Contributions

The central contribution of this work is a structured prior on the task weight matrix, which governs the parameters of the individual prediction tasks. The model allows the formation of groups of related tasks with partial overlap, yielding an adaptable sharing structure. Importantly, tasks can exhibit full, partial, or no overlap, depending on how many basis tasks they share. This contrasts with conventional methods that either assume all tasks are related or force them into disjoint groups.
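As a toy illustration (the values and support sets here are invented, not taken from the paper), suppose there are k = 4 basis tasks:

```latex
% supp(s_1) = {1,2}, supp(s_2) = {2,3}, supp(s_3) = {4}
% Tasks 1 and 2 partially overlap through basis 2; task 3 shares no basis,
% so no information is exchanged between it and the other two tasks.
s_1 = (0.7,\ 0.3,\ 0,\ 0)^{\top}, \quad
s_2 = (0,\ 0.5,\ -0.8,\ 0)^{\top}, \quad
s_3 = (0,\ 0,\ 0,\ 1.2)^{\top}
```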

Theoretical Framework

The proposed model builds upon the assumption that task parameters within a group lie in a low-dimensional subspace, while accommodating overlap between different groups. It introduces latent basis tasks, where each observed task is a linear combination of these bases. The overlap in sparsity patterns of any two tasks controls shared information, preventing negative transfer from unrelated tasks while allowing beneficial interactions between related tasks.
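A natural training objective for this model, written here for concreteness (the specific regularizers are an assumption consistent with the summary, not a quotation of the paper's formulation), combines the per-task empirical loss with an l1 penalty that keeps each coefficient vector sparse and a norm penalty on the basis matrix:

```latex
\min_{L,\, S} \;\; \sum_{t=1}^{T} \sum_{i=1}^{n_t}
  \mathcal{L}\!\left(y_t^{(i)},\; (L s_t)^{\top} x_t^{(i)}\right)
  \; + \; \mu \sum_{t=1}^{T} \lVert s_t \rVert_1
  \; + \; \lambda \lVert L \rVert_F^2
```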

Methodology

The approach employs an alternating optimization strategy, leveraging a trace-norm constraint to maintain a low-dimensional hypothesis space. This balances the number of shared bases and allows effective learning even in the presence of noise or irrelevant features. The method was empirically validated using both synthetic and real-world datasets, demonstrating superior performance compared to existing frameworks such as disjoint-group MTL and no-group MTL.
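A minimal sketch of such an alternating scheme, assuming squared loss and the W = LS factorization above; the function and parameter names are illustrative, and the l1 step uses a simple proximal-gradient update rather than the solvers used in the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def alternating_mtl(Xs, ys, k, mu=0.1, lam=0.1, n_iters=50, step=1e-2):
    """Alternate between the sparse coefficients S and the basis matrix L.

    Xs, ys : lists of per-task design matrices (n_t x d) and targets (n_t,)
    k      : number of latent basis tasks
    mu,lam : l1 penalty on S and Frobenius penalty on L (illustrative choices)
    """
    T = len(Xs)
    d = Xs[0].shape[1]
    rng = np.random.default_rng(0)
    L = rng.normal(scale=0.1, size=(d, k))
    S = np.zeros((k, T))

    for _ in range(n_iters):
        # Step 1: with L fixed, update each task's sparse coefficients s_t by a
        # proximal-gradient step on the squared loss plus mu * ||s_t||_1.
        for t in range(T):
            Z = Xs[t] @ L                        # task features in basis space
            grad = Z.T @ (Z @ S[:, t] - ys[t]) / len(ys[t])
            S[:, t] = soft_threshold(S[:, t] - step * grad, step * mu)

        # Step 2: with S fixed, take a gradient step on the shared basis L for
        # the squared loss plus lam * ||L||_F^2.
        grad_L = 2 * lam * L
        for t in range(T):
            resid = Xs[t] @ L @ S[:, t] - ys[t]
            grad_L += np.outer(Xs[t].T @ resid, S[:, t]) / len(ys[t])
        L -= step * grad_L

    return L, S
```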

Optimization

For regression tasks, the optimization uses a squared loss and is solved with a two-metric projection method. For classification tasks, a logistic loss is used, optimized via Newton-Raphson or gradient descent depending on problem scale and convergence needs.
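For classification, the squared-loss coefficient update in the sketch above can be swapped for a logistic-loss gradient step. The snippet below is a plain proximal-gradient illustration, not the Newton-Raphson or two-metric projection routines the summary mentions.

```python
import numpy as np

def logistic_grad_step(X_t, y_t, L, s_t, mu, step):
    """One proximal-gradient step on logistic loss for a single task.

    X_t : (n_t, d) design matrix, y_t : labels in {0, 1}
    L   : (d, k) basis matrix (held fixed in this step)
    s_t : (k,) current sparse coefficients for task t
    """
    Z = X_t @ L                                  # task features in basis space
    p = 1.0 / (1.0 + np.exp(-(Z @ s_t)))         # predicted probabilities
    grad = Z.T @ (p - y_t) / len(y_t)            # logistic-loss gradient in s_t
    s_new = s_t - step * grad
    # soft-threshold to keep s_t sparse (prox of the l1 penalty)
    return np.sign(s_new) * np.maximum(np.abs(s_new) - step * mu, 0.0)
```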

Results

Empirical results on two synthetic datasets, one with disjoint groups and one with overlapping groups, highlight the model's robustness in learning the underlying task structures. On real datasets spanning regression and classification problems, the method consistently outperformed baseline multi-task and single-task learning approaches.

Implications and Future Directions

The implications of this research are substantial for efficiently managing and utilizing task relatedness in MTL frameworks. By introducing a mechanism for task overlap, this model offers a more nuanced approach to learning complex inter-task relationships. Future avenues could explore extensions involving hierarchies or more complex interaction patterns among tasks, potentially enhancing applicability across diverse domains and larger task pools.

The research provides a significant step towards refined task-relatedness modeling in multi-task environments, potentially influencing subsequent developments in machine learning methodologies. The scalability and adaptability of this approach make it a noteworthy addition to the multi-task learning landscape.