
Deep Multi-task Representation Learning: A Tensor Factorisation Approach (1605.06391v2)

Published 20 May 2016 in cs.LG

Abstract: Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework that learns cross-task sharing structure at every layer in a deep network. Our approach is based on generalising the matrix factorisation techniques explicitly or implicitly used by many conventional MTL algorithms to tensor factorisation, to realise automatic learning of end-to-end knowledge sharing in deep networks. This is in contrast to existing deep learning approaches that need a user-defined multi-task sharing strategy. Our approach applies to both homogeneous and heterogeneous MTL. Experiments demonstrate the efficacy of our deep multi-task representation learning in terms of both higher accuracy and fewer design choices.

Citations (243)

Summary

  • The paper introduces a tensor factorisation method that enables deep neural networks to share representations across multiple tasks.
  • It uses Tucker and Tensor Train decompositions to automatically determine optimal parameter sharing without manual design.
  • Empirical results on tasks such as MNIST digit recognition, facial attribute classification, and multi-alphabet character recognition show improved accuracy and reduced design complexity in multi-task learning architectures.

Deep Multi-Task Representation Learning: A Tensor Factorisation Approach

In this paper, Yang and Hospedales propose a method for multi-task learning (MTL) in deep neural networks (DNNs) based on tensor factorisation. Their work aims to overcome the limitations of traditional linear-model MTL by learning the cross-task sharing structure automatically, at every layer of a deep network. Tensor factorisation generalises the matrix factorisation used, explicitly or implicitly, by many conventional shallow MTL algorithms, and it removes the need for the manually defined task-sharing strategies that existing deep MTL approaches rely on.

Key Contributions

  1. Tensor Factorisation for Deep MTL: The authors introduce a method for sharing structure at every layer of a DNN through tensor factorisation. The approach covers both homogeneous and heterogeneous MTL settings, in which tasks share the same output structure or differ in their output spaces, respectively. Tensor factorisation generalises the matrix-based sharing of shallow MTL methods, thereby extending knowledge sharing to multiple network layers.
  2. Automatic Knowledge Sharing: By applying tensor decompositions, specifically the Tucker and Tensor Train decompositions, the approach offers a systematic way to discover where and how much to share parameters across tasks (a minimal sketch follows this list). This removes the burden of the user-defined sharing structures common in existing deep MTL solutions.
  3. Practical Efficacy and Design Simplification: The proposed method both improves the accuracy of deep models and reduces the complexity of designing DNN architectures, since it avoids manually specifying which layers are shared and which are task-specific.
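To make the layer-wise sharing concrete, below is a minimal NumPy sketch (not the authors' released code) of a Tucker-style factorisation: per-task weight matrices are stacked into a third-order tensor and decomposed via a truncated higher-order SVD, with the chosen ranks controlling how much structure is shared across the task mode. All shapes, ranks, and function names here are illustrative assumptions.

```python
import numpy as np

def hosvd(tensor, ranks):
    """Truncated higher-order SVD (a standard way to compute a Tucker
    decomposition): returns a small core tensor plus one factor matrix
    per mode. Smaller ranks force more sharing across the task mode."""
    factors = []
    for mode, rank in enumerate(ranks):
        # Unfold along `mode` and keep the leading left singular vectors.
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u[:, :rank])
    # Project the tensor onto the factor bases to obtain the core.
    core = tensor
    for u in factors:
        core = np.tensordot(core, u, axes=([0], [0]))
    return core, factors

# Hypothetical example: stack 4 per-task weight matrices (64 x 32 each)
# into a (d_in, d_out, n_tasks) tensor and factorise it.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32, 4))
core, (U_in, U_out, U_task) = hosvd(W, ranks=(16, 16, 2))

# Reconstruction: two shared factors plus a small per-task factor.
W_hat = np.einsum('abc,ia,jb,kc->ijk', core, U_in, U_out, U_task)
print(core.shape, W_hat.shape)  # (16, 16, 2) (64, 32, 4)
```

In the paper the factors are learned end-to-end by backpropagation rather than computed post hoc as above; the sketch only illustrates where the shared and task-specific parameters live.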

Methodology and Experiments

The method applies tensor factorisation to decompose the weights of each DNN layer into shared and task-specific components. The authors implement their framework in TensorFlow and demonstrate its effectiveness on several MTL benchmarks, including digit recognition on MNIST and heterogeneous tasks such as facial attribute classification and multi-alphabet character recognition.
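As a complement to the decomposition sketch above, the following is a hypothetical forward pass through a Tucker-factorised layer for a single task (again with assumed shapes, not the authors' TensorFlow code): the effective weight matrix is rebuilt on the fly from the shared factors and that task's row of the task factor.

```python
import numpy as np

def factorised_layer(x, core, U_in, U_out, u_task):
    """Forward pass through one Tucker-factorised layer for a single task.

    x      : (batch, d_in) inputs for this task
    core   : (r_in, r_out, r_task) shared core tensor
    U_in   : (d_in, r_in) shared input-side factor
    U_out  : (d_out, r_out) shared output-side factor
    u_task : (r_task,) this task's row of the task factor matrix
    """
    # Collapse the task mode of the core with this task's factor ...
    core_t = np.tensordot(core, u_task, axes=([2], [0]))   # (r_in, r_out)
    # ... then rebuild the effective weight matrix for this task.
    W_t = U_in @ core_t @ U_out.T                          # (d_in, d_out)
    return np.maximum(x @ W_t, 0.0)                        # ReLU activation

# Illustrative usage with the same assumed shapes as above.
rng = np.random.default_rng(1)
x = rng.standard_normal((8, 64))                 # a batch of 8 inputs
core = rng.standard_normal((16, 16, 2))
U_in = rng.standard_normal((64, 16))
U_out = rng.standard_normal((32, 16))
U_task = rng.standard_normal((4, 2))             # one row per task
y = factorised_layer(x, core, U_in, U_out, U_task[0])
print(y.shape)                                   # (8, 32)
```

In training, all of these factors would be ordinary trainable variables updated jointly across tasks; only u_task differs between tasks, which is what makes the sharing automatic rather than hand-designed.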

  • MNIST Task: For binary one-vs-all classification, the proposed DMTRL methods outperform both single-task learning (STL) and user-defined MTL approaches. Notably, DMTRL with the Tensor Train decomposition (DMTRL-TT) achieves the lowest error rates, underscoring the value of end-to-end deep multi-task representation learning.
  • Facial Attribute and Multi-Alphabet Recognition: In the more challenging heterogeneous MTL setting, the proposed DMTRL methods again surpass the baselines. DMTRL-Tucker consistently outperforms both STL and user-defined MTL for gender and age classification on the AdienceFaces dataset, and in the Omniglot experiment the DMTRL methods deliver significant improvements across the varied alphabets.

Implications and Future Directions

The findings underscore the potential of tensor factorisation to dynamically determine optimal sharing structures within deep networks, thus enhancing both multi-task performance and architectural efficiency. Future work could explore automated tuning mechanisms within tensor decompositions to optimise rank selection and extend applicability to complex, real-world multi-task scenarios. Furthermore, expanding this approach to model other structural variations in data beyond task parallelism could open new avenues for representation learning within AI systems.

By effectively reducing the challenges associated with designing MTL architectures, this paper represents a significant contribution to the development of adaptive representation learning methodologies. It provides a robust foundation for researchers to build upon in enhancing the generalisability and scalability of MTL systems in both homogeneous and heterogeneous task domains.