
Design Perspectives of Multitask Deep Learning Models and Applications (2209.13444v1)

Published 27 Sep 2022 in cs.LG, cs.AI, and cs.CV

Abstract: In recent years, multi-task learning has proven highly successful across a variety of applications. While single-task training has delivered strong results, it ignores valuable information from related tasks that could improve a model's estimates. When tasks are related, multi-task learning produces models that generalize better. We aim to enhance the feature mapping of multi-task models by sharing features among related tasks and by inductive transfer learning. We are also interested in learning the relationships among tasks in order to extract greater benefit from multi-task learning. In this chapter, our objective is to review existing multi-task models, compare their performance, describe the methods used to evaluate them, discuss the problems faced when designing and implementing these models in various domains, and highlight the advantages and milestones they have achieved.
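The "sharing features among related tasks" the abstract refers to is most commonly realized as hard parameter sharing: a shared trunk learns a common feature mapping, and small task-specific heads branch off it. The sketch below is a minimal illustration of that pattern in PyTorch, not an implementation from the paper; the class name, dimensions, and the two example tasks are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Hard-parameter-sharing multi-task model (illustrative): a shared
    trunk maps inputs to common features; one lightweight head per task."""
    def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: list[int]):
        super().__init__()
        self.shared = nn.Sequential(           # shared feature mapping
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        self.heads = nn.ModuleList(            # one output head per task
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        z = self.shared(x)                     # features reused by all tasks
        return [head(z) for head in self.heads]

# Usage (hypothetical tasks): a 3-way classification task and a regression
# task trained jointly; summing the losses lets gradients from both tasks
# update the shared trunk, which is the source of the inductive transfer.
model = HardSharingMTL(in_dim=16, hidden_dim=64, task_out_dims=[3, 1])
x = torch.randn(8, 16)
y_cls = torch.randint(0, 3, (8,))              # task 1 labels
y_reg = torch.randn(8, 1)                      # task 2 targets
out_cls, out_reg = model(x)
loss = nn.CrossEntropyLoss()(out_cls, y_cls) + nn.MSELoss()(out_reg, y_reg)
loss.backward()
```

Unweighted loss summation is the simplest joint objective; surveys such as this one also discuss learned task weightings and task-relationship modeling as refinements of this basic setup.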

Authors (6)
  1. Yeshwant Singh (1 paper)
  2. Anupam Biswas (11 papers)
  3. Angshuman Bora (1 paper)
  4. Debashish Malakar (2 papers)
  5. Subham Chakraborty (5 papers)
  6. Suman Bera (1 paper)
