Joint-Task Regularization for Partially Labeled Multi-Task Learning (2404.01976v1)
Abstract: Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets. Most multi-task learning methods depend on fully labeled datasets wherein each input example is accompanied by ground-truth labels for all target tasks. Unfortunately, curating such datasets can be prohibitively expensive and impractical, especially for dense prediction tasks which require per-pixel labels for each image. With this in mind, we propose Joint-Task Regularization (JTR), an intuitive technique that leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space, improving learning when data is not fully labeled for all tasks. JTR stands out from existing approaches in that it regularizes all tasks jointly rather than separately in pairs; it therefore achieves linear complexity in the number of tasks, whereas previous methods scale quadratically. To demonstrate the validity of our approach, we extensively benchmark our method across a wide variety of partially labeled scenarios based on NYU-v2, Cityscapes, and Taskonomy.
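The abstract's core idea, that regularizing all tasks in one shared latent space costs one comparison per example instead of one per task pair, can be illustrated with a toy sketch. Everything below is a hypothetical simplification for intuition only (the function name, the linear "encoder," and the dictionary interface are all invented here, not the paper's actual architecture): predictions for all tasks are concatenated into one joint vector, ground truth is substituted in wherever a task happens to be labeled, and a single distance in the encoded joint space serves as the regularizer.

```python
import numpy as np

def joint_task_regularizer(preds, labels, mask, enc_w):
    """Toy JTR-style loss (illustrative sketch, not the paper's model).

    preds:  dict task -> (D,) prediction vector
    labels: dict task -> (D,) ground-truth vector (ignored if unlabeled)
    mask:   dict task -> bool, True if this task is labeled for the example
    enc_w:  (T*D, K) weights of a toy linear encoder into the joint space
    """
    tasks = sorted(preds)
    # Joint "student" vector: all task predictions concatenated.
    student = np.concatenate([preds[t] for t in tasks])
    # Joint "teacher" vector: substitute ground truth where labels exist,
    # so partially labeled examples still contribute a training signal.
    teacher = np.concatenate(
        [labels[t] if mask[t] else preds[t] for t in tasks]
    )
    # One shared encoding and one distance: cost grows linearly with the
    # number of tasks, unlike pairwise cross-task consistency (quadratic).
    z_student, z_teacher = student @ enc_w, teacher @ enc_w
    return float(np.mean((z_student - z_teacher) ** 2))
```

Note that when no task is labeled, the teacher equals the student and the regularizer vanishes; each newly available label perturbs only its own slice of the joint vector, yet the penalty is still computed once over the shared latent space rather than once per task pair.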
Authors: Kento Nishi, Junsik Kim, Wanhua Li, Hanspeter Pfister