Multi-Label Continual Learning for the Medical Domain: A Novel Benchmark (2404.06859v3)

Published 10 Apr 2024 in cs.CV and cs.AI

Abstract: Despite the critical importance of the medical domain in Deep Learning, most research in this area focuses solely on training models in static environments. Only in recent years has research begun to address dynamic environments and tackle the Catastrophic Forgetting problem through Continual Learning (CL) techniques. Previous studies have primarily focused on scenarios such as Domain Incremental Learning and Class Incremental Learning (CIL), which do not fully capture the complexity of real-world applications. Therefore, in this work, we propose a novel benchmark that combines the challenges of new class arrivals and domain shifts in a single framework by considering the New Instances and New Classes (NIC) scenario. This benchmark aims to model a realistic CL setting for the multi-label classification problem in medical imaging, and it encompasses a greater number of tasks than previously tested scenarios. Specifically, our benchmark consists of two datasets (NIH and CXP), nineteen classes, and seven tasks, a longer stream than those previously tested. To address common challenges found in the CIL and NIC scenarios (e.g., the task inference problem), we propose a novel approach called Replay Consolidation with Label Propagation (RCLP). Our method surpasses existing approaches, exhibiting superior performance with minimal forgetting.
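
The abstract names Replay Consolidation with Label Propagation (RCLP) only at a high level. As a rough illustration of the general idea, here is a minimal sketch of replay combined with label propagation for multi-label continual learning; the ReplayBuffer class, the propagate_labels helper, the 0.5 pseudo-label threshold, and the label-mask convention are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch of replay + label propagation for multi-label continual
# learning. All names, the buffer policy, and the propagation rule are
# illustrative assumptions, not the paper's RCLP implementation.
import random

import torch
import torch.nn as nn


class ReplayBuffer:
    """Fixed-capacity reservoir of (image, labels, label-mask) triples."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, x, y, mask):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y, mask))
        else:
            # Reservoir sampling keeps a uniform sample of the whole stream.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, y, mask)

    def sample(self, k: int):
        batch = random.sample(self.items, min(k, len(self.items)))
        xs, ys, masks = zip(*batch)
        return torch.stack(xs), torch.stack(ys), torch.stack(masks)


def propagate_labels(model, x, y, mask, threshold=0.5):
    """Fill in labels the current annotation does not cover (mask == 0)
    with thresholded predictions from `model` (pseudo-labels)."""
    with torch.no_grad():
        pseudo = (torch.sigmoid(model(x)) > threshold).float()
    return torch.where(mask.bool(), y, pseudo)


def train_task(model, old_model, loader, buffer, optimizer, device="cpu"):
    """One task of the stream: train on new data plus replayed samples."""
    criterion = nn.BCEWithLogitsLoss()
    model.train()
    for x, y, mask in loader:  # mask marks classes annotated in this task
        x, y, mask = x.to(device), y.to(device), mask.to(device)
        if old_model is not None:
            # The old model labels the classes this task does not annotate.
            y = propagate_labels(old_model, x, y, mask)
        loss = criterion(model(x), y)
        if buffer.items:
            rx, ry, rmask = (t.to(device) for t in buffer.sample(x.size(0)))
            # The current model labels new classes on old (replayed) data.
            ry = propagate_labels(model, rx, ry, rmask)
            loss = loss + criterion(model(rx), ry)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        for i in range(x.size(0)):
            buffer.add(x[i].detach().cpu(), y[i].cpu(), mask[i].cpu())
```

In this sketch, labels that a task does not annotate are filled in with a model's predictions: the previous model supplies old-class labels for new data, and the current model supplies new-class labels for replayed data. Bidirectional filling of this kind is one common way to sidestep the task inference problem the abstract mentions.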

Authors (4)
  1. Marina Ceccon (6 papers)
  2. Davide Dalle Pezze (18 papers)
  3. Alessandro Fabris (15 papers)
  4. Gian Antonio Susto (58 papers)
Citations (2)
