
DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion (2111.11326v3)

Published 22 Nov 2021 in cs.CV and cs.LG

Abstract: Deep network architectures struggle to continually learn new tasks without forgetting the previous tasks. A recent trend indicates that dynamic architectures based on an expansion of the parameters can reduce catastrophic forgetting efficiently in continual learning. However, existing approaches often require a task identifier at test-time, need complex tuning to balance the growing number of parameters, and barely share any information across tasks. As a result, they struggle to scale to a large number of tasks without significant overhead. In this paper, we propose a transformer architecture based on a dedicated encoder/decoder framework. Critically, the encoder and decoder are shared among all tasks. Through a dynamic expansion of special tokens, we specialize each forward of our decoder network on a task distribution. Our strategy scales to a large number of tasks while having negligible memory and time overheads due to strict control of the parameters expansion. Moreover, this efficient strategy doesn't need any hyperparameter tuning to control the network's expansion. Our model reaches excellent results on CIFAR100 and state-of-the-art performances on the large-scale ImageNet100 and ImageNet1000 while having less parameters than concurrent dynamic frameworks.

Authors (4)
  1. Arthur Douillard (20 papers)
  2. Alexandre Ramé (23 papers)
  3. Guillaume Couairon (17 papers)
  4. Matthieu Cord (129 papers)
Citations (247)

Summary

DyTox: Transformers for Continual Learning with Dynamic Token Expansion

DyTox, a transformer-based method for continual learning, introduces a dynamic token expansion mechanism for task-specific conditioning. The paper evaluates DyTox against established baselines, highlighting its architectural innovations and reporting detailed performance metrics. The method conditions the network on each task through a Task-Attention Block (TAB) rather than through affine feature modulation, distinguishing it from conditioning techniques such as FiLM that originated in VQA.
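To make this conditioning concrete, the following is a minimal sketch of a task-attention style block in which a single task token cross-attends to the patch features. The module name, dimensions, and layer layout are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class TaskAttentionBlock(nn.Module):
    """Cross-attention block where one task token queries the patch features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, task_token: torch.Tensor, patch_tokens: torch.Tensor) -> torch.Tensor:
        # task_token: (B, 1, dim); patch_tokens: (B, N, dim)
        seq = self.norm1(torch.cat([task_token, patch_tokens], dim=1))
        # Only the task token acts as the query; the full sequence provides keys/values.
        attended, _ = self.attn(seq[:, :1], seq, seq)
        task_token = task_token + attended
        return task_token + self.mlp(self.norm2(task_token))
```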

A key aspect of DyTox's design is dynamic token expansion: a learned task token serves as a query over the visual patch features, so each decoder forward pass is specialized to one task while the encoder and decoder themselves remain shared, enhancing the model's capacity to handle multiple tasks over time. An ablation using a ResNet backbone already outperformed most baselines but was surpassed by the full DyTox framework, which relies on an end-to-end transformer. The architecture is also efficient at run time, operating at a speed comparable to a ResNet18 with only a minor time overhead of 2.24% per task.
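The sketch below illustrates how such a block could support dynamic token expansion: one small learned token and one classifier head are added per task while the decoder stays shared, and each forward pass applies every task token in turn, so no task identifier is needed at test time. The class and method names (DyToxSketch, add_task) are hypothetical, and the code reuses the TaskAttentionBlock sketched above.

```python
import torch
import torch.nn as nn


class DyToxSketch(nn.Module):
    """Shared decoder specialized per task by one learned task token each."""

    def __init__(self, dim: int = 384):
        super().__init__()
        self.dim = dim
        self.decoder = TaskAttentionBlock(dim)  # shared across all tasks
        self.task_tokens = nn.ParameterList()   # grows by one token per task
        self.heads = nn.ModuleList()             # one small classifier per task

    def add_task(self, num_new_classes: int) -> None:
        # Per-task expansion cost: one dim-sized token plus one linear head.
        self.task_tokens.append(nn.Parameter(torch.zeros(1, 1, self.dim)))
        self.heads.append(nn.Linear(self.dim, num_new_classes))

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) features from the shared encoder.
        # Call add_task() at least once before the first forward pass.
        logits = []
        for token, head in zip(self.task_tokens, self.heads):
            query = token.expand(patch_tokens.size(0), -1, -1)
            task_embedding = self.decoder(query, patch_tokens)  # (B, 1, dim)
            logits.append(head(task_embedding.squeeze(1)))
        return torch.cat(logits, dim=-1)  # logits over all classes seen so far
```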

The paper also details DyTox's rehearsal protocol, inspired by iCaRL, which specifies how rehearsal samples from previous tasks are selected and replayed. Combining MixUp with DyTox proves to be a pivotal enhancement, particularly for transformer models, yielding state-of-the-art results on ImageNet100 and ImageNet1000 and competitive performance on CIFAR100.
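As an illustration of the MixUp component, the helper below performs a standard MixUp step on a batch; the function name mixup_batch and the interpolation parameter alpha = 0.8 are assumptions for illustration, not the paper's exact recipe. In a continual-learning loop, the batch would typically concatenate current-task images with rehearsal samples before mixing, so that old and new classes are interpolated together.

```python
import torch


def mixup_batch(images: torch.Tensor, one_hot_targets: torch.Tensor, alpha: float = 0.8):
    """Convexly combine random pairs of samples and their soft labels (generic MixUp)."""
    # Draw the mixing coefficient from a Beta(alpha, alpha) distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_targets = lam * one_hot_targets + (1.0 - lam) * one_hot_targets[perm]
    return mixed_images, mixed_targets
```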

Despite DyTox's demonstrated strengths, the paper acknowledges limitations of current benchmarks, which may not fully capture realistic scenarios with non-mutually exclusive tasks. In response, potential extensions are proposed, including handling unbalanced data distributions and incorporating insights from related work such as "Learning to Segment the Tail" and the CORe50 scenarios.

The wider implications of DyTox suggest a significant impact on the development of memory-efficient models capable of robust continual learning across a diverse range of applications. Its design principle of minimizing hyperparameter dependencies aligns with the objective of achieving strong baseline performance across varying task settings without bespoke tuning. Looking forward, the research may spur further advancements in transformer-based continual learning frameworks, with the potential to address increasingly complex real-world deployment scenarios. Overall, DyTox represents a noteworthy stride in leveraging dynamic transformer architectures for scalable and efficient continual learning.
