Res-Attn: An Enhanced Res-Tuning Approach with Lightweight Attention Mechanism (2312.16916v1)

Published 28 Dec 2023 in cs.CV

Abstract: Res-Tuning introduces a flexible and efficient paradigm for model tuning, showing that tuners decoupled from the backbone network can achieve performance comparable to traditional methods. Existing methods commonly construct the tuner as a set of trainable low-rank decomposition matrices, positing that a low-rank subspace suffices for adapting pre-trained foundation models to new scenarios. In this work, we present an advanced, efficient tuner augmented with low-rank attention, termed Res-Attn, which also adheres to the Res-Tuning framework. Res-Attn utilizes a parallel multi-head attention module equipped with low-rank projections for query, key, and value to execute streamlined attention operations. Through training this lightweight attention module, Res-Attn facilitates adaptation to new scenarios. Our extensive experiments across a range of discriminative and generative tasks showcase the superior performance of our method compared to existing alternatives.
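
The abstract describes the tuner as a parallel multi-head attention module whose query, key, and value projections are low-rank. Below is a minimal PyTorch sketch of what such a module could look like. The paper's exact architecture, ranks, head counts, and the way the tuner's output is composed with the frozen backbone are not given in this abstract, so the class name, the `rank` and `num_heads` parameters, and the residual combination at the end are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankAttentionTuner(nn.Module):
    """Hypothetical sketch of a Res-Attn-style tuner: a lightweight
    multi-head attention module with low-rank Q/K/V projections,
    meant to run in parallel to a frozen backbone (Res-Tuning style)."""

    def __init__(self, dim: int, num_heads: int = 4, rank: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        # Low-rank factorization (dim -> rank -> dim) for each projection,
        # so trainable parameters scale with `rank`, not the backbone width.
        self.q_down = nn.Linear(dim, rank, bias=False)
        self.q_up = nn.Linear(rank, dim, bias=False)
        self.k_down = nn.Linear(dim, rank, bias=False)
        self.k_up = nn.Linear(rank, dim, bias=False)
        self.v_down = nn.Linear(dim, rank, bias=False)
        self.v_up = nn.Linear(rank, dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) hidden states tapped from the backbone.
        b, n, d = x.shape
        q = self.q_up(self.q_down(x))
        k = self.k_up(self.k_down(x))
        v = self.v_up(self.v_down(x))
        # Split into heads: (batch, heads, tokens, head_dim).
        q, k, v = (t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(b, n, d)
        # Assumed residual combination with the backbone features; the
        # paper's actual composition scheme may differ.
        return x + self.out(attn)

# Example usage (shapes are illustrative):
tuner = LowRankAttentionTuner(dim=768, num_heads=4, rank=8)
tokens = torch.randn(2, 197, 768)  # e.g., ViT-B/16 patch tokens
out = tuner(tokens)                # (2, 197, 768)
```

Because only the low-rank factors and the output projection are trained while the backbone stays frozen, the tunable parameter count is governed by `rank`, which matches the parameter-efficient spirit of the Res-Tuning framework the abstract describes.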
