Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities (2401.11143v4)
Abstract: We propose the Multi-Head Density Adaptive Attention Mechanism (DAAM), a novel probabilistic attention framework that can be used for Parameter-Efficient Fine-Tuning (PEFT), and the Density Adaptive Transformer (DAT), designed to enhance information aggregation across multiple modalities, including speech, text, and vision. DAAM integrates learnable mean and variance parameters into a multi-head attention framework, enabling the heads to collectively model any probability distribution and dynamically recalibrate feature significance. The method yields significant improvements, particularly on highly non-stationary data, surpassing state-of-the-art attention techniques by up to approximately +20% (absolute) in accuracy. Empirically, DAAM exhibits superior adaptability and efficacy across a diverse range of tasks, including emotion recognition in speech, image classification, and text classification, establishing its robustness and versatility across modalities. Furthermore, we introduce the Importance Factor, a new learning-based metric that improves the explainability of models trained with DAAM-based methods.
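To make the mechanism concrete, here is a minimal PyTorch sketch of a multi-head density adaptive gate. It assumes each head carries a learnable mean and variance and re-weights features by a Gaussian density evaluated at the inputs, which is one plausible reading of the abstract; the class name, parameterization, and tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class DensityAdaptiveAttention(nn.Module):
    """Sketch of a multi-head density adaptive attention gate.

    Each head holds a learnable mean and log-variance; features are
    re-weighted by a Gaussian density centered at the learned mean.
    Illustrative reconstruction only, not the paper's official code.
    """

    def __init__(self, num_heads: int, feature_dim: int):
        super().__init__()
        assert feature_dim % num_heads == 0, "feature_dim must split evenly across heads"
        self.num_heads = num_heads
        self.head_dim = feature_dim // num_heads
        # One learnable mean and log-variance per head (hypothetical
        # parameterization; log-space keeps the variance positive).
        self.mean = nn.Parameter(torch.zeros(num_heads, 1, 1))
        self.log_var = nn.Parameter(torch.zeros(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, feature_dim) -> (batch, heads, seq_len, head_dim)
        b, t, d = x.shape
        xh = x.view(b, t, self.num_heads, self.head_dim).permute(0, 2, 1, 3)
        var = self.log_var.exp()
        # Gaussian density of each feature around the head's learned mean
        # acts as a soft, per-head recalibration of feature significance.
        gauss = torch.exp(-((xh - self.mean) ** 2) / (2.0 * var))
        out = xh * gauss
        return out.permute(0, 2, 1, 3).reshape(b, t, d)


if __name__ == "__main__":
    gate = DensityAdaptiveAttention(num_heads=4, feature_dim=64)
    x = torch.randn(2, 10, 64)
    print(gate(x).shape)  # torch.Size([2, 10, 64])
```

Note the parameter count: each head adds only two scalars (a mean and a log-variance), so a gate of this form stays negligible next to the backbone it modulates, consistent with the PEFT framing in the abstract.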