Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning (2202.10629v4)
Abstract: In data-rich domains such as vision, language, and speech, deep learning prevails, delivering high-performance task-specific models and even learning general task-agnostic representations that can be efficiently finetuned for downstream tasks. However, deep learning in resource-limited domains still faces multiple challenges: (i) limited data, (ii) constrained model development cost, and (iii) a lack of adequate pre-trained models for effective finetuning. This paper provides an overview of model reprogramming, which bridges this gap. Model reprogramming enables resource-efficient cross-domain machine learning by repurposing and reusing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning, even when the source and target domains are vastly different. In many applications, model reprogramming outperforms transfer learning and training from scratch. This paper elucidates the methodology of model reprogramming, summarizes existing use cases, provides a theoretical explanation for its success, and concludes with a discussion of open research questions and opportunities. A list of model reprogramming studies is actively maintained and updated at https://github.com/IBM/model-reprogramming.
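To make the mechanism concrete: model reprogramming typically wraps a trainable input transformation around a frozen source model and maps the source model's outputs onto target labels. Below is a minimal PyTorch sketch under stated assumptions: `SourceModel` is a hypothetical stand-in for a real pre-trained classifier, and the zero-padding, additive "program", and block-wise many-to-one label mapping are illustrative design choices, not the exact configuration of any study surveyed in the paper.

```python
# Minimal sketch of model reprogramming: a frozen source classifier is
# repurposed for a new target task by learning only an additive input
# "program", combined with a fixed many-to-one source-to-target label mapping.
# SourceModel is a hypothetical stand-in for a real pre-trained network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SourceModel(nn.Module):
    """Stand-in for a frozen pre-trained source-domain classifier."""

    def __init__(self, source_size: int = 64, num_source_classes: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * source_size * source_size, num_source_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class Reprogrammer(nn.Module):
    """Trainable input transform around a frozen model, plus label mapping."""

    def __init__(self, source_model: nn.Module, target_size: int = 32,
                 source_size: int = 64, num_source_classes: int = 100,
                 num_target_classes: int = 10):
        super().__init__()
        self.source_model = source_model
        for p in self.source_model.parameters():  # the source model stays frozen
            p.requires_grad_(False)
        self.pad = (source_size - target_size) // 2
        # Universal additive perturbation (the "program") learned on target data.
        self.delta = nn.Parameter(torch.zeros(1, 3, source_size, source_size))
        # Fixed many-to-one mapping: each target class averages the probability
        # mass of a disjoint block of source classes.
        per_class = num_source_classes // num_target_classes
        mapping = torch.zeros(num_source_classes, num_target_classes)
        for t in range(num_target_classes):
            mapping[t * per_class:(t + 1) * per_class, t] = 1.0 / per_class
        self.register_buffer("mapping", mapping)

    def forward(self, x_target: torch.Tensor) -> torch.Tensor:
        # Zero-pad the small target input up to the source input size,
        # add the learned program, and query the frozen source model.
        x = F.pad(x_target, [self.pad] * 4)
        source_probs = F.softmax(self.source_model(x + self.delta), dim=1)
        # Aggregate source-class probabilities into target-class probabilities.
        return source_probs @ self.mapping


# One illustrative training step: only `delta` receives gradients.
model = Reprogrammer(SourceModel())
opt = torch.optim.Adam([model.delta], lr=1e-2)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
loss = F.nll_loss(torch.log(model(x) + 1e-8), y)
loss.backward()
opt.step()
```

Only the input program is updated; the source model's weights are never touched, which is what makes reprogramming attractive when finetuning is infeasible. In the black-box setting discussed in the paper, the same perturbation can instead be trained with zeroth-order optimization using only the model's predictions.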