Continual Referring Expression Comprehension via Dual Modular Memorization (2311.14909v1)
Abstract: Referring Expression Comprehension (REC) aims to localize the image region of an object described by a natural-language expression. While promising performance has been demonstrated, existing REC algorithms make the strong assumption that all training data are available upfront, which limits their practicality in real-world scenarios. In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC in which a model learns from a stream of incoming tasks. To continuously improve the model on sequential tasks without forgetting previously learned knowledge and without repeatedly re-training from scratch, we propose an effective baseline method named Dual Modular Memorization (DMM), which alleviates catastrophic forgetting via two memorization modules: Implicit-Memory and Explicit-Memory. Specifically, the former constrains drastic changes to parameters that were important for old tasks when learning a new task, while the latter maintains a buffer pool that dynamically selects and stores representative samples of each seen task for future rehearsal. We create three benchmarks for the new CREC setting by re-splitting three widely used REC datasets, RefCOCO, RefCOCO+ and RefCOCOg, into sequential tasks. Extensive experiments on the constructed benchmarks demonstrate that our DMM significantly outperforms other alternatives on two popular REC backbones. We make the source code and benchmarks publicly available to foster future progress in this field: https://github.com/zackschen/DMM.
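The abstract describes two complementary mechanisms: a regularization-style module that protects parameters important to old tasks (in the spirit of EWC/SI) and a rehearsal buffer of stored samples. Below is a minimal, hypothetical PyTorch sketch of both ideas. The class names `ImplicitMemory` and `ExplicitMemory`, the `importance` dictionary, and the reservoir-style replacement are illustrative assumptions for exposition, not the authors' actual implementation or sample-selection strategy.

```python
# Hypothetical sketch of the two memorization modules, under the assumptions
# stated above; not the paper's actual code.
import random
import torch
import torch.nn as nn


class ImplicitMemory:
    """Penalize drastic changes to parameters that were important for old tasks."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam
        self.anchors = {}      # parameter snapshots taken after each old task
        self.importance = {}   # per-parameter importance weights

    def consolidate(self, model: nn.Module, importance: dict):
        # Snapshot current parameters and their estimated importance
        # (e.g., accumulated squared gradients on the old task's data).
        for name, p in model.named_parameters():
            self.anchors[name] = p.detach().clone()
            self.importance[name] = importance[name].detach().clone()

    def penalty(self, model: nn.Module) -> torch.Tensor:
        # Quadratic penalty pulling important parameters toward their anchors.
        loss = torch.zeros((), device=next(model.parameters()).device)
        for name, p in model.named_parameters():
            if name in self.anchors:
                loss = loss + (self.importance[name] * (p - self.anchors[name]) ** 2).sum()
        return self.lam * loss


class ExplicitMemory:
    """Fixed-size buffer storing samples of each seen task for rehearsal."""

    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.buffer = []  # list of (sample, task_id) pairs

    def add(self, samples, task_id: int):
        for s in samples:
            if len(self.buffer) < self.capacity:
                self.buffer.append((s, task_id))
            else:
                # Reservoir-style replacement keeps the buffer a spread of tasks.
                self.buffer[random.randrange(self.capacity)] = (s, task_id)

    def sample(self, batch_size: int):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

When training on a new task, one would presumably minimize the task loss plus `implicit_mem.penalty(model)` while mixing replayed samples from `explicit_mem.sample(batch_size)` into each mini-batch; DMM's "dynamic selection" of representative samples is more involved than the random replacement sketched here.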
Authors: Heng Tao Shen, Cheng Chen, Peng Wang, Lianli Gao, Meng Wang, Jingkuan Song