
Balancing the Causal Effects in Class-Incremental Learning (2402.10063v1)

Published 15 Feb 2024 in cs.LG

Abstract: Class-Incremental Learning (CIL) is a practical and challenging problem for achieving general artificial intelligence. Recently, Pre-Trained Models (PTMs) have led to breakthroughs in both visual and natural language processing tasks. Despite recent studies showing PTMs' potential ability to learn sequentially, a plethora of work indicates the necessity of alleviating the catastrophic forgetting of PTMs. Through a pilot study and a causal analysis of CIL, we reveal that the crux lies in the imbalanced causal effects between new and old data. Specifically, new data encourage the model to adapt to new classes while hindering the adaptation to old classes; likewise, old data encourage the model to adapt to old classes while hindering the adaptation to new classes. In other words, the adaptation processes for new and old classes conflict from the causal perspective. To alleviate this problem, we propose Balancing the Causal Effects (BaCE) in CIL. Concretely, BaCE introduces two objectives that build causal paths from both new and old data to the predictions of new and old classes, respectively. In this way, the model is encouraged to adapt to all classes with causal effects from both new and old data, thus alleviating the causal imbalance problem. We conduct extensive experiments on continual image classification, continual text classification, and continual named entity recognition. Empirical results show that BaCE outperforms a series of CIL methods across different tasks and settings.
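The abstract does not state the two BaCE objectives in closed form. As a rough illustration only, the "crossed" causal paths (new data also influencing old-class predictions, and vice versa) can be sketched as a replay-plus-distillation loss; `balanced_loss`, `lam`, and the teacher logits below are hypothetical names chosen for this sketch, not the paper's actual formulation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, label):
    # Negative log-likelihood of the true label under softmax(logits).
    return -math.log(softmax(logits)[label])

def kl_div(p_logits, q_logits):
    # KL(p || q) between the two softmax distributions.
    p, q = softmax(p_logits), softmax(q_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def balanced_loss(new_logits, new_label, old_logits, old_label,
                  teacher_on_new, teacher_on_old, lam=1.0):
    # Direct paths: new data -> new classes, replayed old data -> old classes.
    ce_new = cross_entropy(new_logits, new_label)
    ce_old = cross_entropy(old_logits, old_label)
    # Crossed paths (sketched via distillation from the previous-task model):
    # the old model's view of new data constrains old-class predictions,
    # and its view of replayed old data anchors them against drift.
    distill_new = kl_div(teacher_on_new, new_logits)
    distill_old = kl_div(teacher_on_old, old_logits)
    return ce_new + ce_old + lam * (distill_new + distill_old)
```

When the student already matches the teacher, both distillation terms vanish and only the two classification terms remain, so the balance between the causal paths is controlled entirely by `lam`.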

Authors (5)
  1. Junhao Zheng
  2. Ruiyan Wang
  3. Chongzhi Zhang
  4. Huawen Feng
  5. Qianli Ma
