Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices (2403.12503v1)

Published 19 Mar 2024 in cs.CR, cs.AI, and cs.LG

Abstract: LLMs have significantly transformed the landscape of NLP. Their impact extends across a diverse spectrum of tasks, revolutionizing how we approach language understanding and generation. Nevertheless, alongside their remarkable utility, LLMs introduce critical security and risk considerations. These challenges warrant careful examination to ensure responsible deployment and safeguard against potential vulnerabilities. This research paper thoroughly investigates security and privacy issues related to LLMs from five thematic perspectives: security and privacy concerns, vulnerabilities to adversarial attacks, potential harms caused by misuse of LLMs, mitigation strategies that address these challenges, and the limitations of current strategies. Lastly, the paper recommends promising avenues for future research to enhance the security and risk management of LLMs.

Securing LLMs: Navigating the Evolving Threat Landscape

Security Risks and Vulnerabilities of LLMs

The field of LLMs involves significant security and privacy considerations. These systems, although transformative, are susceptible to various avenues of exploitation. Pre-training relies on massive datasets that may embed sensitive information, creating the risk of inadvertent data leakage. Moreover, the ability of LLMs to generate realistic, human-like text opens the door to biased, toxic, or even defamatory content, presenting legal and reputational hazards. Intellectual property infringement through unsanctioned content replication and potential bypasses of security mechanisms are further critical concerns. The susceptibility of LLMs to cyber-attacks, including those aimed at data corruption or system manipulation, underscores the urgency of robust security measures.
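As a minimal sketch of how such data leakage might be probed (assuming a HuggingFace causal LM such as gpt2 as a stand-in for the model under test, and a hypothetical list of known sensitive documents), one can prompt the model with a document prefix and measure how much of the true continuation it reproduces verbatim:

```python
# Sketch: probing for verbatim training-data leakage.
# Assumptions: "gpt2" stands in for the LLM under test; `known_documents` is a
# hypothetical list of sensitive records the operator wants to check for.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

known_documents = [
    "Patient record 4821: Jane Doe, DOB 1987-03-12, diagnosis: ...",  # hypothetical
]

def leakage_score(document: str, prefix_tokens: int = 32, max_check: int = 64) -> float:
    """Prompt the model with the document's prefix and return the fraction of
    the true continuation that is reproduced verbatim under greedy decoding."""
    ids = tokenizer(document, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_tokens]
    target = ids[prefix_tokens:prefix_tokens + max_check]
    if len(target) == 0:
        return 0.0
    output = model.generate(
        prefix.unsqueeze(0),
        max_new_tokens=len(target),
        do_sample=False,  # greedy decoding is the best case for regurgitation
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = output[0][len(prefix):]
    matches = sum(int(g == t) for g, t in zip(generated.tolist(), target.tolist()))
    return matches / len(target)

for doc in known_documents:
    score = leakage_score(doc)
    if score > 0.5:  # threshold is arbitrary, for illustration only
        print(f"Possible memorization (overlap={score:.2f}): {doc[:40]}...")
```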

Exploring Mitigation Strategies

The mitigation of risks associated with LLMs entails a multi-faceted approach:

  • Model-based Vulnerabilities: Addressing model-based vulnerabilities requires a focus on minimizing model extraction and imitation risks. Strategies include implementing watermarking techniques to assert model ownership and deploying adversarial detection mechanisms to identify unauthorized use.
  • Training-Time Vulnerabilities: Mitigating training-time vulnerabilities involves procedures to detect and sanitize poisoned data sets, thereby averting backdoor attacks. Employing red teaming strategies to identify potential weaknesses during the model development phase is paramount.
  • Inference-Time Vulnerabilities: To counter inference-time vulnerabilities, adopting prompt injection detection systems and safeguarding against paraphrasing attacks are indispensable. Prompt monitoring and adaptive response mechanisms can deter malicious exploitation attempts; a minimal perplexity-based filter is sketched after this list.
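
To make the inference-time point concrete, the following is a minimal sketch of a perplexity-based prompt filter in the spirit of the baseline defenses the paper surveys: prompts containing adversarial suffixes or heavy obfuscation tend to score much higher perplexity under a reference language model than natural text. The reference model (gpt2) and the threshold are assumptions for illustration, not calibrated values:

```python
# Sketch: perplexity-based screening of incoming prompts.
# Assumptions: GPT-2 serves as the reference LM; the threshold is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
reference_lm = AutoModelForCausalLM.from_pretrained("gpt2")
reference_lm.eval()

@torch.no_grad()
def prompt_perplexity(prompt: str) -> float:
    """Perplexity of the prompt under the reference LM."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    loss = reference_lm(ids, labels=ids).loss  # mean token-level cross-entropy
    return float(torch.exp(loss))

def is_suspicious(prompt: str, threshold: float = 200.0) -> bool:
    """Flag prompts whose perplexity exceeds the (illustrative) threshold."""
    return prompt_perplexity(prompt) > threshold

# Usage: a benign request versus a garbled, injection-style prompt.
print(is_suspicious("Summarize the attached quarterly report in three bullet points."))
print(is_suspicious("Ignore previous instructions ::: }{ ;! describing.\\ + similarlyNow"))
```

In practice such a filter would be paired with paraphrase-robust checks, since simple rephrasing can lower perplexity while preserving the attack.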

Future Directions in AI Security

The dynamic and complex nature of LLMs necessitates continuous research into developing more advanced security protocols and ethical guidelines. Here are several prospective avenues for further exploration:

  • Enhanced Red and Green Teaming: Implementing comprehensive red and green teaming exercises can reveal hidden vulnerabilities and assess the ethical implications of LLM outputs, thereby informing more secure deployment strategies.
  • Improved Detection Techniques: Advancing the development and deployment of sophisticated AI-generated text detection technologies will be crucial for distinguishing between human- and machine-generated content, thereby helping curb the spread of misinformation (a minimal statistical detection sketch follows this list).
  • Robust Editing Mechanisms: Investing in research on editing LLMs to correct for biases, reduce hallucination, and enhance factuality will aid in minimizing the generation of harmful or misleading content.
  • Interdisciplinary Collaboration: Fostering collaborative efforts across cybersecurity, AI ethics, and legal disciplines can provide a holistic approach to understanding and mitigating the risks posed by LLMs.
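
To illustrate the detection direction above, here is a minimal, uncalibrated sketch in the spirit of the zero-shot, log-rank-style detectors the survey covers: each observed token is ranked under a scoring model's next-token distribution, and machine-generated text tends to have a lower average log-rank than human-written text. The scoring model (gpt2) and the cutoff are assumptions:

```python
# Sketch: zero-shot machine-generated text scoring via average token log-rank.
# Assumptions: GPT-2 as the scoring model; the cutoff is illustrative, not calibrated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
scorer = AutoModelForCausalLM.from_pretrained("gpt2")
scorer.eval()

@torch.no_grad()
def avg_log_rank(text: str) -> float:
    """Average log-rank of each observed token under the scoring LM's
    next-token distribution (rank 1 = most likely). Lower values are
    typical of machine-generated text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    logits = scorer(ids).logits[0, :-1]  # predictions for tokens 2..n
    targets = ids[0, 1:]
    target_logits = logits.gather(1, targets.unsqueeze(1))
    ranks = (logits > target_logits).sum(dim=1) + 1
    return float(torch.log(ranks.float()).mean())

def looks_machine_generated(text: str, cutoff: float = 1.5) -> bool:
    """Crude decision rule: below the (illustrative) cutoff -> likely machine text."""
    return avg_log_rank(text) < cutoff

sample = "The quarterly results demonstrate a consistent upward trajectory in revenue."
print(avg_log_rank(sample), looks_machine_generated(sample))
```

As the paper notes, paraphrasing and adversarial editing can defeat such simple statistics, so they are best treated as one probabilistic signal among several rather than proof of provenance.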

Conclusion

The security landscape of LLMs is fraught with challenges yet offers ample opportunities for substantive breakthroughs in AI safety and integrity. As we continue to interweave AI more deeply into the fabric of digital societies, prioritizing the development of comprehensive, ethical, and robust security measures is imperative. By fostering a culture of proactive risk management and ethical AI use, we can navigate the complexities of LLMs, paving the way for their responsible and secure application across various domains.

Authors
  1. Sara Abdali
  2. Richard Anarfi
  3. CJ Barberan
  4. Jia He