
Near to Mid-term Risks and Opportunities of Open-Source Generative AI (2404.17047v2)

Published 25 Apr 2024 in cs.LG

Abstract: In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. This regulation is likely to put at risk the budding field of open-source Generative AI. We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current LLMs. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.

Analysis of Near to Mid-term Risks and Opportunities of Open Source Generative AI

Introduction to the Study

This paper examines the nuanced domain of open sourcing generative AI (GenAI), focusing on its differential impacts over the near to mid-term. It begins by clarifying the stages of AI development, then provides an empirical analysis of the openness of currently available LLMs, and goes on to contrast the risks and opportunities of open versus closed source AI models. Central to the paper is a compelling argument for the responsible open sourcing of GenAI models, supported by strategic recommendations for how to do so.

Development Stages and Openness Taxonomy

The paper outlines three development stages for GenAI, classified as near-term, mid-term, and long-term based on technological adoption and capability rather than a fixed timeline. This categorization is pivotal for understanding the distinct operational, ethical, and societal implications at each stage. A significant portion of the analysis is devoted to assessing current models' openness using an original taxonomy that grades the components of an AI system on how open they are; an illustrative sketch of such a grading appears below. The evaluation finds a mix of open and closed components across the 40 LLMs surveyed, with training data and safety evaluations skewing towards closed.
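To make the idea of component-level openness grading concrete, here is a minimal sketch of how such grades could be recorded and summarized. The component names, grade levels, and the example entry are illustrative assumptions for this sketch only, not the paper's actual rubric or its findings for any specific model.

```python
from dataclasses import dataclass
from enum import Enum


class Openness(Enum):
    """Illustrative grade levels; the paper's actual taxonomy may differ."""
    OPEN = 2        # fully released and documented
    SEMI_OPEN = 1   # gated, partial, or license-restricted release
    CLOSED = 0      # not released


@dataclass
class ModelOpenness:
    """Toy record grading individual components of a GenAI system."""
    name: str
    weights: Openness
    training_code: Openness
    training_data: Openness
    safety_evaluations: Openness

    def summary(self) -> str:
        # Collect the components that are fully closed for a quick overview.
        graded = {
            "weights": self.weights,
            "training code": self.training_code,
            "training data": self.training_data,
            "safety evaluations": self.safety_evaluations,
        }
        closed = [k for k, v in graded.items() if v is Openness.CLOSED]
        return f"{self.name}: closed components -> {closed or 'none'}"


# Hypothetical example entry; the grades are placeholders, not the paper's results.
example = ModelOpenness(
    name="example-llm",
    weights=Openness.OPEN,
    training_code=Openness.SEMI_OPEN,
    training_data=Openness.CLOSED,
    safety_evaluations=Openness.CLOSED,
)
print(example.summary())
```

Applied across many models, records like this make it straightforward to see which components (such as training data or safety evaluations) are most often withheld, which is the kind of skew the paper's evaluation reports.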

Risks and Opportunities of Open Source GenAI

The discourse around open source GenAI is rife with debate over its scalability and implications. The paper highlights numerous benefits, such as enhanced flexibility, customization potential, and increased transparency leading to greater public trust. These come alongside risks such as potential misuse by bad actors and the difficulty of controlling dissemination once models are publicly released. The paper observes that while open source can facilitate innovation and economic inclusivity, it equally necessitates robust mechanisms to mitigate the accompanying safety and societal risks.

Dual-Use and Security Concerns

Because open-source models can proliferate rapidly across diverse applications, they can also be misappropriated to generate unsafe content or repurposed by malevolent actors. The paper emphasizes this dual-use nature as a pivotal concern requiring stringent operational checks and community-led oversight to ensure responsible use.

Economic and Academic Impact

The paper argues that open-source GenAI can democratize access to AI, fostering broader global participation in its development and use. In academia, open-sourcing models catalyzes more rigorous and diverse research by providing broad access to foundational models and datasets.

Recommendations for the Future

Strategic recommendations are presented for fostering a responsible open-source GenAI ecosystem. These include enhancing data transparency, developing robust benchmarks for open evaluation, conducting in-depth security audits, and continually assessing societal impacts. The paper argues that, within an open-source approach, these measures can mitigate risks while maximizing the technology's positive impacts.

Concluding Thoughts

The paper makes a well-reasoned call for the structured open sourcing of GenAI models in the near to mid-term. By delineating both the optimistic and cautious narratives surrounding open-source models, it presents a balanced viewpoint advocating responsible, strategically planned open sourcing.

In conclusion, while the paper offers a critical roadmap for navigating the complex terrain of open source GenAI, it equally calls for sustained empirical and theoretical inquiry to adaptively manage emerging challenges and opportunities in this rapidly evolving field.

Authors (24)
  1. Francisco Eiras
  2. Aleksandar Petrov
  3. Bertie Vidgen
  4. Christian Schroeder de Witt
  5. Fabio Pizzati
  6. Katherine Elkins
  7. Supratik Mukhopadhyay
  8. Adel Bibi
  9. Botos Csaba
  10. Fabro Steibel
  11. Fazl Barez
  12. Genevieve Smith
  13. Gianluca Guadagni
  14. Jon Chun
  15. Jordi Cabot
  16. Joseph Marvin Imperial
  17. Juan A. Nolazco-Flores
  18. Lori Landay
  19. Matthew Jackson
  20. Paul Röttger