What is it for a Machine Learning Model to Have a Capability? (2405.08989v1)

Published 14 May 2024 in cs.AI, cs.CL, and cs.LG

Abstract: What can contemporary ML models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? And what sorts of evidence bear upon this question? In this paper, we aim to answer these questions, using the capabilities of LLMs as a running example. Drawing on the large philosophical literature on abilities, we develop an account of ML models' capabilities which can be usefully applied to the nascent science of model evaluation. Our core proposal is a conditional analysis of model abilities (CAMA): crudely, a machine learning model has a capability to X just when it would reliably succeed at doing X if it 'tried'. The main contribution of the paper is making this proposal precise in the context of ML, resulting in an operationalisation of CAMA applicable to LLMs. We then put CAMA to work, showing that it can help make sense of various features of ML model evaluation practice, as well as suggest procedures for performing fair inter-model comparisons.
