Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained Language Models (2310.16570v2)

Published 25 Oct 2023 in cs.CL

Abstract: Pre-trained language models (PLMs) are trained on vast unlabeled data, rich in world knowledge. This fact has sparked the interest of the community in quantifying the amount of factual knowledge present in PLMs, as this explains their performance on downstream tasks, and potentially justifies their use as knowledge bases. In this work, we survey methods and datasets that are used to probe PLMs for factual knowledge. Our contributions are: (1) We propose a categorization scheme for factual probing methods that is based on how their inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of the datasets used for factual probing; (3) We synthesize insights about knowledge retention and prompt optimization in PLMs, analyze obstacles to adopting PLMs as knowledge bases and outline directions for future work.
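
For context, the canonical factual probing setup the survey analyzes is a cloze-style query against a masked language model (as introduced with LAMA by Petroni et al., 2019): a (subject, relation, object) fact is rendered as a prompt with the object masked, and the model's top predictions for the mask are compared to the true object. Below is a minimal sketch of that setup, assuming the HuggingFace transformers library and the public bert-base-cased checkpoint; the relation template and fact are illustrative examples, not taken from the survey's datasets.

```python
# Minimal cloze-style factual probe (LAMA-style).
# Assumes the HuggingFace `transformers` library and the public
# bert-base-cased checkpoint; the prompt/fact below are illustrative only.
from transformers import pipeline

# Fill-mask pipeline returns the model's top candidates for the [MASK] token.
probe = pipeline("fill-mask", model="bert-base-cased")

# (subject, relation, object) fact rendered as a cloze prompt, object masked.
prompt = "The capital of France is [MASK]."
gold_object = "Paris"

predictions = probe(prompt, top_k=5)
for p in predictions:
    print(f"{p['token_str']:>10s}  score={p['score']:.3f}")

# Simple hit@1 check: does the top-ranked prediction match the gold object?
print("hit@1:", predictions[0]["token_str"].strip() == gold_object)
```

The probing methods surveyed vary this basic recipe along the dimensions named in contribution (1): how the input prompt is constructed or optimized, how the output vocabulary is restricted or ranked, and whether the probed PLM itself is adapted (e.g., fine-tuned or extended with adapters).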

Authors (5)
  1. Paul Youssef
  2. Osman Alperen Koraş
  3. Meijie Li
  4. Jörg Schlötterer
  5. Christin Seifert
Citations (15)