
ConfusionPrompt: Practical Private Inference for Online Large Language Models (2401.00870v4)

Published 30 Dec 2023 in cs.CR and cs.AI

Abstract: State-of-the-art LLMs are typically deployed as online services, requiring users to transmit detailed prompts to cloud servers. This raises significant privacy concerns. In response, we introduce ConfusionPrompt, a novel framework for private LLM inference that protects user privacy by: (i) decomposing the original prompt into smaller sub-prompts, and (ii) generating pseudo-prompts alongside the genuine sub-prompts, which are then sent to the LLM. The server responses are later recomposed by the user to reconstruct the final output. This approach offers key advantages over previous LLM privacy protection methods: (i) it integrates seamlessly with existing black-box LLMs, and (ii) it delivers a significantly improved privacy-utility trade-off compared to existing text perturbation methods. We also develop a $(\lambda, \mu, \rho)$-privacy model to formulate the requirements for a privacy-preserving group of prompts and provide a complexity analysis to justify the role of prompt decomposition. Our empirical evaluation shows that ConfusionPrompt achieves significantly higher utility than local inference methods using open-source models and perturbation-based techniques, while also reducing memory consumption compared to open-source LLMs.
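
The abstract describes a purely client-side protocol: decompose the prompt, mix in pseudo-prompts, query the black-box LLM with the shuffled batch, and recompose only the genuine answers locally. The following is a minimal Python sketch of that flow under stated assumptions; the function names (`decompose`, `make_pseudo_prompts`, `confusion_prompt`), the naive sentence-split decomposition, and the fixed decoy templates are all hypothetical stand-ins for illustration, not the authors' actual implementation or API.

```python
# Hypothetical sketch of the ConfusionPrompt flow described in the abstract:
# (i) decompose the prompt into sub-prompts, (ii) generate pseudo-prompts,
# (iii) send the mixed batch to the LLM, (iv) recompose genuine responses.
# All names and heuristics here are illustrative assumptions.
import random
from typing import Callable, List


def decompose(prompt: str) -> List[str]:
    """Split a detailed prompt into smaller, less revealing sub-prompts.
    The paper derives this decomposition; a naive sentence split stands in."""
    parts = [p.strip() for p in prompt.split(". ") if p.strip()]
    return parts or [prompt]


def make_pseudo_prompts(k: int = 2) -> List[str]:
    """Produce k decoy prompts. The paper generates plausible pseudo-prompts;
    fixed templates are a placeholder."""
    templates = [
        "What is the capital of France?",
        "Summarize the plot of Hamlet in one sentence.",
        "How does photosynthesis work?",
    ]
    return random.sample(templates, min(k, len(templates)))


def confusion_prompt(prompt: str, llm_complete: Callable[[str], str]) -> str:
    """Run the decompose / confuse / recompose loop against a black-box LLM."""
    sub_prompts = decompose(prompt)
    batch = sub_prompts + make_pseudo_prompts()
    random.shuffle(batch)  # the server cannot tell genuine from decoy queries
    answers = {p: llm_complete(p) for p in batch}  # server sees the full batch
    # Recompose locally from the genuine sub-prompts' answers only.
    return "\n".join(answers[p] for p in sub_prompts)


if __name__ == "__main__":
    # Stub server for demonstration; a real deployment would call a hosted LLM.
    echo = lambda p: f"[answer to: {p}]"
    print(confusion_prompt("Explain my biopsy result. List questions for my doctor", echo))
```

Note the key property the abstract claims as an advantage: `llm_complete` is an opaque black box, so the scheme needs no model access beyond ordinary completions, and all privacy-relevant logic (decomposition, decoy generation, recomposition) runs on the client.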

https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). 
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. 
Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  2. Liu, X., Liu, Z.: LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly Transformers (2023) Li et al. [2023] Li, Y., Tan, Z., Liu, Y.: Privacy-Preserving Prompt Tuning for Large Language Model Services (2023) Lukas et al. [2023] Lukas, N., Salem, A., Sim, R., Tople, S., Wutschitz, L., Zanella-Béguelin, S.: Analyzing leakage of personally identifiable information in language models. arXiv preprint arXiv:2302.00539 (2023) Duan et al. [2023] Duan, H., Dziedzic, A., Papernot, N., Boenisch, F.: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models (2023) Carlini et al. [2023] Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramer, F., Zhang, C.: Quantifying Memorization Across Neural Language Models (2023) Ippolito et al. [2023] Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. 
[2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Li, Y., Tan, Z., Liu, Y.: Privacy-Preserving Prompt Tuning for Large Language Model Services (2023) Lukas et al. [2023] Lukas, N., Salem, A., Sim, R., Tople, S., Wutschitz, L., Zanella-Béguelin, S.: Analyzing leakage of personally identifiable information in language models. arXiv preprint arXiv:2302.00539 (2023) Duan et al. [2023] Duan, H., Dziedzic, A., Papernot, N., Boenisch, F.: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models (2023) Carlini et al. [2023] Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramer, F., Zhang, C.: Quantifying Memorization Across Neural Language Models (2023) Ippolito et al. [2023] Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. 
[2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Lukas, N., Salem, A., Sim, R., Tople, S., Wutschitz, L., Zanella-Béguelin, S.: Analyzing leakage of personally identifiable information in language models. arXiv preprint arXiv:2302.00539 (2023) Duan et al. [2023] Duan, H., Dziedzic, A., Papernot, N., Boenisch, F.: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models (2023) Carlini et al. [2023] Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramer, F., Zhang, C.: Quantifying Memorization Across Neural Language Models (2023) Ippolito et al. 
[2023] Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. 
[2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Duan, H., Dziedzic, A., Papernot, N., Boenisch, F.: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models (2023) Carlini et al. [2023] Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramer, F., Zhang, C.: Quantifying Memorization Across Neural Language Models (2023) Ippolito et al. [2023] Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramer, F., Zhang, C.: Quantifying Memorization Across Neural Language Models (2023) Ippolito et al. [2023] Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. 
[2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. 
[2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. 
[2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. 
[2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. 
[2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. 
[2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. 
[2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. 
[2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. 
[2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). 
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. 
Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  4. Lukas, N., Salem, A., Sim, R., Tople, S., Wutschitz, L., Zanella-Béguelin, S.: Analyzing leakage of personally identifiable information in language models. arXiv preprint arXiv:2302.00539 (2023)
Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). 
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. 
[2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. 
[2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. 
In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. 
Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. 
McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. 
[2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. 
[2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. 
In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. 
[2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. 
[2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  5. Duan, H., Dziedzic, A., Papernot, N., Boenisch, F.: Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models (2023)
  6. Carlini, N., Ippolito, D., Jagielski, M., Lee, K., Tramèr, F., Zhang, C.: Quantifying Memorization Across Neural Language Models (2023)
[2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. 
[2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. 
[2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. 
[2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). 
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. 
McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? 
(2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. 
[2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. 
[2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  7. Ippolito, D., Tramèr, F., Nasr, M., Zhang, C., Jagielski, M., Lee, K., Choquette-Choo, C.A., Carlini, N.: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy (2023) McCoy et al. [2021] McCoy, R.T., Smolensky, P., Linzen, T., Gao, J., Celikyilmaz, A.: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021) Tirumala et al. [2022] Tirumala, K., Markosyan, A., Zettlemoyer, L., Aghajanyan, A.: Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35, 38274–38290 (2022) Zhang et al. [2021] Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021) Mireshghallah et al. [2022] Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022) Chen et al. [2022] Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. 
[2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). 
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? 
(2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). 
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. 
Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  10. Zhang, C., Ippolito, D., Lee, K., Jagielski, M., Tramèr, F., Carlini, N.: Counterfactual Memorization in Neural Language Models (2021)
  11. Mireshghallah, F., Uniyal, A., Wang, T., Evans, D., Berg-Kirkpatrick, T.: Memorization in NLP Fine-tuning Methods (2022)
(2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. 
[2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. 
Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. 
[2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  12. Chen, Y., Liu, Y., Dong, L., Wang, S., Zhu, C., Zeng, M., Zhang, Y.: AdaPrompt: Adaptive Model Training for Prompt-based NLP (2022) Zhang et al. [2022] Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022) Guo et al. [2023] Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. 
[2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023) Sordoni et al. [2023] Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. 
[2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023) Qiao et al. [2023] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023) Perez et al. [2020] Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? 
(2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020) Huang and Chang [2023] Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023) Zhou et al. [2023] Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. 
[2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023) Khot et al. [2023] Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. 
Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022) Ehrmann et al. [2021] Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. 
[2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021) Akbik et al. [2019] Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art nlp. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. 
https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  13. Zhang, Z., Zhang, A., Li, M., Smola, A.: Automatic Chain of Thought Prompting in Large Language Models (2022)
  14. Guo, J., Li, J., Li, D., Tiong, A.M.H., Li, B., Tao, D., Hoi, S.C.H.: From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models (2023)
  15. Sordoni, A., Yuan, X., Côté, M.-A., Pereira, M., Trischler, A., Xiao, Z., Hosseini, A., Niedtner, F., Roux, N.L.: Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference (2023)
https://api.semanticscholar.org/CorpusID:181704107 Srinivasa-Desikan [2018] Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd, ??? (2018) Brown et al. [2022] Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022) Pilán et al. [2022] Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (tab): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022) Rasiel [1999] Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Rasiel, E.M.: The McKinsey Way. McGraw-Hill New York, ??? (1999) Devlin et al. [2018] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018) Staatsbibliothek [2022] Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022) Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  16. Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., Tan, C., Huang, F., Chen, H.: Reasoning with Language Model Prompting: A Survey (2023)
  17. Perez, E., Lewis, P., Yih, W.-t., Cho, K., Kiela, D.: Unsupervised Question Decomposition for Question Answering (2020)
  18. Huang, J., Chang, K.C.-C.: Towards Reasoning in Large Language Models: A Survey (2023)
  19. Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023)
  20. Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023)
  21. Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022)
  22. Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021)
  23. Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art NLP. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107
  24. Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd (2018)
  25. Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022)
  26. Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (TAB): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022)
  27. Rasiel, E.M.: The McKinsey Way. McGraw-Hill, New York (1999)
  28. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
  29. Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
  19. Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q., Chi, E.: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models (2023)
  20. Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., Sabharwal, A.: Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2023)
  21. Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional Semantic Parsing with Large Language Models (2022)
  22. Ehrmann, M., Hamdi, A., Pontes, E.L., Romanello, M., Doucet, A.: Named entity recognition and classification in historical documents: A survey. ACM Computing Surveys (2021)
  23. Akbik, A., Bergmann, T., Blythe, D.A.J., Rasul, K., Schweter, S., Vollgraf, R.: Flair: An easy-to-use framework for state-of-the-art NLP. In: North American Chapter of the Association for Computational Linguistics (2019). https://api.semanticscholar.org/CorpusID:181704107
  24. Srinivasa-Desikan, B.: Natural Language Processing and Computational Linguistics: A Practical Guide to Text Analysis with Python, Gensim, spaCy, and Keras. Packt Publishing Ltd (2018)
  25. Brown, H., Lee, K., Mireshghallah, F., Shokri, R., Tramèr, F.: What Does it Mean for a Language Model to Preserve Privacy? (2022)
  26. Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., Batet, M.: The text anonymization benchmark (TAB): A dedicated corpus and evaluation framework for text anonymization. Computational Linguistics 48(4), 1053–1101 (2022)
  27. Rasiel, E.M.: The McKinsey Way. McGraw-Hill, New York (1999)
  28. Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
  29. Staatsbibliothek, A.: bert-large-cased-finetuned-conll03-english. https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english (2022)
Authors (5)
  1. Ran Yan (21 papers)
  2. Peihua Mai (6 papers)
  3. Yan Pang (21 papers)
  4. Rui Ye (42 papers)
  5. Youjia Yang (2 papers)
Citations (1)
