
NeSy is alive and well: A LLM-driven symbolic approach for better code comment data generation and classification (2402.16910v2)

Published 25 Feb 2024 in cs.SE and cs.AI

Abstract: We present a neuro-symbolic (NeSy) workflow combining a symbolic-based learning technique with an LLM agent to generate synthetic data for code comment classification in the C programming language. We also show how generating controlled synthetic data using this workflow fixes some of the notable weaknesses of LLM-based generation and increases the performance of classical machine learning models on the code comment classification task. Our best model, a Neural Network, achieves a Macro-F1 score of 91.412% with an increase of 1.033% after data augmentation.
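The headline metric here is Macro-F1, which averages per-class F1 scores with equal weight, so rare comment categories count as much as common ones. That is exactly where class-imbalance fixes like controlled synthetic augmentation tend to show up. A minimal sketch of the computation (toy labels for illustration, not the paper's dataset):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        # One-vs-rest counts for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    # Unweighted mean: every class contributes equally, regardless of support.
    return sum(f1_scores) / len(f1_scores)

# Hypothetical two-class example (e.g. 0 = "descriptive" comment, 1 = "todo"):
print(macro_f1([0, 0, 1, 1], [0, 1, 1, 1]))  # → 0.7333...
```

Because the mean is unweighted, even a small gain on a minority class moves the macro score, which is why an augmentation-driven improvement of about one point on this metric is meaningful.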

Journal of artificial intelligence research 16, 321–357 (2002) Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. 
[2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. 
[2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. 
arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. 
[2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. 
[2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. 
arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. 
arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002)
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Van, H.: Mitigating data scarcity for large language models. arXiv preprint arXiv:2302.01806 (2023) Majumdar et al. [2023] Majumdar, S., Paul, S., Paul, D., Bandyopadhyay, A., Chattopadhyay, S., Das, P.P., Clough, P.D., Majumder, P.: Generative ai for software metadata: Overview of the information retrieval in software engineering track at fire 2023. arXiv preprint arXiv:2311.03374 (2023) Abi Akl [2023] Abi Akl, H.: A ml-llm pairing for better code comment classification. In: FIRE (Forum for Information Retrieval Evaluation) 2023 (2023) d’Avila Garcez and Lamb [2020] Garcez, A., Lamb, L.C.: Neurosymbolic ai: the 3rd wave. arXiv e-prints, 2012 (2020) Núñez-Molina et al. [2023] Núñez-Molina, C., Mesejo, P., Fernández-Olivares, J.: Nesig: A neuro-symbolic method for learning to generate planning problems. arXiv preprint arXiv:2301.10280 (2023) Karth et al. [2021] Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 
1–6 (2021) Prasad et al. [2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. 
arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Majumdar, S., Paul, S., Paul, D., Bandyopadhyay, A., Chattopadhyay, S., Das, P.P., Clough, P.D., Majumder, P.: Generative ai for software metadata: Overview of the information retrieval in software engineering track at fire 2023. arXiv preprint arXiv:2311.03374 (2023) Abi Akl [2023] Abi Akl, H.: A ml-llm pairing for better code comment classification. In: FIRE (Forum for Information Retrieval Evaluation) 2023 (2023) d’Avila Garcez and Lamb [2020] Garcez, A., Lamb, L.C.: Neurosymbolic ai: the 3rd wave. arXiv e-prints, 2012 (2020) Núñez-Molina et al. [2023] Núñez-Molina, C., Mesejo, P., Fernández-Olivares, J.: Nesig: A neuro-symbolic method for learning to generate planning problems. arXiv preprint arXiv:2301.10280 (2023) Karth et al. [2021] Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. 
[2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Abi Akl, H.: A ml-llm pairing for better code comment classification. In: FIRE (Forum for Information Retrieval Evaluation) 2023 (2023) d’Avila Garcez and Lamb [2020] Garcez, A., Lamb, L.C.: Neurosymbolic ai: the 3rd wave. arXiv e-prints, 2012 (2020) Núñez-Molina et al. [2023] Núñez-Molina, C., Mesejo, P., Fernández-Olivares, J.: Nesig: A neuro-symbolic method for learning to generate planning problems. arXiv preprint arXiv:2301.10280 (2023) Karth et al. [2021] Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. [2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. 
arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Garcez, A., Lamb, L.C.: Neurosymbolic ai: the 3rd wave. arXiv e-prints, 2012 (2020) Núñez-Molina et al. [2023] Núñez-Molina, C., Mesejo, P., Fernández-Olivares, J.: Nesig: A neuro-symbolic method for learning to generate planning problems. arXiv preprint arXiv:2301.10280 (2023) Karth et al. [2021] Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. [2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. 
[2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
[2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Núñez-Molina, C., Mesejo, P., Fernández-Olivares, J.: Nesig: A neuro-symbolic method for learning to generate planning problems. arXiv preprint arXiv:2301.10280 (2023) Karth et al. [2021] Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. [2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. 
[2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. 
[2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
[2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023)
Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling LLMs' decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024)
Lyre [2024] Lyre, H.: "Understanding AI": Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024)
Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014)
Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on EHR data. arXiv preprint arXiv:2212.06040 (2022)
Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023)
Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022)
Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022)
Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: ZeroTOP: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022)
Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: SeqZero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022)
Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: A review. arXiv preprint arXiv:2302.04062 (2023)
Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024)
Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023)
Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics (2015)
Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School (2014)
Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321–357 (2002)
arXiv preprint arXiv:2301.10280 (2023) Karth et al. [2021] Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. [2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. 
[2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Karth, I., Aytemiz, B., Mawhorter, R., Smith, A.M.: Neurosymbolic map generation with vq-vae and wfc. In: Proceedings of the 16th International Conference on the Foundations of Digital Games, pp. 1–6 (2021) Prasad et al. [2023] Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. 
arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. 
arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. 
arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. 
arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. 
[2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. 
[2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. 
Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023)
Tarasov, D., Shridhar, K.: Distilling LLMs' decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024)
Lyre, H.: "Understanding AI": Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024)
Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014)
Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on EHR data. arXiv preprint arXiv:2212.06040 (2022)
Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023)
Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022)
Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022)
Mekala, D., Wolfe, J., Roy, S.: ZeroTOP: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022)
Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: SeqZero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022)
Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023)
Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024)
Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023)
Riemer, N.: The Routledge Handbook of Semantics (2015)
Klemens, B.: 21st Century C: C Tips from the New School (2014)
Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321–357 (2002)
[2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. 
arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. 
arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002)
[2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. 
[2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
[2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. 
[2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. 
arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
[2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002)
arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. 
[2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. 
[2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. 
[2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. 
[2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
[2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. 
[2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. 
arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Prasad, A., Koller, A., Hartmann, M., Clark, P., Sabharwal, A., Bansal, M., Khot, T.: Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772 (2023) Hou et al. [2023] Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. 
arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. 
arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. 
arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. 
[2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. 
[2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. 
arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
[2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. 
arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. 
[2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. 
[2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. 
arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. 
arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. 
[2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. 
arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002)
  11. Hou, B., Liu, Y., Qian, K., Andreas, J., Chang, S., Zhang, Y.: Decomposing uncertainty for large language models through input clarification ensembling. arXiv preprint arXiv:2311.08718 (2023) Tarasov and Shridhar [2024] Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. 
[2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Tarasov, D., Shridhar, K.: Distilling llms’ decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024) Lyre [2024] Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. 
[2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lyre, H.: ” understanding ai”: Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024) Turney [2014] Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. [2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. 
[2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014) Bloore et al. [2022] Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on ehr data. arXiv preprint arXiv:2212.06040 (2022) Jhamtani et al. 
[2023] Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023) Drozdov et al. [2022] Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022) Patel et al. [2022] Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002)
  12. Tarasov, D., Shridhar, K.: Distilling LLMs' decomposition abilities into compact language models. arXiv preprint arXiv:2402.01812 (2024)
  13. Lyre, H.: "Understanding AI": Semantic grounding in large language models. arXiv preprint arXiv:2402.10992 (2024)
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022) Mekala et al. [2022] Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. 
Journal of artificial intelligence research 16, 321–357 (2002) Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022) Yang et al. [2022] Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022) Lu et al. [2023] Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. 
[2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023) Bauer et al. [2024] Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024) Li et al. [2023] Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. 
[2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023) Riemer [2015] Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Riemer, N.: The Routledge Handbook of Semantics, (2015) Klemens [2014] Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002)
  14. Turney, P.D.: Semantic composition and decomposition: From recognition to generation. arXiv preprint arXiv:1405.7908 (2014)
  15. Bloore, D.A., Gauriau, R., Decker, A.L., Oppenheim, J.: Semantic decomposition improves learning of large language models on EHR data. arXiv preprint arXiv:2212.06040 (2022)
[2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Klemens, B.: 21st Century C: C Tips from the New School, (2014) Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002) Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357 (2002)
  16. Jhamtani, H., Fang, H., Xia, P., Levy, E., Andreas, J., Van Durme, B.: Natural language decomposition and interpretation of complex utterances. arXiv preprint arXiv:2305.08677 (2023)
  17. Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., Zhou, D.: Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003 (2022)
  18. Patel, P., Mishra, S., Parmar, M., Baral, C.: Is a question decomposition unit all we need? arXiv preprint arXiv:2205.12538 (2022)
  19. Mekala, D., Wolfe, J., Roy, S.: Zerotop: Zero-shot task-oriented semantic parsing using large language models. arXiv preprint arXiv:2212.10815 (2022)
  20. Yang, J., Jiang, H., Yin, Q., Zhang, D., Yin, B., Yang, D.: Seqzero: Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv preprint arXiv:2205.07381 (2022)
  21. Lu, Y., Shen, M., Wang, H., Wang, X., Rechem, C., Wei, W.: Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062 (2023)
  22. Bauer, A., Trapp, S., Stenger, M., Leppich, R., Kounev, S., Leznik, M., Chard, K., Foster, I.: Comprehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 (2024)
  23. Li, Z., Zhu, H., Lu, Z., Yin, M.: Synthetic data generation with large language models for text classification: Potential and limitations. arXiv preprint arXiv:2310.07849 (2023)
  24. Riemer, N.: The Routledge Handbook of Semantics. Routledge (2015)
  25. Klemens, B.: 21st Century C: C Tips from the New School. O'Reilly Media (2014)
  26. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321–357 (2002)
