Benchmarking and Improving Compositional Generalization of Multi-aspect Controllable Text Generation (2404.04232v2)

Published 5 Apr 2024 in cs.CL

Abstract: Compositional generalization, the model's ability to generate text with new attribute combinations obtained by recombining single attributes from the training data, is a crucial property for multi-aspect controllable text generation (MCTG) methods. Nonetheless, a comprehensive benchmark for evaluating the compositional generalization of MCTG is still lacking. We propose CompMCTG, a benchmark encompassing diverse multi-aspect labeled datasets and a crafted three-dimensional evaluation protocol, to holistically evaluate the compositional generalization of MCTG approaches. We observe that existing MCTG methods generally exhibit a noticeable performance drop in compositional testing. To mitigate this issue, we introduce Meta-MCTG, a training framework incorporating meta-learning, in which models learn how to generalize by simulating compositional generalization scenarios during training. We demonstrate the effectiveness of Meta-MCTG, which achieves clear improvements in compositional testing performance (by up to 3.64%) in 94.4% of cases.
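The abstract describes Meta-MCTG as simulating compositional generalization during training. The paper's exact procedure is not reproduced here; the sketch below illustrates one plausible ingredient of such a scheme under stated assumptions: for each training batch, build a "pseudo-compositional" set of attribute combinations that recombine single attributes seen in the batch but whose full combination is absent from it. All function and variable names are hypothetical.

```python
# Hypothetical sketch of episode construction for a Meta-MCTG-style loop.
# Assumption: each example carries one attribute per aspect, represented
# as a tuple such as ("positive", "food").
from itertools import product


def pseudo_comp_combos(batch_combos, train_combos):
    """Return attribute combinations that recombine single attributes
    present in the batch, but whose full combination does not appear in
    the batch itself and is still available in the training data."""
    n_aspects = len(batch_combos[0])
    # Attributes observed in this batch, grouped per aspect.
    seen_per_aspect = [{c[i] for c in batch_combos} for i in range(n_aspects)]
    # All recombinations of those attributes, minus combos already in the batch.
    recombined = set(product(*seen_per_aspect)) - set(batch_combos)
    # Keep only combinations for which labeled training examples exist.
    return recombined & set(train_combos)


# In a MAML-style outer loop (not shown), one would then:
#   1. take a gradient step on the batch loss:       theta' = theta - alpha * grad(L_batch)
#   2. evaluate the pseudo-compositional loss at theta' and add it, weighted,
#      to the training objective, so the model is rewarded for generalizing
#      to recombined attribute sets it has not seen jointly in the batch.
```

For example, with training combinations over sentiment and topic, a batch containing ("pos", "food") and ("neg", "sport") yields the pseudo-compositional set {("pos", "sport"), ("neg", "food")}.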
