Can AI Write Classical Chinese Poetry like Humans? An Empirical Study Inspired by Turing Test (2401.04952v1)

Published 10 Jan 2024 in cs.CL

Abstract: Some argue that the essence of humanity, such as creativity and sentiment, can never be mimicked by machines. This paper casts doubt on this belief by studying a vital question: Can AI compose poetry as well as humans? To answer the question, we propose ProFTAP, a novel evaluation framework inspired by the Turing test to assess AI's poetry-writing capability. We apply it to current LLMs and find that recent LLMs do indeed possess the ability to write classical Chinese poems nearly indistinguishable from those of humans. We also reveal that various open-source LLMs can outperform GPT-4 on this task.
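
The abstract does not spell out ProFTAP's mechanics, but a Turing-test-inspired evaluation typically asks judges to decide whether each poem was written by a human or a machine and then compares how often each author type is judged "human". The snippet below is a minimal, hypothetical sketch of that scoring step; the function name, data format, and pass-rate metric are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch of a Turing-test-style scoring step (not ProFTAP's actual code).
# Judges label each poem as "human" or "ai"; an AI "passes" when its poems are
# judged human at a rate comparable to genuinely human-written poems.
from collections import defaultdict

def pass_rates(records):
    """records: iterable of (author, judged_as) pairs, e.g. ("ai", "human")."""
    counts = defaultdict(lambda: [0, 0])  # author -> [times judged human, total poems]
    for author, judged_as in records:
        counts[author][1] += 1
        if judged_as == "human":
            counts[author][0] += 1
    return {author: judged_human / total
            for author, (judged_human, total) in counts.items()}

# Toy data: the AI's poems are judged human 9 of 12 times, the human poems 10 of 12.
toy = ([("ai", "human")] * 9 + [("ai", "ai")] * 3 +
       [("human", "human")] * 10 + [("human", "ai")] * 2)
print(pass_rates(toy))  # {'ai': 0.75, 'human': 0.833...}
```

Under this reading, an AI model would be considered near-indistinguishable from humans when its pass rate is not significantly lower than the human pass rate.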
