Chatbot is Not All You Need: Information-rich Prompting for More Realistic Responses (2312.16233v1)
Abstract: Recent LLMs have shown remarkable capabilities in mimicking fictional characters or real humans in conversational settings. However, the realism and consistency of their responses can be further enhanced by providing richer information about the agent being mimicked. In this paper, we propose a novel approach to generating more realistic and consistent responses from LLMs, leveraging five kinds of information: the five senses, attributes, emotional states, relationship with the interlocutor, and memories. By incorporating these factors, we aim to increase the LLM's capacity for generating natural and realistic reactions in conversational exchanges. Through our research, we expect to contribute to the development of LLMs that demonstrate improved capabilities in mimicking fictional characters. We release a new benchmark dataset and all our code, prompts, and sample results on our GitHub: https://github.com/srafsasm/InfoRichBot
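The approach described above can be sketched as a prompt-assembly step that concatenates the five information sources into a single system prompt. This is a minimal illustrative sketch only; the field names, section ordering, and function name are assumptions, not the authors' actual prompt format (which is available in their GitHub repository).

```python
# Hypothetical sketch: assembling an "information-rich" character prompt
# from the five factors named in the abstract: the five senses, attributes,
# emotional state, relationship with the interlocutor, and memories.

def build_prompt(character, senses, attributes, emotion,
                 relationship, memories, user_utterance):
    """Combine the five information sources into one prompt string."""
    sections = [
        f"You are role-playing as {character}.",
        "Current sensory perception: " + "; ".join(senses),
        "Character attributes: " + "; ".join(attributes),
        f"Current emotional state: {emotion}",
        f"Relationship with the interlocutor: {relationship}",
        "Relevant memories: " + "; ".join(memories),
        f"The interlocutor says: {user_utterance}",
        "Respond in character, consistently with all of the above.",
    ]
    return "\n".join(sections)

prompt = build_prompt(
    character="Sherlock Holmes",
    senses=["hears rain against the window", "smells pipe smoke"],
    attributes=["observant", "analytical", "consulting detective"],
    emotion="restless between cases",
    relationship="trusted companion",
    memories=["solved the Baskerville case last autumn"],
    user_utterance="Anything interesting in the paper today?",
)
print(prompt)
```

In practice the resulting string would be passed as the system or context message to the LLM, with each factor updated between turns (e.g. the emotional state and memories) to keep responses consistent across the conversation.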