Question Suggestion for Conversational Shopping Assistants Using Product Metadata (2405.01738v1)

Published 2 May 2024 in cs.CL

Abstract: Digital assistants have become ubiquitous in e-commerce applications, following the recent advancements in Information Retrieval (IR), NLP, and Generative AI. However, customers are often unsure or unaware of how to effectively converse with these assistants to meet their shopping needs. In this work, we emphasize the importance of providing customers a fast, easy-to-use, and natural way to interact with conversational shopping assistants. We propose a framework that employs LLMs to automatically generate contextual, useful, answerable, fluent, and diverse questions about products, via in-context learning and supervised fine-tuning. Recommending these questions to customers as helpful suggestions or hints to both start and continue a conversation can result in a smoother and faster shopping experience with reduced conversation overhead and friction. We perform extensive offline evaluations, and discuss in detail the potential customer impact, and the type, length, and latency of our generated product questions if incorporated into a real-world shopping assistant.
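The in-context-learning component described in the abstract can be illustrated with a minimal prompt-construction sketch. This is not the paper's actual implementation; the field names, exemplar products, and instruction wording below are hypothetical, chosen only to show how product metadata and few-shot examples might be assembled into a single prompt for an LLM.

```python
# Hypothetical few-shot exemplars: product metadata paired with the kind of
# questions the framework aims to generate (contextual, answerable, diverse).
FEW_SHOT_EXAMPLES = [
    {
        "metadata": "Title: Trail Running Shoes | Material: mesh | Waterproof: yes",
        "questions": [
            "Are these shoes suitable for rocky terrain?",
            "How does the sizing compare to regular running shoes?",
        ],
    },
]

def build_prompt(product_metadata: str, n_questions: int = 3) -> str:
    """Assemble an in-context-learning prompt from product metadata.

    The exemplars above are shown to the model before the target product,
    so it imitates their style when generating new question suggestions.
    """
    parts = [
        "Suggest contextual, useful, answerable, fluent, and diverse "
        f"questions a shopper might ask ({n_questions} per product)."
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Product: {ex['metadata']}")
        parts.extend(f"- {q}" for q in ex["questions"])
    # The target product goes last; the LLM completes the question list.
    parts.append(f"Product: {product_metadata}")
    return "\n".join(parts)

prompt = build_prompt("Title: Espresso Machine | Pressure: 15 bar | Capacity: 1.5 L")
```

In a real system, `prompt` would be sent to the generation model, and the returned questions filtered for the quality criteria (usefulness, answerability, fluency) before being surfaced as conversation hints.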
