
Question Suggestion for Conversational Shopping Assistants Using Product Metadata

Published 2 May 2024 in cs.CL (arXiv:2405.01738v1)

Abstract: Digital assistants have become ubiquitous in e-commerce applications, following the recent advancements in Information Retrieval (IR), NLP and Generative AI. However, customers are often unsure or unaware of how to effectively converse with these assistants to meet their shopping needs. In this work, we emphasize the importance of providing customers a fast, easy-to-use, and natural way to interact with conversational shopping assistants. We propose a framework that employs LLMs to automatically generate contextual, useful, answerable, fluent and diverse questions about products, via in-context learning and supervised fine-tuning. Recommending these questions to customers as helpful suggestions or hints to both start and continue a conversation can result in a smoother and faster shopping experience with reduced conversation overhead and friction. We perform extensive offline evaluations, and discuss in detail the potential customer impact, and the type, length and latency of our generated product questions if incorporated into a real-world shopping assistant.


Summary

  • The paper introduces a framework that combines in-context learning and supervised fine-tuning to automatically generate product-related questions.
  • By using product metadata and buyer reviews, the method enhances question relevance, fluency, and diversity for smoother customer interactions.
  • Evaluation with GPT-4 and human assessment indicates high question quality, and the paper discusses how efficient, real-time question generation could improve the customer experience.

Introduction

The paper "Question Suggestion for Conversational Shopping Assistants Using Product Metadata" introduces a framework for enhancing the conversational capabilities of shopping assistants. By leveraging advancements in Information Retrieval (IR), NLP, and LLMs, the paper proposes using product metadata to generate useful, answerable, and diverse questions to guide customer interactions. This approach addresses shoppers' uncertainty and enhances the efficiency of shopping assistants by reducing friction in the conversational flow.

Framework and Methodology

The proposed system leverages LLMs to automatically generate product-related questions through two primary techniques: In-Context Learning (ICL) and Supervised Fine-Tuning (SFT). The goal is to produce questions grounded in the product's metadata and buyer reviews that are relevant, useful, answerable, fluent, styled appropriately for customer inquiries, and diverse.

  • In-Context Learning: This method designs a prompt template that leverages the LLM’s pre-existing knowledge to generate questions meeting the predetermined criteria, without any parameter updates; demonstrations can optionally be included in the prompt (few-shot) or omitted entirely (zero-shot).
  • Supervised Fine-Tuning: This involves fine-tuning an LLM on a curated dataset of product contexts and corresponding questions to enhance generation quality. The dataset is created with minimal human intervention by initially employing the ICL approach and refining the outputs through manual inspection.
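The ICL approach above can be sketched as a prompt builder that assembles product metadata and buyer reviews into a few-shot template. This is a minimal illustration, not the paper's actual template; the field names, instruction wording, and few-shot example are all assumptions.

```python
# Illustrative few-shot example; a real system would curate these.
FEW_SHOT_EXAMPLES = [
    {
        "context": "Title: Trailblazer Hiking Boots. Material: leather. "
                   "Review: 'Kept my feet dry on a rainy trek.'",
        "questions": [
            "Are these boots waterproof?",
            "What material is the upper made of?",
        ],
    },
]

def build_icl_prompt(metadata: dict, reviews: list[str], n_questions: int = 3) -> str:
    """Assemble a few-shot prompt asking an LLM for customer-style
    questions grounded in product metadata and buyer reviews."""
    context = ". ".join(f"{k}: {v}" for k, v in metadata.items())
    review_text = " ".join(f"Review: '{r}'" for r in reviews)
    shots = ""
    for ex in FEW_SHOT_EXAMPLES:
        shots += f"Product: {ex['context']}\nQuestions:\n"
        shots += "".join(f"- {q}\n" for q in ex["questions"])
    return (
        f"Generate {n_questions} relevant, useful, answerable, fluent and "
        f"diverse questions a customer might ask about the product below. "
        f"Only ask questions the given context can answer.\n\n"
        f"{shots}\n"
        f"Product: {context}. {review_text}\nQuestions:\n"
    )

prompt = build_icl_prompt(
    {"Title": "AeroBrew Coffee Maker", "Capacity": "12 cups"},
    ["Brews fast but the carafe drips a little."],
)
```

Dropping `FEW_SHOT_EXAMPLES` from the template yields the zero-shot variant; the same prompt-plus-output pairs, after manual inspection, could also serve as SFT training data.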

Evaluation

The generated questions are assessed both automatically using GPT-4 and through human evaluation. Key metrics include relevance, usefulness, answerability, fluency, and stylistic appropriateness. Both ICL and SFT approaches demonstrated strong performance across these metrics, with particular strengths in relevance and fluency. Differences in performance between the zero-shot, few-shot ICL, and SFT variants were noted, with the few-shot ICL sometimes outperforming SFT in producing stylistically appropriate questions.
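Whether the judge is GPT-4 or a human annotator, the per-question ratings need to be rolled up into per-metric scores for comparison across the zero-shot, few-shot, and SFT variants. The sketch below shows one simple aggregation; the metric names follow the paper, while the 1-5 rating scale and the sample data are assumptions.

```python
from collections import defaultdict

# Metric names from the paper's evaluation; the 1-5 scale is an assumption.
METRICS = ["relevance", "usefulness", "answerability", "fluency", "style"]

def aggregate_ratings(ratings: list[dict]) -> dict:
    """Average judge ratings per metric across all rated questions."""
    totals, counts = defaultdict(float), defaultdict(int)
    for r in ratings:
        for m in METRICS:
            if m in r:
                totals[m] += r[m]
                counts[m] += 1
    return {m: round(totals[m] / counts[m], 2) for m in METRICS if counts[m]}

scores = aggregate_ratings([
    {"relevance": 5, "usefulness": 4, "answerability": 5, "fluency": 5, "style": 4},
    {"relevance": 4, "usefulness": 4, "answerability": 4, "fluency": 5, "style": 5},
])
```

Running the same aggregation over each variant's ratings makes comparisons such as "few-shot ICL outperforms SFT on style" concrete.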

Practical Implications

Incorporating such a question suggestion framework into real-world shopping assistants has potential benefits, such as:

  • Improved Customer Experience: By providing pre-formulated questions, customers can navigate product information more efficiently, potentially increasing engagement and satisfaction.
  • Efficient Conversations: With both broad and specific questions integrated into the shopping experience, customers face fewer obstacles in finding relevant information and need not type extensive queries.
  • Reduced Latency: Techniques like caching and streaming can be employed to handle real-time question generation without significant delays.
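One way to realize the caching strategy mentioned above is to memoize generated questions per product, so the expensive LLM call runs only on a cache miss. This is a sketch of that idea, not the paper's serving architecture; `generate_with_llm` is a hypothetical stand-in for a real model call.

```python
import functools

def generate_with_llm(product_id: str) -> tuple[str, ...]:
    # Hypothetical stand-in: a real system would prompt an LLM with the
    # product's metadata and reviews here.
    return (f"What are the key features of product {product_id}?",)

@functools.lru_cache(maxsize=10_000)
def suggested_questions(product_id: str) -> tuple[str, ...]:
    """Return question suggestions for a product; the underlying LLM
    call runs only once per product, on the first request."""
    return generate_with_llm(product_id)

first = suggested_questions("B000123")   # triggers generation
second = suggested_questions("B000123")  # served from the cache
```

In production this per-process cache would typically be replaced by a shared store (or offline pre-generation for popular products), with streaming used to mask latency on true cache misses.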

Conclusion

The research demonstrates a feasible method for enhancing conversational interactions with shopping assistants. By generating diverse and contextually grounded questions, this framework contributes to more intuitive and effective e-commerce experiences. Future directions involve integrating conversational history for personalized question generation and leveraging real-time feedback loops to refine the question generation process.

Further exploration of instruction fine-tuning and customer behavior analytics promises additional refinements, driving more adaptive and customer-friendly shopping assistants.
