RAG based Question-Answering for Contextual Response Prediction System (2409.03708v2)

Published 5 Sep 2024 in cs.CL and cs.IR

Abstract: LLMs have shown versatility in various NLP tasks, including their potential as effective question-answering systems. However, to provide precise and relevant information in response to specific customer queries in industry settings, LLMs require access to a comprehensive knowledge base to avoid hallucinations. Retrieval Augmented Generation (RAG) emerges as a promising technique to address this challenge. Yet, developing an accurate question-answering framework for real-world applications using RAG entails several challenges: 1) data availability, 2) evaluating the quality of generated content, and 3) the cost of human evaluation. In this paper, we introduce an end-to-end framework that employs LLMs with RAG capabilities for industry use cases. Given a customer query, the proposed system retrieves relevant knowledge documents and leverages them, along with previous chat history, to generate response suggestions for customer service agents in the contact centers of a major retail company. Through comprehensive automated and human evaluations, we show that this solution outperforms the current BERT-based algorithms in accuracy and relevance. Our findings suggest that RAG-based LLMs can be an excellent support to human customer service representatives by lightening their workload.
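The retrieve-then-generate flow the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the knowledge snippets are invented, the word-overlap retriever is a toy stand-in for a real vector or BM25 retriever, and `build_prompt` is a hypothetical helper showing how retrieved documents and chat history might be combined into a grounded prompt for the LLM.

```python
import re

# Illustrative knowledge base (invented snippets, not from the paper's retail data).
KNOWLEDGE_BASE = [
    "You can return items within 30 days of purchase with a receipt.",
    "Standard shipping takes 5 to 7 business days.",
    "Gift cards cannot be redeemed for cash.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query.

    Toy stand-in for the dense/sparse retriever a real RAG system would use.
    """
    q_words = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q_words & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, history: list[str], docs: list[str]) -> str:
    """Combine retrieved knowledge and prior chat turns into one grounded prompt,
    which would then be sent to an LLM to draft an agent response suggestion."""
    return (
        "Knowledge:\n" + "\n".join(docs) + "\n\n"
        "Chat history:\n" + "\n".join(history) + "\n\n"
        f"Customer: {query}\nSuggested agent reply:"
    )

history = [
    "Customer: Hi, I bought a jacket last week.",
    "Agent: Hello! How can I help you today?",
]
query = "Can I return the jacket for a refund?"
top_docs = retrieve(query, KNOWLEDGE_BASE)
prompt = build_prompt(query, history, top_docs)
print(top_docs[0])
```

In a production system, the `print` would be replaced by an LLM call that conditions on `prompt`, and the suggestion would be surfaced to the human agent rather than sent to the customer directly, matching the agent-assist setup the abstract describes.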

Authors (5)
  1. Sriram Veturi (1 paper)
  2. Saurabh Vaichal (2 papers)
  3. Nafis Irtiza Tripto (8 papers)
  4. Reshma Lal Jagadheesh (1 paper)
  5. Nian Yan (3 papers)
Citations (3)