DomainRAG: A Chinese Benchmark for Evaluating Domain-specific Retrieval-Augmented Generation (2406.05654v2)

Published 9 Jun 2024 in cs.CL and cs.IR

Abstract: Retrieval-Augmented Generation (RAG) offers a promising solution to several limitations of LLMs, such as hallucination and difficulty keeping up with real-time updates. This approach is particularly critical in expert and domain-specific applications, where LLMs struggle to cover specialized knowledge. Evaluating RAG models in such scenarios is therefore crucial, yet current studies often rely on general knowledge sources like Wikipedia to assess models' abilities on common-sense problems. In this paper, we evaluate LLMs in RAG settings in a domain-specific context, college enrollment. We identify six abilities required of RAG models: conversational RAG, analyzing structural information, faithfulness to external knowledge, denoising, solving time-sensitive problems, and understanding multi-document interactions. Each ability has an associated dataset, built on shared corpora, for evaluating RAG models' performance. We evaluate popular LLMs such as Llama, Baichuan, ChatGLM, and GPT models. Experimental results indicate that existing closed-book LLMs struggle with domain-specific questions, highlighting the need for RAG models to solve expert problems. Moreover, there is room for RAG models to improve in comprehending conversational history, analyzing structural information, denoising, processing multi-document interactions, and maintaining faithfulness to expert knowledge. We hope future studies can address these problems better.
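
The abstract contrasts closed-book LLMs with RAG pipelines on domain-specific questions. Below is a minimal, hypothetical sketch of that comparison; the toy corpus, the token-overlap retriever, and the `answer_question` stub are illustrative assumptions, not the paper's actual datasets, models, or evaluation code.

```python
# Hypothetical sketch: closed-book vs. retrieval-augmented answering on a
# domain-specific (college enrollment) question. Everything here is a toy
# stand-in for the benchmark's real corpora and LLM calls.

from collections import Counter

CORPUS = [
    "The enrollment deadline for the 2024 intake is June 30.",
    "Applicants must submit transcripts and two recommendation letters.",
    "Tuition for the computer science program is 8,000 yuan per year.",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by simple token overlap with the question (toy retriever)."""
    q_tokens = Counter(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: sum((q_tokens & Counter(p.lower().split())).values()),
        reverse=True,
    )
    return scored[:k]

def answer_question(question: str, context: list[str] | None = None) -> str:
    """Stand-in for an LLM call; a real evaluation would query a model here."""
    if context:
        return context[0]   # answer grounded in the retrieved evidence
    return "I don't know."  # closed-book models often lack expert knowledge

question = "When is the enrollment deadline?"
print("closed-book:", answer_question(question))
print("RAG:        ", answer_question(question, retrieve(question, CORPUS)))
```

Each of the six abilities the paper names (conversational history, structural information, faithfulness, denoising, time-sensitivity, multi-document interaction) would stress a different part of this loop, e.g. denoising corresponds to answering correctly when `retrieve` also returns irrelevant passages.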

Authors (9)
  1. Shuting Wang (11 papers)
  2. Jiehan Cheng (4 papers)
  3. Yuqi Fu (4 papers)
  4. Peidong Guo (3 papers)
  5. Kun Fang (93 papers)
  6. Yutao Zhu (63 papers)
  7. Zhicheng Dou (113 papers)
  8. Jiongnan Liu (7 papers)
  9. Shiren Song (2 papers)
Citations (2)