CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog Evaluation (2211.11617v1)

Published 21 Nov 2022 in cs.CL

Abstract: Practical dialog systems need to deal with various knowledge sources, noisy user expressions, and the shortage of annotated data. To better address these problems, we propose CGoDial, a new challenging and comprehensive Chinese benchmark for multi-domain Goal-oriented Dialog evaluation. It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources: 1) a slot-based dialog (SBD) dataset with table-formed knowledge, 2) a flow-based dialog (FBD) dataset with tree-formed knowledge, and 3) a retrieval-based dialog (RBD) dataset with candidate-formed knowledge. To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing. The proposed experimental settings include the combinations of training with either the entire training set or a few-shot training set, and testing with either the standard test set or a hard test subset, which can assess model capabilities in terms of general prediction, fast adaptability and reliable robustness.
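
The abstract describes a 2×2 grid of experimental settings (full vs. few-shot training, standard vs. hard test) applied to each of the three sub-datasets. A minimal Python sketch of how one might enumerate these settings is shown below; the identifiers and the `evaluate` stub are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of the CGoDial evaluation grid (names are assumptions,
# not taken from the paper's released code).
from itertools import product

SUB_DATASETS = ["SBD", "FBD", "RBD"]      # slot-, flow-, retrieval-based dialog
TRAIN_REGIMES = ["full", "few_shot"]      # general prediction vs. fast adaptability
TEST_SPLITS = ["standard", "hard"]        # hard subset probes robustness

def evaluate(dataset: str, regime: str, split: str) -> float:
    """Placeholder for scoring a model on one setting (hypothetical)."""
    raise NotImplementedError

# Enumerate every combination of sub-dataset, training regime, and test split.
for dataset, regime, split in product(SUB_DATASETS, TRAIN_REGIMES, TEST_SPLITS):
    print(f"{dataset}: train={regime}, test={split}")
    # score = evaluate(dataset, regime, split)
```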

Authors (8)
  1. Yinpei Dai (17 papers)
  2. Wanwei He (10 papers)
  3. Bowen Li (166 papers)
  4. Yuchuan Wu (33 papers)
  5. Zheng Cao (49 papers)
  6. Zhongqi An (1 paper)
  7. Jian Sun (415 papers)
  8. Yongbin Li (128 papers)
Citations (12)