OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data (2402.12913v1)
Abstract: This paper describes a unified system for hallucination detection in LLMs, which won second prize in the model-agnostic track of SemEval-2024 Task 6 and also achieved considerable results in the model-aware track. The task aims to detect hallucinations produced by LLMs on three different text-generation tasks without labeled training data. We use prompt engineering and few-shot learning to verify the performance of different LLMs on the validation data. We then select the better-performing LLMs to generate high-quality weakly supervised training data, which satisfies both consistency across different LLMs and consistency of the optimal LLM under different sampling parameters. Finally, we finetune different LLMs on the constructed training data and find that a relatively small LLM can achieve a competitive level of performance in hallucination detection, compared to large LLMs and to prompt-based approaches using GPT-4.
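The weak-label filtering described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each example receives one label per LLM and one label per sampling configuration of the best LLM, and keeps an example only when both agreement conditions hold. The function name and label values are hypothetical.

```python
def select_weak_label(llm_labels, best_llm_labels):
    """Return a weakly supervised label for one example, or None to discard it.

    llm_labels: labels predicted by several different LLMs.
    best_llm_labels: labels predicted by the best-performing LLM under
    different sampling parameters (e.g. temperatures).

    An example is kept only if all LLMs agree with each other AND the best
    LLM agrees with itself across sampling parameters (the two consistency
    conditions from the abstract).
    """
    if not llm_labels or not best_llm_labels:
        return None
    if len(set(llm_labels)) == 1 and set(llm_labels) == set(best_llm_labels):
        return llm_labels[0]
    return None  # inconsistent predictions: drop the example
```

Examples passing both checks form the training set used to finetune the smaller detector LLMs.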
Authors: Chengcheng Wei, Ze Chen, Songtan Fang, Jiarong He, Max Gao