MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection (2403.00964v1)
Abstract: In Natural Language Generation (NLG), contemporary LLMs face several challenges, such as producing fluent yet inaccurate outputs and a reliance on fluency-centric evaluation metrics; this often leads to neural networks exhibiting "hallucinations". The SHROOM challenge focuses on automatically identifying these hallucinations in generated text. To tackle this problem, we introduce two key components: a data augmentation pipeline incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.
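The voting ensemble mentioned in the abstract can be sketched as a simple majority vote over per-sample labels produced by the three fine-tuned models. This is a minimal illustration, not the paper's exact implementation; the function and label names are assumptions for the sketch:

```python
from collections import Counter

def majority_vote(labels):
    """Return the label predicted by the most models.

    Ties are broken by first-seen order (Counter preserves
    insertion order for equal counts in Python 3.7+).
    """
    return Counter(labels).most_common(1)[0][0]

# Hypothetical per-model labels for one generated sentence:
per_model = ["Hallucination", "Not Hallucination", "Hallucination"]
print(majority_vote(per_model))  # -> Hallucination
```

With an odd number of binary classifiers, as here, a strict majority always exists, which is one reason three-model ensembles are a common choice.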