Team Trifecta at Factify5WQA: Setting the Standard in Fact Verification with Fine-Tuning (2403.10281v1)
Published 15 Mar 2024 in cs.CL, cs.AI, and cs.LG
Abstract: In this paper, we present Pre-CoFactv3, a comprehensive framework comprising Question Answering and Text Classification components for fact verification. Leveraging In-Context Learning, fine-tuned LLMs, and the FakeNet model, we address the challenges of fact verification. Our experiments explore diverse approaches: comparing different pre-trained LLMs, introducing FakeNet, and implementing various ensemble methods. Notably, our team, Trifecta, secured first place in the AAAI-24 Factify 3.0 Workshop, surpassing the baseline accuracy by 103% and maintaining a 70% lead over the second-place competitor. This success underscores the efficacy of our approach and its potential contributions to advancing fact verification research.
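Two of the techniques the abstract names, fine-tuning pre-trained language models for verdict classification and ensembling their predictions, lend themselves to a short illustration. The sketch below is not the authors' released Pre-CoFactv3 code: the checkpoint names, the three-way label set (Support / Neutral / Refute), and the soft-voting scheme are all assumptions made for illustration. It shows how several fine-tuned classifiers over claim-evidence pairs could be combined by averaging their predicted probabilities.

```python
# A minimal sketch (NOT the authors' released code) of two ideas from the
# abstract: fine-tuned pre-trained LMs as fact-verification classifiers,
# and a soft-voting ensemble over several of them. Checkpoint names and
# the label set are placeholders, not confirmed by the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["Support", "Neutral", "Refute"]  # assumed verdict label set

def load(name: str):
    """Load a tokenizer and a classification head over a pre-trained LM."""
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=len(LABELS)
    )
    model.eval()
    return tok, model

@torch.no_grad()
def predict_proba(tok, model, claim: str, evidence: str) -> torch.Tensor:
    # Claim and evidence are encoded together as a single sentence pair.
    inputs = tok(claim, evidence, truncation=True, return_tensors="pt")
    return model(**inputs).logits.softmax(dim=-1).squeeze(0)

def ensemble_predict(members, claim: str, evidence: str) -> str:
    # Soft voting: average each member's probabilities, then take argmax.
    probs = torch.stack(
        [predict_proba(tok, m, claim, evidence) for tok, m in members]
    ).mean(dim=0)
    return LABELS[int(probs.argmax())]

if __name__ == "__main__":
    # Hypothetical ensemble members; the paper compares several
    # pre-trained LLMs, but these exact checkpoints are illustrative.
    members = [load(n) for n in ("microsoft/deberta-v3-base", "roberta-base")]
    print(ensemble_predict(
        members,
        "The Eiffel Tower is in Berlin.",
        "The Eiffel Tower is located in Paris, France.",
    ))
```

Soft voting is only one of the ensemble methods a system like this could use; weighted averaging or majority voting would slot into `ensemble_predict` at the same point.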
Authors:
- Shang-Hsuan Chiang
- Ming-Chih Lo
- Lin-Wei Chao
- Wen-Chih Peng