Multi-Evidence based Fact Verification via A Confidential Graph Neural Network (2405.10481v1)
Abstract: Fact verification tasks aim to assess the veracity of textual claims against a trustworthy corpus. Existing fact verification models usually build a fully connected reasoning graph that treats claim-evidence pairs as nodes, connects them with edges, and uses the graph to propagate node semantics. However, noisy nodes also propagate their semantics along these edges, which distorts the semantic representations of other nodes and amplifies the noise signal. To mitigate this propagation of noisy semantic information, we introduce a Confidential Graph Attention Network (CO-GAT), which uses a node masking mechanism to model the nodes. Specifically, CO-GAT computes a confidence score for each node by estimating the relevance between the claim and the evidence piece; the node masking mechanism then uses these scores to control how much information flows from each node to the rest of the graph. CO-GAT achieves a 73.59% FEVER score on the FEVER dataset and demonstrates its generalization ability by extending its effectiveness to the science-specific domain.
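To make the node-masking idea concrete, below is a minimal PyTorch sketch of a confidence-gated graph attention layer in the spirit of CO-GAT. The sigmoid confidence head, the additive pairwise attention, and all layer shapes are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidenceGatedAttention(nn.Module):
    """Sketch of confidence-gated attention over claim-evidence nodes.

    Each node embedding encodes one claim-evidence pair. A confidence
    head scores claim-evidence relevance, and that score gates (masks)
    the node's outgoing messages so low-confidence, noisy nodes
    contribute less to the other nodes' representations.
    """

    def __init__(self, hidden_dim: int) -> None:
        super().__init__()
        # Hypothetical heads: a scalar confidence score per node and a
        # pairwise attention score; not the paper's exact layers.
        self.confidence_head = nn.Linear(hidden_dim, 1)
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (n, hidden_dim), one row per claim-evidence pair
        n, d = nodes.shape

        # Node confidence in (0, 1), estimating claim-evidence relevance.
        conf = torch.sigmoid(self.confidence_head(nodes))  # (n, 1)

        # Fully connected attention: score every sender-receiver pair.
        senders = nodes.unsqueeze(1).expand(n, n, d)    # [i, j] = node i
        receivers = nodes.unsqueeze(0).expand(n, n, d)  # [i, j] = node j
        logits = self.attn(torch.cat([senders, receivers], dim=-1)).squeeze(-1)
        weights = F.softmax(logits, dim=0)  # (n, n), normalized over senders

        # Node masking: scale each sender's message by its confidence,
        # damping semantic propagation from noisy nodes.
        masked = conf * nodes                # (n, d)
        return weights.transpose(0, 1) @ masked  # (n, d), aggregated per receiver
```

In this sketch the confidence score multiplies a node's outgoing message rather than its incoming ones, so an irrelevant evidence node is damped for every neighbor at once instead of being re-weighted per edge. A quick usage example: `ConfidenceGatedAttention(hidden_dim=768)(torch.randn(5, 768))` updates five claim-evidence pair embeddings in one pass.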
Authors: Yuqing Lan, Zhenghao Liu, Yu Gu, Xiaoyuan Yi, Xiaohua Li, Liner Yang, Ge Yu