
Syntactic Fusion: Enhancing Aspect-Level Sentiment Analysis Through Multi-Tree Graph Integration (2312.03738v1)

Published 28 Nov 2023 in cs.CL

Abstract: Recent progress in aspect-level sentiment classification has been propelled by the incorporation of graph neural networks (GNNs) leveraging syntactic structures, particularly dependency trees. Nevertheless, the performance of these models is often hampered by the innate inaccuracies of parsing algorithms. To mitigate this challenge, we introduce SynthFusion, an innovative graph ensemble method that amalgamates predictions from multiple parsers. This strategy blends diverse dependency relations prior to the application of GNNs, enhancing robustness against parsing errors while avoiding extra computational burdens. SynthFusion circumvents the pitfalls of overparameterization and diminishes the risk of overfitting, prevalent in models with stacked GNN layers, by optimizing graph connectivity. Our empirical evaluations on the SemEval14 and Twitter14 datasets affirm that SynthFusion not only outshines models reliant on single dependency trees but also eclipses alternative ensemble techniques, achieving this without an escalation in model complexity.
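The core idea the abstract describes, blending dependency relations from multiple parsers into a single graph before applying a GNN, can be sketched as a simple union of edge sets. This is a minimal illustration, not the paper's implementation: the function name, the four-token example, and the specific (head, dependent) edge pairs are all hypothetical.

```python
import numpy as np

def fuse_dependency_graphs(edge_lists, num_tokens):
    """Merge dependency edges from several parsers into one undirected
    adjacency matrix (union of edges, plus self-loops), as a sketch of
    ensembling multiple dependency trees before a GNN layer."""
    adj = np.eye(num_tokens)  # self-loops, as is common for GCN inputs
    for edges in edge_lists:
        for head, dep in edges:
            adj[head, dep] = 1.0
            adj[dep, head] = 1.0  # treat dependencies as undirected
    return adj

# Hypothetical parses of a 4-token sentence from two different parsers
# (token indices only; each edge is a (head, dependent) pair).
parser_a = [(1, 0), (1, 2), (2, 3)]
parser_b = [(1, 0), (1, 3), (1, 2)]

fused = fuse_dependency_graphs([parser_a, parser_b], num_tokens=4)
print(int(fused.sum()))  # 12: 4 self-loops + 4 union edges counted both ways
```

Because the fusion happens on the graph itself, a downstream GNN sees a single (denser) adjacency matrix rather than one graph per parser, which matches the abstract's claim of robustness to parse errors without extra GNN layers or parameters.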

