
Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting (2310.13297v1)

Published 20 Oct 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Automatic response forecasting for news media plays a crucial role in enabling content producers to efficiently predict the impact of news releases and prevent unexpected negative outcomes such as social conflict and moral injury. To forecast responses effectively, it is essential to develop measures that leverage the social dynamics and contextual information surrounding individuals, especially when explicit profiles or historical actions of the users are limited (such users are referred to as lurkers). As shown in a previous study, 97% of all tweets are produced by only the most active 25% of users. However, existing approaches offer limited exploration of how to best process and utilize these important features. To address this gap, we propose a novel framework, named SocialSense, that leverages an LLM to induce a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics. We hypothesize that the induced graph, by bridging the gap between distant users who share similar beliefs, allows the model to effectively capture response patterns. Our method surpasses the existing state of the art in experimental evaluations under both zero-shot and supervised settings, demonstrating its effectiveness in response forecasting. Moreover, the analysis reveals the framework's capability to handle unseen-user and lurker scenarios, further highlighting its robustness and practical applicability.
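The framework's two core ideas can be illustrated with a small, self-contained sketch: belief nodes induced by an LLM are layered on top of the existing follower graph, and information is then propagated over the augmented graph so that distant users who share a belief exchange signal. The sketch below is an illustration only, not the authors' implementation: the belief labels are assumed to have been extracted beforehand by prompting an LLM on each user's posts, the `build_belief_graph` and `propagate` helpers are hypothetical names, and plain mean aggregation stands in for whatever graph network the paper actually uses.

```python
# Minimal sketch (not the paper's code): belief-augmented graph + propagation.
import networkx as nx
import numpy as np

def build_belief_graph(follow_edges, user_beliefs):
    """Add one node per distinct belief and connect each user to the beliefs
    an LLM attributed to them, bridging distant but like-minded users."""
    g = nx.Graph()
    g.add_edges_from(follow_edges)                     # existing social ties
    for user, beliefs in user_beliefs.items():
        for belief in beliefs:
            g.add_edge(user, f"belief::{belief}")      # user -- belief edge
    return g

def propagate(g, features, hops=2):
    """Plain mean-aggregation message passing over the augmented graph
    (a stand-in for the paper's graph-based propagation step)."""
    dim = len(next(iter(features.values())))
    feats = {node: features.get(node, np.zeros(dim)) for node in g}
    for _ in range(hops):
        updated = {}
        for node in g:
            neigh = [feats[n] for n in g.neighbors(node)]
            agg = np.mean(neigh, axis=0) if neigh else np.zeros(dim)
            updated[node] = 0.5 * feats[node] + 0.5 * agg   # residual-style mix
        feats = updated
    return feats

# Toy usage: alice and carol share a belief but have no direct social tie.
follow_edges = [("alice", "bob"), ("carol", "dave")]
user_beliefs = {"alice": ["climate_action"], "carol": ["climate_action"],
                "bob": [], "dave": []}
features = {u: np.random.rand(4) for u in ["alice", "bob", "carol", "dave"]}
g = build_belief_graph(follow_edges, user_beliefs)
print(propagate(g, features)["alice"])                 # now carries carol's signal
```

In the toy usage, alice and carol never follow each other, yet after two hops their representations mix through the shared belief node; this is the bridging effect between distant, like-minded users that the abstract hypothesizes.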
