Balancing Information Perception with Yin-Yang: Agent-Based Information Neutrality Model for Recommendation Systems (2404.04906v1)

Published 7 Apr 2024 in cs.HC and cs.IR

Abstract: While preference-based recommendation algorithms effectively enhance user engagement by recommending personalized content, they often result in the creation of "filter bubbles". These bubbles restrict the range of information users interact with, inadvertently reinforcing their existing viewpoints. Previous research has focused on modifying these underlying algorithms to tackle this issue. Yet, approaches that maintain the integrity of the original algorithms remain largely unexplored. This paper introduces an Agent-based Information Neutrality model grounded in the Yin-Yang theory, namely, AbIN. This innovative approach targets the imbalance in information perception within existing recommendation systems. It is designed to integrate with these preference-based systems, ensuring the delivery of recommendations with neutral information. Our empirical evaluation of this model proved its efficacy, showcasing its capacity to expand information diversity while respecting user preferences. Consequently, AbIN emerges as an instrumental tool in mitigating the negative impact of filter bubbles on information consumption.
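The abstract describes AbIN as a layer that sits on top of an unmodified preference-based recommender and re-balances its output toward neutral information. As an illustration only, the Yin-Yang balancing idea could be sketched as a post-hoc re-ranker: estimate the user's viewpoint bias from their top preferred items, then award a bonus to items whose stance opposes that bias. All function names, the stance scores, and the `balance` weight below are hypothetical assumptions for this sketch, not the paper's actual method.

```python
def neutral_rerank(items, pref_score, stance, balance=0.5):
    """Re-rank items by blending preference with a stance-balancing bonus.

    items      : list of item ids
    pref_score : dict item -> preference score in [0, 1] (from the base recommender)
    stance     : dict item -> viewpoint stance in [-1, 1] (negative = yin, positive = yang)
    balance    : weight on the neutrality bonus (0 keeps the original ranking)
    """
    # Approximate the user's bias as the mean stance of their top preferred items.
    top = sorted(items, key=lambda i: pref_score[i], reverse=True)[: max(1, len(items) // 2)]
    user_bias = sum(stance[i] for i in top) / len(top)

    def score(i):
        # Items whose stance opposes the user's bias get a bonus, widening
        # information diversity without discarding the preference signal.
        neutrality_bonus = -user_bias * stance[i]
        return (1 - balance) * pref_score[i] + balance * neutrality_bonus

    return sorted(items, key=score, reverse=True)


items = ["a", "b", "c", "d"]
pref = {"a": 0.9, "b": 0.8, "c": 0.3, "d": 0.2}
st = {"a": 1.0, "b": 0.8, "c": -0.9, "d": -1.0}
# With balance=0.6, counter-stance items c and d rise above preferred a and b.
print(neutral_rerank(items, pref, st, balance=0.6))
```

This keeps the base algorithm untouched, matching the paper's stated goal of preserving the integrity of the original recommender while expanding diversity.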

Authors (6)
  1. Mengyan Wang
  2. Yuxuan Hu
  3. Shiqing Wu
  4. Weihua Li
  5. Quan Bai
  6. Verica Rupar
Citations (1)
