MEDIATE: Mutually Endorsed Distributed Incentive Acknowledgment Token Exchange (2404.03431v1)

Published 4 Apr 2024 in cs.MA

Abstract: Recent advances in multi-agent systems (MAS) have shown that incorporating peer incentivization (PI) mechanisms vastly improves cooperation. Especially in social dilemmas, communication between the agents helps to overcome sub-optimal Nash equilibria. However, incentivization tokens need to be carefully selected. Furthermore, real-world applications might impose increased privacy requirements and limit exchange. Therefore, we extend the PI protocol for mutual acknowledgment token exchange (MATE) and provide additional analysis of the impact of the chosen tokens. Building upon those insights, we propose mutually endorsed distributed incentive acknowledgment token exchange (MEDIATE), an extended PI architecture employing automatic token derivation via decentralized consensus. Empirical results show stable agreement on appropriate tokens, yielding superior performance compared to static tokens and state-of-the-art approaches in different social dilemma environments with various reward distributions.
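The abstract states that MEDIATE derives incentive tokens automatically via decentralized consensus rather than fixing them statically. The paper's exact update rule is not reproduced here; the sketch below illustrates the general idea with plain distributed average consensus, where each agent starts from its own token proposal (e.g., an estimate of local reward scale) and iteratively averages with its neighbors until all agents agree on a common token value. The function name, the ring topology, and the mixing parameter `alpha` are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch (not the MEDIATE protocol itself): agents reach
# agreement on a shared incentive token value via distributed average
# consensus over a communication graph.

def consensus_token(proposals, neighbors, steps=50, alpha=0.5):
    """Iteratively mix each agent's token proposal with its neighbors' values.

    proposals: list of initial token values, one per agent
    neighbors: dict mapping agent index -> list of neighbor indices
    alpha:     mixing weight placed on the neighborhood average
    """
    values = list(proposals)
    for _ in range(steps):
        new_values = []
        for i, v in enumerate(values):
            nbrs = neighbors[i]
            avg = sum(values[j] for j in nbrs) / len(nbrs)
            # Convex combination of own value and neighborhood average
            new_values.append((1 - alpha) * v + alpha * avg)
        values = new_values
    return values

# Four agents on a ring, each proposing a different token value
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tokens = consensus_token([1.0, 2.0, 3.0, 4.0], nbrs)
```

On a connected graph with this symmetric update, all values converge to the mean of the initial proposals, so the agents end up endorsing one common token value without any central coordinator, which is the property the abstract's "stable agreement on appropriate tokens" refers to.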

