Robustness and fairness of LLM-driven agents in ABMs
Develop methods to ensure the robustness and fairness of large language model (LLM)-driven agents used in large-scale agent-based models by addressing inconsistent or biased outputs that may lead to unrealistic agent behaviors.
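One common way to damp inconsistent outputs is repeated sampling with a majority vote (self-consistency), reporting the agreement rate as a per-decision robustness score. A minimal sketch, assuming a categorical action space and using a stubbed `llm_decide` function in place of a real LLM call (the function name, action labels, and sampling weights are all illustrative, not from the source):

```python
import random
from collections import Counter

def llm_decide(prompt: str, rng: random.Random) -> str:
    """Stand-in for an LLM call: returns a noisy categorical action.

    Hypothetical behavior: the agent mostly picks "save" but is
    occasionally inconsistent, mimicking the instability the task
    description highlights.
    """
    return rng.choices(["save", "spend"], weights=[0.8, 0.2])[0]

def robust_decision(prompt: str, n_samples: int = 11, seed: int = 0):
    """Majority vote over repeated samples of the same prompt.

    Returns the modal action and its agreement rate; a low agreement
    rate flags decisions where the underlying model is unstable and
    the simulated agent's behavior may be unrealistic.
    """
    rng = random.Random(seed)
    votes = Counter(llm_decide(prompt, rng) for _ in range(n_samples))
    action, count = votes.most_common(1)[0]
    agreement = count / n_samples
    return action, agreement
```

The agreement score can also support a simple fairness probe: running `robust_decision` on prompts that differ only in a demographic attribute and comparing the resulting action distributions would surface systematically biased outputs before they propagate through a large-scale simulation.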
References
First, ensuring the robustness and fairness of LLM-driven agents remains an open challenge, as LLMs can produce inconsistent or biased outputs, potentially leading to unrealistic agent behaviors.
— On the limits of agency in agent-based models
(2409.10568 - Chopra et al., 14 Sep 2024) in Section 8, Discussion (Limitations)