
Robustness and fairness of LLM-driven agents in ABMs

Develop methods to ensure the robustness and fairness of large language model (LLM)-driven agents in large-scale agent-based models, addressing the inconsistent or biased LLM outputs that can lead to unrealistic agent behaviors.


Background

The paper integrates LLMs into agent-based models via LLM archetypes, in which a single LLM query stands in for a group of similar agents, enabling adaptive behavior at population scale. While this approach improves expressiveness and scalability, the authors note that LLM outputs can be inconsistent or biased, which risks producing unrealistic agent decisions that degrade model validity.
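To make the archetype mechanism concrete, the following is a minimal sketch, not the paper's actual API: agents are grouped by shared attributes into archetypes, one LLM query is issued per archetype, and the answer is broadcast to all member agents. The names (`Archetype`, `llm_decide`) and the prompt wording are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Archetype:
    """Hypothetical key grouping agents that share behavioral attributes."""
    age_band: str
    occupation: str
    risk_tolerance: str


def llm_decide(prompt: str) -> str:
    # Stub standing in for a real LLM call; a production version would
    # invoke a chat-completion API here.
    return "yes"


def assign_behaviors(agents):
    # Group a large agent population into a small set of archetypes.
    groups = defaultdict(list)
    for agent in agents:
        key = Archetype(agent.age_band, agent.occupation, agent.risk_tolerance)
        groups[key].append(agent)

    # One LLM query per archetype; the answer is broadcast to all members,
    # so query cost scales with the number of archetypes, not of agents.
    for archetype, members in groups.items():
        prompt = (
            f"You are a {archetype.age_band} adult working in "
            f"{archetype.occupation} with {archetype.risk_tolerance} risk "
            "tolerance. Do you comply with the new policy? Answer yes or no."
        )
        decision = llm_decide(prompt)
        for agent in members:
            agent.behavior = decision
```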

Ensuring robustness and fairness is therefore crucial for the credibility and policy relevance of ABM outputs, especially when simulating millions of agents whose behaviors are informed by LLM-generated decisions.
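One plausible direction for the robustness side of the problem, sketched below under the same assumptions (reusing the hypothetical `llm_decide` stub above), is a self-consistency check: sample the LLM several times per archetype and accept the majority answer only when agreement is high enough, falling back to a conservative default otherwise. The threshold and fallback are illustrative choices, not the paper's method.

```python
from collections import Counter


def robust_decide(prompt: str, n_samples: int = 5,
                  agreement: float = 0.8, fallback: str = "no") -> str:
    # Sample the LLM several times and keep the majority answer only if
    # it is sufficiently consistent; otherwise fall back to a conservative
    # default rather than inject an unreliable behavior into the simulation.
    answers = [llm_decide(prompt) for _ in range(n_samples)]
    top, count = Counter(answers).most_common(1)[0]
    return top if count / n_samples >= agreement else fallback
```

A similar audit could address fairness by comparing accepted decision rates across demographic archetypes, flagging systematically skewed outputs before they shape population-level behavior.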

References

First, ensuring the robustness and fairness of LLM-driven agents remains an open challenge, as LLMs can produce inconsistent or biased outputs, potentially leading to unrealistic agent behaviors.

On the limits of agency in agent-based models (arXiv:2409.10568, Chopra et al., 14 Sep 2024), Section 8: Discussion (Limitations)