Reducing Selection Bias in Large Language Models (2402.01740v3)
Abstract: LLMs like gpt-3.5-turbo-0613 and claude-instant-1.2 are vital in interpreting and executing semantic tasks. Unfortunately, these models' inherent biases adversely affect their performance. Particularly affected is object selection from lists, a fundamental operation in digital navigation and decision-making. This research critically examines these biases and quantifies their effects on a representative list-selection task. To explore these biases, we run experiments manipulating temperature, list length, object identity, object type, prompt complexity, and model, isolating and measuring the influence of each bias on selection behavior. Our findings show that the bias structure depends strongly on the model, with object type modulating the magnitude of the effect. We observe a strong primacy effect, causing the first objects in a list to be disproportionately represented in outputs. The use of guard rails, a prompt-engineering method for ensuring a response structure, increases bias and decreases instruction adherence when applied to a selection task. The bias is ablated when the guard-rail step is separated from the list-sampling step, lowering the complexity of each individual task. We provide recommendations for LLM applications and theoretically suggest that LLMs experience a form of cognitive load that is compensated for with bias.
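The mitigation described in the abstract can be pictured as a two-call pipeline. The sketch below is illustrative rather than the authors' implementation: it assumes the OpenAI Python client (openai>=1.0), the gpt-3.5-turbo-0613 model named above, a hypothetical four-item list, and prompt wording of our own choosing. It contrasts a single guard-railed selection prompt with a decomposed version in which the format-enforcement (guard-rail) step is separated from the list-sampling step.

```python
# Minimal sketch (not the authors' code) of separating the guard-rail step
# from the list-sampling step, per the mitigation described in the abstract.
# Assumes the OpenAI Python client; the item list and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-3.5-turbo-0613"
items = ["red", "green", "blue", "yellow"]  # hypothetical selection list


def ask(prompt: str, temperature: float = 1.0) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content.strip()


# Combined prompt: selection and guard rail in one step (the higher-complexity
# condition the abstract associates with increased bias).
combined = ask(
    "Pick one item from this list: " + ", ".join(items) + ". "
    'Respond with JSON exactly of the form {"choice": "<item>"} and nothing else.'
)

# Decomposed pipeline: step 1 samples from the list with no format constraint;
# step 2 applies the guard rail to the already-made choice.
free_choice = ask("Pick one item from this list: " + ", ".join(items) + ".")
formatted = ask(
    'Reformat the following answer as JSON of the form {"choice": "<item>"} '
    "and output nothing else:\n" + free_choice
)

print("combined:", combined)
print("decomposed:", formatted)
```

Because each call in the decomposed version carries only one task (choose an item, or format an answer), the per-prompt complexity is lower, which is the condition under which the abstract reports the selection bias is ablated.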
Authors: J. E. Eicher, R. F. Irgolič