Bias control in large language models
Develop robust methods to control and mitigate harmful biases in large language models, improving the safety and reliability of the applications that depend on them.
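Because the problem statement is abstract, a concrete probe helps ground it. The sketch below is an illustration, not a method from the cited paper: it shows one standard way to quantify bias in a causal language model by comparing the log-likelihood the model assigns to minimal sentence pairs that differ only in a demographic term, in the style of benchmarks such as CrowS-Pairs. The model name `gpt2` and the sentence pairs are illustrative assumptions.

```python
# Minimal sketch (not from the cited paper): score minimally paired sentences
# that differ only in a group-referring word and compare their likelihoods.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids returns the mean cross-entropy over predicted tokens.
        loss = model(ids, labels=ids).loss
    # The loss is averaged over (len - 1) shifted positions; undo the mean.
    return -loss.item() * (ids.size(1) - 1)

# Hypothetical minimal pairs: identical except for the demographic term.
pairs = [
    ("The doctor said he would review the results.",
     "The doctor said she would review the results."),
    ("The nurse said she would review the results.",
     "The nurse said he would review the results."),
]

for a, b in pairs:
    delta = sequence_log_prob(a) - sequence_log_prob(b)
    # A large |delta| means the model strongly prefers one variant, which is
    # one simple signal of stereotypical association.
    print(f"{delta:+.2f}  {a!r} vs {b!r}")
```

Systematic likelihood gaps across many such pairs give mitigation work a measurable target: data filtering, fine-tuning, or decoding-time interventions can then be judged by whether they shrink the gaps.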
References
Stricter filtering of training content and better transparency into a training set's data will provide better safety, but bias control is likely to be a longstanding open problem of LLMs.
— Evolving Code with A Large Language Model
(arXiv:2401.07102, Hemberg et al., 13 Jan 2024), Section 2, Background: Large Language Models (fourth paragraph, on bias)
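The quoted passage names stricter filtering of training content as one mitigation. As a hypothetical illustration (the paper does not specify a pipeline), a document-level filter might score each training document with a lexical heuristic or a classifier and drop those above a threshold; the blocklist, threshold, and helper names below are placeholders.

```python
# Minimal sketch (an assumption, not the paper's pipeline) of document-level
# training-data filtering: drop documents whose bias/toxicity score exceeds
# a threshold. A real system would use a trained classifier, not a blocklist.
from typing import Iterable, Iterator

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def bias_score(doc: str) -> float:
    """Crude lexical score: fraction of tokens that hit the blocklist."""
    tokens = doc.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def filter_corpus(docs: Iterable[str], threshold: float = 0.01) -> Iterator[str]:
    """Yield only documents scoring below the threshold; report the rest."""
    for doc in docs:
        s = bias_score(doc)
        if s < threshold:
            yield doc
        else:
            print(f"dropped (score={s:.3f}): {doc[:60]!r}")

if __name__ == "__main__":
    corpus = ["a clean document", "a document containing slur1"]
    kept = list(filter_corpus(corpus))
    print(f"kept {len(kept)} of {len(corpus)} documents")
```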