Optimal Structuring of AI Red-Teaming Activities
Develop structured procedures for red-teaming generative AI systems that maximize the likelihood of discovering flaws and vulnerabilities.
References
For example, the definition offered by the presidential executive order leaves the following key question unanswered: How should the activity be structured to maximize the likelihood of finding such flaws and vulnerabilities?
— Red-Teaming for Generative AI: Silver Bullet or Security Theater?
(Feffer et al., arXiv:2401.15897, 29 Jan 2024), Section 1, Introduction