OpenAI developed a hazard analysis framework to assess the safety risks posed by large language models (LLMs) such as Codex across several dimensions.
The framework evaluates advanced code generation systems according to the complexity and expressivity of the code they can produce, and according to how well they understand and execute prompts compared with human developers.
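As a rough illustration of what scoring along those dimensions might look like, the following is a minimal sketch, not OpenAI's actual framework: the field names, weighting, and human-baseline comparison are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical illustration only: these axes mirror the evaluation
# dimensions described above (complexity, expressivity, prompt
# understanding vs. a human baseline). The names, scales, and equal
# weighting are assumptions, not OpenAI's published methodology.

@dataclass
class CapabilityProfile:
    complexity: float            # 0-1: difficulty of problems the model solves
    expressivity: float          # 0-1: breadth of constructs/idioms it can produce
    prompt_understanding: float  # 0-1: fraction of prompts executed as intended
    human_baseline: float        # 0-1: comparable aggregate score for a human developer

def relative_capability(p: CapabilityProfile) -> float:
    """Score model capability relative to the human baseline (hypothetical)."""
    model_score = (p.complexity + p.expressivity + p.prompt_understanding) / 3
    return model_score / p.human_baseline if p.human_baseline else float("inf")

if __name__ == "__main__":
    profile = CapabilityProfile(
        complexity=0.55, expressivity=0.70,
        prompt_understanding=0.60, human_baseline=0.85,
    )
    print(f"Capability relative to human baseline: {relative_capability(profile):.2f}")
```

The point of such a comparison is only to locate a model's capabilities relative to human performance; the actual hazard analysis weighs many more factors than this toy score captures.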