OpenAI aims to align artificial general intelligence (AGI) with human values and make it follow human intent through iterative, empirical research.
Their approach involves training AI systems using human feedback, training AI systems to assist human evaluation, and training AI systems to do alignment research themselves, with the goal of building AI that can advance alignment research faster and better than humans can.
Alignment research: Research aimed at making artificial general intelligence (AGI) align with human values and follow human intent.
Training AI using human feedback: Teaching AI systems to learn from human input and improve their alignment with human values.
Assisting human evaluation: Training AI systems to help humans assess AI performance and alignment with human intent.
Doing alignment research: Developing AI systems that can conduct alignment research themselves, helping to create better alignment techniques and solve open alignment problems.
Artificial general intelligence (AGI): AI systems with human-level (or greater) intelligence, capable of the broad range of tasks that normally require human intelligence.
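The "training AI using human feedback" idea above can be illustrated with a toy sketch: learning a reward model from pairwise human preferences via a Bradley-Terry objective, the kind of comparison-based objective used in RLHF-style pipelines. Everything here (the feature vectors, the simulated "human" preferences, the hyperparameters) is an illustrative assumption, not OpenAI's actual implementation.

```python
import math
import random

# Toy sketch of reward-model learning from pairwise human preferences.
# A "response" is a 3-dimensional feature vector; the simulated human
# prefers the response with the higher hidden quality score (dot product
# with true_w). This is an illustrative assumption, not a real pipeline.
random.seed(0)
true_w = [0.8, -0.3, 0.5]  # hidden "human values" (assumed for the demo)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulate human feedback as (preferred, rejected) pairs.
pairs = []
for _ in range(2000):
    a = [random.gauss(0, 1) for _ in range(3)]
    b = [random.gauss(0, 1) for _ in range(3)]
    if dot(true_w, a) >= dot(true_w, b):
        pairs.append((a, b))
    else:
        pairs.append((b, a))

# Fit reward weights w by maximizing the Bradley-Terry log-likelihood:
# P(a preferred over b) = sigmoid(r(a) - r(b)), with r(x) = w . x.
w = [0.0, 0.0, 0.0]
lr = 0.05
for _ in range(20):
    for preferred, rejected in pairs:
        p = sigmoid(dot(w, preferred) - dot(w, rejected))
        grad_scale = 1.0 - p  # gradient of log P w.r.t. the score gap
        for i in range(3):
            w[i] += lr * grad_scale * (preferred[i] - rejected[i])

# The learned reward should rank a clearly better response above a worse one.
good = [1.0, -1.0, 1.0]   # aligned with true_w
bad = [-1.0, 1.0, -1.0]
print(dot(w, good) > dot(w, bad))  # True
```

The design point this sketch makes is that the system never sees the "true" values directly; it only sees which of two outputs a human preferred, and the reward model recovers the underlying preference direction from enough comparisons.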