Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. This is a concern OpenAI must account for when optimizing objectives that are difficult or costly to measure directly.
In such cases, OpenAI optimizes a proxy objective, such as a learned reward model, in place of the true objective (e.g., helpfulness or factual accuracy), and uses best-of-n sampling as a simple method for optimizing against that proxy.
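Best-of-n sampling can be sketched in a few lines: draw n candidate outputs from the generator, score each with the proxy reward, and return the highest-scoring one. The sketch below is illustrative only; the generator and reward function are hypothetical stand-ins, not OpenAI's actual models.

```python
def best_of_n(generate, proxy_reward, n):
    """Draw n candidate outputs and return the one the proxy reward scores highest."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=proxy_reward)

# Toy stand-ins (hypothetical): the "generator" cycles through canned
# completions, and the proxy reward simply prefers longer answers.
completions = iter(["ok", "a detailed answer", "short"])
generate = lambda: next(completions)
proxy_reward = len  # stand-in for a learned reward model's score

print(best_of_n(generate, proxy_reward, n=3))  # → "a detailed answer"
```

Note that as n grows, best-of-n optimizes the proxy more aggressively, which is exactly the regime where Goodhart's Law predicts the proxy can diverge from the true objective.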