WebGPT: Browser-assisted question-answering with human feedback (2112.09332v3)

Published 17 Dec 2021 in cs.CL, cs.AI, and cs.LG

Abstract: We fine-tune GPT-3 to answer long-form questions using a text-based web-browsing environment, which allows the model to search and navigate the web. By setting up the task so that it can be performed by humans, we are able to train models on the task using imitation learning, and then optimize answer quality with human feedback. To make human evaluation of factual accuracy easier, models must collect references while browsing in support of their answers. We train and evaluate our models on ELI5, a dataset of questions asked by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior cloning, and then performing rejection sampling against a reward model trained to predict human preferences. This model's answers are preferred by humans 56% of the time to those of our human demonstrators, and 69% of the time to the highest-voted answer from Reddit.

Authors (18)
  1. Reiichiro Nakano (5 papers)
  2. Jacob Hilton (18 papers)
  3. Suchir Balaji (4 papers)
  4. Jeff Wu (11 papers)
  5. Long Ouyang (9 papers)
  6. Christina Kim (2 papers)
  7. Christopher Hesse (9 papers)
  8. Shantanu Jain (16 papers)
  9. Vineet Kosaraju (9 papers)
  10. William Saunders (9 papers)
  11. Xu Jiang (21 papers)
  12. Karl Cobbe (9 papers)
  13. Tyna Eloundou (9 papers)
  14. Gretchen Krueger (11 papers)
  15. Kevin Button (2 papers)
  16. Matthew Knight (6 papers)
  17. Benjamin Chess (3 papers)
  18. John Schulman (43 papers)
Citations (1,044)

Summary

WebGPT: Browser-Assisted Question-Answering with Human Feedback

The paper presents WebGPT, a novel approach to improving long-form question-answering (LFQA) by fine-tuning an LLM to interact within a text-based web-browsing environment. It outlines how integrating web-based information retrieval with subsequent refinement through human feedback can significantly enhance the LFQA capabilities of a pre-trained model, specifically GPT-3.

Introduction and Motivation

Existing LFQA systems have shown limitations in generating high-quality answers, primarily due to the difficulty of retrieving relevant information and synthesizing it into coherent responses. Previous methods have often succeeded at either retrieval or synthesis but have struggled to integrate the two effectively. WebGPT addresses these challenges by using an existing web-search API (Bing) for document retrieval and fine-tuning GPT-3 for synthesis. The key innovation of WebGPT lies in its environment design and training framework, which combines imitation learning, reinforcement learning (RL), and human feedback to improve the accuracy and coherence of responses.

Environment Design

At the core of WebGPT is a custom-built text-based web-browsing environment where a model interacts with web pages to retrieve and synthesize information. The environment includes capabilities for performing search queries, navigating through search results, selecting relevant links, quoting text, and formulating final answers. Human demonstrators initially perform these tasks in a graphical interface, creating a dataset of demonstrations that the model uses for behavior cloning (BC).
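
To make the interaction model concrete, here is a minimal sketch of how such a command-driven browsing loop might look. The names (`BrowserState`, `run_episode`, `policy`, `env`) and the exact command strings are illustrative assumptions, not the paper's actual interface; what the sketch reflects from the paper is that the model acts by issuing text commands (searching, opening results, quoting passages) and ends the episode by answering from its collected quotes.

```python
# Minimal sketch of a command-driven text-browsing episode.
# Names (BrowserState, policy, env) are illustrative, not the paper's code.

from dataclasses import dataclass, field
from typing import List


@dataclass
class BrowserState:
    question: str
    page_text: str = ""                               # text rendering of the current page
    quotes: List[str] = field(default_factory=list)   # references collected while browsing


def run_episode(policy, env, question: str, max_actions: int = 30) -> str:
    """Roll out one browsing episode: the policy emits text commands
    (e.g. issuing a search, opening a result, quoting a passage) and the
    environment returns an updated text observation; browsing ends when
    the policy decides it is ready to answer."""
    state = BrowserState(question=question)
    observation = env.reset(question)
    for _ in range(max_actions):
        command = policy.act(observation)   # e.g. "Search <query>" or "Quote <span>"
        if command.startswith("End"):       # policy signals it is ready to answer
            break
        observation, state = env.step(command, state)
    # The final answer is composed from the question and the collected quotes,
    # which also serve as the references shown to human evaluators.
    return policy.compose_answer(question, state.quotes)
```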

Training Methodologies

The training process for WebGPT involves several stages:

  1. Behavior Cloning (BC): The model undergoes supervised fine-tuning based on the demonstrations provided by human users interacting with the web-browsing environment. This stage ensures the model can mimic human browsing behavior.
  2. Reward Modeling (RM): A reward model is trained using comparisons between pairs of model-generated answers. Human labelers provide preference judgments, creating a dataset that quantifies human preferences. This reward model predicts these preferences, allowing subsequent optimization.
  3. Reinforcement Learning (RL): Leveraging Proximal Policy Optimization (PPO), the model further fine-tunes its browsing and answering capabilities. The reward model score at the end of episodes, combined with a KL-divergence penalty, guides the optimization.
  4. Rejection Sampling (Best-of-n): This technique samples multiple answers and selects the highest-scoring one according to the trained reward model. It requires no further training but uses additional inference-time compute (see the sketch after this list).
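
The human-feedback components above can be summarized in a short sketch: a pairwise preference loss for the reward model, a KL-penalized reward for PPO, and best-of-n selection at inference time. These follow the standard recipes the paper builds on; the function names, the `beta` coefficient, and the `policy`/`reward_model` objects are illustrative assumptions, not WebGPT's actual implementation.

```python
# Hedged sketch of the human-feedback components: a pairwise reward-model loss,
# a KL-penalized episode reward for PPO, and best-of-n rejection sampling.
# All names are illustrative; this is not the paper's code.

import torch
import torch.nn.functional as F


def preference_loss(reward_model, question, answer_a, answer_b, a_is_preferred: bool):
    """Pairwise preference objective: the reward model should score the
    human-preferred answer higher than the rejected one."""
    r_a = reward_model(question, answer_a)    # scalar score for answer A
    r_b = reward_model(question, answer_b)    # scalar score for answer B
    sign = 1.0 if a_is_preferred else -1.0
    return -F.logsigmoid(sign * (r_a - r_b))  # log-sigmoid of the signed score gap


def rl_reward(rm_score: float, logprob_policy: float, logprob_bc: float, beta: float) -> float:
    """Reward used during PPO fine-tuning: the reward-model score at the end of
    the episode minus a KL-style penalty that keeps the policy close to the
    behavior-cloned model (beta is a tunable coefficient)."""
    return rm_score - beta * (logprob_policy - logprob_bc)


def best_of_n(policy, reward_model, question, n: int = 16):
    """Rejection sampling: draw n candidate answers from the policy and return
    the one the reward model scores highest; no extra training, only extra
    inference-time compute."""
    candidates = [policy.generate(question) for _ in range(n)]
    scores = torch.tensor([float(reward_model(question, c)) for c in candidates])
    return candidates[int(scores.argmax())]
```

Notably, as the abstract states, the paper's best overall configuration is the behavior-cloned model combined with rejection sampling against the reward model, rather than the RL-trained policy alone.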

Evaluation and Results

WebGPT demonstrates substantial improvements in two key evaluations on the ELI5 dataset:

  1. Comparison with Human Demonstrators: The best WebGPT model's answers are preferred 56% of the time over those written by human demonstrators, suggesting performance competitive with, or superior to, humans working in the same browsing environment.
  2. Comparison with ELI5 Reddit Answers: When compared with the top-voted answers from Reddit, WebGPT's best model generates preferred answers 69% of the time, significantly surpassing previous benchmarks.

The evaluation also extends to TruthfulQA, where the WebGPT models outperform base GPT-3 models, particularly in balancing truthfulness and informativeness, thus indicating an enhanced capability in handling adversarial questions.

Implications and Future Directions

The enhancements in LFQA demonstrated by WebGPT's approach have significant implications. Practically, such systems can provide more accurate and referenced information, which is crucial for applications requiring reliable automated responses. Theoretically, the integration of human feedback into training paradigms marks a promising direction in improving model interpretability and aligning outputs with human evaluative standards.

Speculation on Future Developments

Future research can build on the findings of WebGPT by exploring:

  • Adversarial Training: Incorporating adversarially selected questions to further enhance the robustness of information retrieval and synthesis.
  • Exploration in RL: Refining exploration strategies in RL to better align with human evaluative metrics and further reduce overoptimization risks.
  • Cross-disciplinary Criteria: Developing more robust and epistemically sound factual accuracy criteria to guide training and evaluation.

In summary, the WebGPT paper details a significant advancement in LFQA by demonstrating how a synergistic approach combining web-based retrieval, GPT-3 fine-tuning, and extensive human feedback can produce human-competitive answers. The paper not only marks progress in practical AI capabilities but also opens avenues for more sophisticated and reliable AI-driven information systems.
