Doppelgänger's Watch: A Split Objective Approach to Large Language Models
Published 9 Sep 2024 in cs.CL and cs.AI | (2409.06107v1)
Abstract: In this paper, we investigate the problem of "generation supervision" in LLMs and present a novel bicameral architecture that separates supervision signals from the model's core capability, helpfulness. Doppelgänger, a new module running in parallel with the underlying LLM, supervises the generation of each token and learns to concurrently predict the supervision score(s) of the sequence up to and including that token. In this work, we present the theoretical findings and leave the report on experimental results to a forthcoming publication.
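The paper defers architectural and experimental details to a forthcoming publication, so the following is only a rough illustration of the stated idea: a module, parallel to the base LLM, that maps each token position's hidden state to a supervision score for the prefix ending at that token. Every name, dimension, and the linear-plus-sigmoid scoring form here is an assumption for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def supervision_head(hidden_states, W, b):
    """Hypothetical parallel supervision head.

    Maps each token's hidden state to a score in (0, 1) for the
    prefix ending at that token, via a linear projection + sigmoid.
    hidden_states: (seq_len, d_model) array of per-token states.
    Returns: (seq_len,) array of per-prefix supervision scores.
    """
    logits = hidden_states @ W + b        # (seq_len,)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> scores in (0, 1)

# Illustrative sizes; the paper does not specify any dimensions.
seq_len, d_model = 5, 16
hidden = rng.normal(size=(seq_len, d_model))  # stand-in for LLM hidden states
W = rng.normal(size=d_model) * 0.1
b = 0.0

scores = supervision_head(hidden, W, b)
print(scores.shape)  # one supervision score per token prefix
```

Because the head reads hidden states rather than altering the LLM's output distribution directly, the supervision signal stays separate from the generation pathway, which is the separation the abstract describes.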