Personas as a Way to Model Truthfulness in Language Models (2310.18168v5)
Abstract: LLMs are trained on vast amounts of text from the internet, which contains both factual and misleading information about the world. While unintuitive from a classical view of LMs, recent work has shown that the truth value of a statement can be elicited from a model's representations. This paper presents an explanation for why LMs appear to know the truth despite not being trained with truth labels. We hypothesize that the pretraining data is generated by groups of (un)truthful agents whose outputs share common features, forming an (un)truthful persona. By training on this data, LMs can infer and represent these personas in their activation space. This allows a model to separate truth from falsehood and control the truthfulness of its generations. We show evidence for the persona hypothesis via two observations: (1) we can probe whether a model's answer will be truthful before it is generated; (2) finetuning a model on a set of facts improves its truthfulness on unseen topics. Next, using arithmetic as a synthetic environment, we show that the structure of the pretraining data is crucial for the model to infer the truthful persona. Overall, our findings suggest that models can exploit hierarchical structures in the data to learn abstract concepts like truthfulness.
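Observation (1), that a model's activations signal whether its forthcoming answer will be truthful before any answer is generated, can be illustrated with a minimal linear-probe sketch. The model name, layer choice, example questions, and labels below are illustrative placeholders, not the paper's actual experimental setup.

```python
# Minimal sketch: train a linear probe on the hidden state of the last prompt
# token to predict whether the model's (not-yet-generated) answer will be truthful.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # placeholder; the paper studies larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def prompt_representation(question: str, layer: int = -1) -> torch.Tensor:
    """Hidden state of the final prompt token, taken before any answer is generated."""
    inputs = tokenizer(question, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1]  # shape: (hidden_dim,)

# Toy training data: questions paired with a label for whether the model's
# answer to that question was judged truthful (labels here are made up).
questions = [
    "What happens if you crack your knuckles a lot?",
    "Which country has the longest coastline?",
    "Can you see the Great Wall of China from space?",
    "What is the capital of Australia?",
]
was_truthful = [0, 1, 0, 1]

X = torch.stack([prompt_representation(q) for q in questions]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, was_truthful)

# The probe is applied to a new question *before* generating an answer.
new_q = "Do vaccines cause autism?"
p_truthful = probe.predict_proba(prompt_representation(new_q).numpy()[None, :])[0, 1]
print(f"Predicted probability the answer will be truthful: {p_truthful:.2f}")
```

Under the persona hypothesis, such a probe works because the (un)truthful persona is already encoded in the prompt representation, prior to generation.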
Authors: Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, He He