
$k$NN-Adapter: Efficient Domain Adaptation for Black-Box Language Models

Published 21 Feb 2023 in cs.CL (arXiv:2302.10879v1)

Abstract: Fine-tuning a language model on a new domain is standard practice for domain adaptation. However, this can be infeasible for modern large-scale LLMs such as GPT-3, which are accessible only through APIs, leaving their internal parameters out of reach. In this paper, we propose $k$NN-Adapter, a method to effectively adapt these black-box LLMs to a new domain. The $k$NN-Adapter builds on the retrieval-augmented LLM and adaptively learns to interpolate the output of the LLM with retrieval results from a datastore built from target-domain data. Our experiments on four different domains demonstrate that $k$NN-Adapter significantly improves perplexity and works particularly well in settings with limited access to LLMs. Additionally, we show that $k$NN-Adapter is more effective than fine-tuning when the amount of training data is limited. We also release a dataset to encourage further study.
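The core mechanism the abstract describes is interpolating the black-box LM's next-token distribution with a distribution derived from nearest-neighbor retrieval over a target-domain datastore, in the style of $k$NN-LM. The sketch below illustrates this under stated assumptions: the function names (`knn_probs`, `knn_adapter`) are hypothetical, the interpolation weight `lam` is a fixed scalar here for clarity, whereas the paper learns it adaptively, and the datastore is a toy array of (context embedding, next-token id) pairs rather than a real index.

```python
import numpy as np

def knn_probs(query, keys, values, vocab_size, k=4, temperature=1.0):
    """Build a next-token distribution from the k nearest datastore entries.

    keys:   (N, d) array of context embeddings stored in the datastore
    values: (N,)   array of the token id that followed each context
    """
    dists = np.linalg.norm(keys - query, axis=1)  # L2 distance to each key
    nn = np.argsort(dists)[:k]                    # indices of the k nearest
    weights = np.exp(-dists[nn] / temperature)    # closer neighbors weigh more
    weights /= weights.sum()
    probs = np.zeros(vocab_size)
    for idx, w in zip(nn, weights):
        probs[values[idx]] += w                   # aggregate weight per token id
    return probs

def knn_adapter(p_lm, p_knn, lam):
    """Interpolate the LM and kNN distributions; lam in [0, 1].

    In the paper lam is learned adaptively; a fixed value is used here
    purely for illustration.
    """
    return lam * p_knn + (1.0 - lam) * p_lm

# Toy usage: a 5-token vocabulary and a 4-entry datastore.
keys = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
values = np.array([2, 3, 2, 1])       # token ids observed after each context
query = np.array([0.1, 0.0])          # embedding of the current context
p_lm = np.full(5, 0.2)                # black-box LM output (uniform toy case)

p_knn = knn_probs(query, keys, values, vocab_size=5, k=3)
p_final = knn_adapter(p_lm, p_knn, lam=0.5)
```

Because retrieval only reweights tokens actually observed in the target domain, the interpolation shifts probability mass toward domain-typical continuations while the LM term keeps the distribution well-formed over the full vocabulary.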

Citations (12)
