Summary: Researchers from the Stanford Center for Research on Foundation Models have released an open-source language model called Alpaca, which is fine-tuned from Meta's LLaMA 7B model and trained on 52,000 instruction-following demonstrations generated in the style of self-instruct using OpenAI's text-davinci-003. The release of Alpaca is intended to enable the academic community to perform controlled scientific studies of instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies of these models.