Domain-Specific Pretraining of Language Models: A Comparative Study in the Medical Field (2407.14076v2)
Published 19 Jul 2024 in cs.LG, cs.AI, and cs.CL
Abstract: LLMs are often used for specific tasks within a single domain. Such tasks typically require less general, but more domain-specific, knowledge. Highly capable, general-purpose state-of-the-art LLMs like GPT-4 or Claude-3-opus can often handle these tasks, but they are very large and cannot be run locally, even setting aside that they are proprietary. This can be a problem when working with sensitive data. This paper focuses on domain-specific and mixed-domain pretraining as potentially more efficient methods than general pretraining for specialized LLMs. We review work related to domain-specific pretraining, specifically in the medical area, and compare benchmark results of specialized LLMs against general-purpose LLMs.
- Tobias Kerner