Domain-Specific Pretraining of Language Models: A Comparative Study in the Medical Field (2407.14076v2)

Published 19 Jul 2024 in cs.LG, cs.AI, and cs.CL

Abstract: There are many cases where LLMs are used for specific tasks in a single domain. Such tasks usually require less general but more domain-specific knowledge. Highly capable, general-purpose state-of-the-art LLMs like GPT-4 or Claude-3-opus can often be used for these tasks, but they are very large and, even if they were not proprietary, could not be run locally. This can be a problem when working with sensitive data. This paper focuses on domain-specific and mixed-domain pretraining as potentially more efficient methods than general pretraining for specialized LLMs. We review work related to domain-specific pretraining, specifically in the medical area, and compare benchmark results of specialized LLMs against general-purpose LLMs.

Authors (1)
  1. Tobias Kerner (1 paper)
Citations (1)