Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey (2305.18703v7)

Published 30 May 2023 in cs.CL and cs.AI

Abstract: LLMs have significantly advanced the field of NLP, providing a highly useful, task-agnostic foundation for a wide range of applications. However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles, caused by the heterogeneity of domain data, the sophistication of domain knowledge, the uniqueness of domain objectives, and the diversity of the constraints (e.g., various social norms, cultural conformity, religious beliefs, and ethical standards in the domain applications). Domain specification techniques are key to make LLMs disruptive in many applications. Specifically, to solve these hurdles, there has been a notable increase in research and practices conducted in recent years on the domain specialization of LLMs. This emerging field of study, with its substantial potential for impact, necessitates a comprehensive and systematic review to better summarize and guide ongoing work in this area. In this article, we present a comprehensive survey on domain specification techniques for LLMs, an emerging direction critical for LLM applications. First, we propose a systematic taxonomy that categorizes the LLM domain-specialization techniques based on the accessibility to LLMs and summarizes the framework for all the subcategories as well as their relations and differences to each other. Second, we present an extensive taxonomy of critical application domains that can benefit dramatically from specialized LLMs, discussing their practical significance and open challenges. Last, we offer our insights into the current research status and future trends in this area.

Domain Specialization as the Key to Make LLMs Disruptive: A Comprehensive Survey

This paper presents a thorough survey of approaches and techniques developed to adapt LLMs for domain specialization, aiming to overcome the challenges associated with applying generic LLMs to specific domain tasks. The challenges explored are attributed to the heterogeneity of domain data, the intricacy of domain knowledge, unique domain objectives, and diverse constraints such as cultural and ethical norms, which inhibit the direct application of LLMs to domain-specific problems.

The authors propose a comprehensive taxonomy of domain-specialization techniques, categorized by the degree of access to the LLM: black-box, grey-box, and white-box methods. This classification organizes the techniques by the level of model access they require, from no access to internal parameters (black-box), through partial access (grey-box), to full parameter access (white-box).

  1. External Augmentation (Black-Box Approaches): This category enriches LLMs externally, without altering any internal parameters. Techniques include augmenting the model with explicit domain knowledge drawn from external data sources or domain-specific knowledge bases, and with implicit knowledge via memory augmentation. The role of domain tools, accessed through external APIs, in supplementing LLM performance is also explored (see the retrieval-augmentation sketch after this list).
  2. Prompt Crafting (Grey-Box Approaches): Techniques in this category design prompts that instruct LLMs to apply domain knowledge effectively. The methods are further divided into discrete and continuous prompts, with guidance ranging from hand-written natural-language instructions to learnable prompt embeddings (see the soft-prompt sketch after this list).
  3. Model Fine-Tuning (White-Box Approaches): This approach requires direct access to the LLM's parameters. It includes adapter-based fine-tuning, in which small additional layers or modules are trained for the domain task without retraining the full model, and task-oriented fine-tuning, which updates selected model parameters to optimize domain-specific objectives (see the adapter-style sketch after this list).
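
To make the black-box setting in item 1 concrete, below is a minimal sketch of retrieval-augmented prompting: candidate domain passages are scored against the query (here with a toy keyword-overlap retriever, chosen only so the example runs offline; a real system would use dense embeddings) and the best passages are spliced into the prompt before it is sent, unchanged, to a hosted model. The `call_llm` function is a hypothetical placeholder for whatever completion API is actually used.

```python
# Sketch of black-box external augmentation: retrieve domain text, splice it
# into the prompt, and query an unmodified LLM through its API.
# `call_llm` is a hypothetical placeholder, not a real library function.

def overlap_score(query: str, doc: str) -> int:
    """Toy relevance score: count query terms that appear in the passage."""
    terms = set(query.lower().split())
    return sum(1 for word in doc.lower().split() if word in terms)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the domain context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    corpus = [
        "Warfarin dosing is adjusted to keep INR between 2.0 and 3.0.",
        "Basel III sets minimum capital requirements for banks.",
        "The statute of limitations varies by jurisdiction and claim type.",
    ]
    query = "What INR range is targeted when dosing warfarin?"
    prompt = build_prompt(query, retrieve(query, corpus))
    print(prompt)                      # inspect the augmented prompt
    # answer = call_llm(prompt)        # hypothetical black-box API call
```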
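
For the grey-box setting in item 2, the following sketch illustrates continuous ("soft") prompt tuning: a small block of learnable prompt vectors is prepended to the frozen model's input embeddings, and only those vectors (plus a small task head here) receive gradient updates. A tiny randomly initialized encoder stands in for a real LLM so the sketch stays self-contained; all shapes and hyperparameters are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Sketch of continuous prompt tuning: the backbone is frozen and only the
# soft-prompt vectors (and a small task head) are trained.

class SoftPromptModel(nn.Module):
    def __init__(self, vocab_size=100, d_model=32, prompt_len=8, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)          # frozen token embeddings
        self.backbone = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.head = nn.Linear(d_model, num_classes)
        # Learnable soft prompt: prompt_len continuous vectors, not discrete tokens.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        for p in list(self.embed.parameters()) + list(self.backbone.parameters()):
            p.requires_grad = False                              # freeze the stand-in "LLM"

    def forward(self, input_ids):
        tokens = self.embed(input_ids)                           # (batch, seq, d_model)
        prompt = self.soft_prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
        hidden = self.backbone(torch.cat([prompt, tokens], dim=1))
        return self.head(hidden.mean(dim=1))

model = SoftPromptModel()
optimizer = torch.optim.Adam([model.soft_prompt] + list(model.head.parameters()), lr=1e-3)
x = torch.randint(0, 100, (4, 16))                               # toy batch of token ids
y = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()                                                 # updates prompt and head only
print(f"one toy tuning step done, loss = {loss.item():.3f}")
```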
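
For the white-box setting in item 3, this last sketch shows the core of adapter-style fine-tuning using a LoRA-like low-rank update: the pretrained linear layer is frozen and a small trainable correction B·A is added to its output, so only a tiny fraction of parameters is learned for the domain task. The layer sizes and rank below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of white-box adapter fine-tuning (LoRA-style low-rank update).
# The pretrained weight is frozen; only the small A and B matrices are trained.

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # keep pretrained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Output = frozen W x + scale * B(Ax); the low-rank term is the domain adapter.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

pretrained = nn.Linear(64, 64)                       # stand-in for one projection in an LLM
adapted = LoRALinear(pretrained, rank=4)
optimizer = torch.optim.AdamW([adapted.A, adapted.B], lr=1e-3)

x = torch.randn(8, 64)                               # toy domain batch
target = torch.randn(8, 64)
loss = nn.functional.mse_loss(adapted(x), target)
loss.backward()
optimizer.step()
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable}")  # a small fraction of the base layer
```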

The survey underscores the potential of these techniques to transform general-purpose LLMs into specialized tools capable of solving domain-specific problems. It highlights the principal advantages and limitations of each method, thereby providing a guide for choosing an appropriate domain-specialization strategy.

The paper also explores applications across diverse fields, including biomedicine, finance, law, and natural sciences, demonstrating the significance and impact of specialized LLMs in enhancing task performance. The discussion extends to the implications of LLM domain specialization on theoretical and practical AI advancements.

The authors speculate on future research avenues, emphasizing the evolution of hybrid approaches that could integrate multiple specialization techniques, enhancing adaptability to new domains. The paper suggests that future developments may focus on the seamless integration of domain-specific knowledge, broader automation in prompt and instruction crafting, and improved interaction interfaces with domain tools.

In conclusion, the survey provides an insightful mapping of the current state of domain specialization techniques for LLMs, charting a course for future advancements that harness domain specificity to effectively leverage LLMs' capabilities across a host of specialized applications.

Authors (24)
  1. Chen Ling (65 papers)
  2. Xujiang Zhao (26 papers)
  3. Jiaying Lu (22 papers)
  4. Chengyuan Deng (18 papers)
  5. Can Zheng (9 papers)
  6. Junxiang Wang (35 papers)
  7. Tanmoy Chowdhury (9 papers)
  8. Yun Li (154 papers)
  9. Hejie Cui (33 papers)
  10. Xuchao Zhang (44 papers)
  11. Tianjiao Zhao (2 papers)
  12. Amit Panalkar (1 paper)
  13. Wei Cheng (175 papers)
  14. Haoyu Wang (309 papers)
  15. Yanchi Liu (41 papers)
  16. Zhengzhang Chen (32 papers)
  17. Haifeng Chen (99 papers)
  18. Chris White (7 papers)
  19. Quanquan Gu (198 papers)
  20. Jian Pei (104 papers)
Citations (97)