Anatomy of Neural Language Models (2401.03797v2)

Published 8 Jan 2024 in cs.CL and cs.LG

Abstract: The fields of generative AI and transfer learning have experienced remarkable advancements in recent years, especially in the domain of NLP. Transformers have been at the heart of these advancements, where cutting-edge transformer-based language models (LMs) have led to new state-of-the-art results across a wide spectrum of applications. While the number of research works involving neural LMs is increasing exponentially, the vast majority of them are high-level and far from self-contained. Consequently, developing a deep understanding of the literature in this area is difficult, especially in the absence of a unified mathematical framework explaining the main types of neural LMs. We address this problem in this tutorial, whose objective is to explain neural LMs in a detailed, simplified, and unambiguous mathematical framework accompanied by clear graphical illustrations. Concrete examples of widely used models such as BERT and GPT2 are explored. Finally, since transformers pretrained on language-modeling-like tasks have been widely adopted in computer vision and time series applications, we briefly explore examples of such solutions so that readers can understand how transformers work in those domains and compare that use with the original one in NLP.
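
As an illustration of the two model families the abstract singles out, the sketch below contrasts a masked LM (BERT) with a causal LM (GPT2). It is a minimal example, not drawn from the paper itself, and assumes the Hugging Face `transformers` library, `torch`, and the public `bert-base-uncased` and `gpt2` checkpoints.

```python
# Minimal sketch (not from the paper) of the two LM types the tutorial covers:
# masked language modeling (BERT) and causal language modeling (GPT2).
# Assumes the Hugging Face `transformers` library and `torch` are installed.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModelForCausalLM

# Masked LM: BERT predicts the token hidden behind [MASK],
# attending to context on both sides of the blank.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
enc = bert_tok("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = bert(**enc).logits
mask_pos = (enc.input_ids == bert_tok.mask_token_id).nonzero(as_tuple=True)[1]
print(bert_tok.decode(logits[0, mask_pos].argmax(dim=-1)))  # likely "paris"

# Causal LM: GPT2 predicts the next token left to right,
# attending only to the tokens that precede it.
gpt_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
ids = gpt_tok("The capital of France is", return_tensors="pt")
out = gpt2.generate(**ids, max_new_tokens=5, do_sample=False)
print(gpt_tok.decode(out[0]))
```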

Authors (2)
  1. Majd Saleh (2 papers)
  2. Stéphane Paquelet (11 papers)
Citations (1)
