Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All (2211.15199v2)

Published 28 Nov 2022 in cs.CL

Abstract: We present a new pre-trained language model (PLM) for modern Hebrew, termed AlephBERTGimmel, which employs a much larger vocabulary (128K items) than standard Hebrew PLMs before it. We perform a contrastive analysis of this model against all previous Hebrew PLMs (mBERT, heBERT, AlephBERT) and assess the effects of larger vocabularies on task performance. Our experiments show that larger vocabularies lead to fewer splits, and that reducing splits improves model performance across different tasks. All in all, this new model achieves a new SOTA on all available Hebrew benchmarks, including Morphological Segmentation, POS Tagging, Full Morphological Analysis, NER, and Sentiment Analysis. Subsequently we advocate for PLMs that are larger not only in terms of number of layers or training data, but also in terms of their vocabulary. We release the new model publicly for unrestricted use.
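
The core claim is that a larger subword vocabulary splits Hebrew words into fewer pieces, and that fewer splits correlate with better downstream performance. Below is a minimal sketch of how one could inspect that tokenization effect with the Hugging Face `transformers` library; the model identifiers and the example sentence are illustrative assumptions, not taken from the paper, so substitute the IDs under which the models are actually published.

```python
# Compare how many word pieces two Hebrew PLM tokenizers produce for the
# same sentence: the smaller-vocabulary AlephBERT vs. the 128K-vocabulary
# AlephBERTGimmel described in the paper.
from transformers import AutoTokenizer

MODELS = {
    "AlephBERT (smaller vocab)": "onlplab/alephbert-base",          # assumed model ID
    "AlephBERTGimmel (128K vocab)": "dicta-il/alephbertgimmel-base",  # assumed model ID
}

sentence = "הקפה בבית הקפה היה טעים מאוד"  # a short Modern Hebrew sentence

for name, model_id in MODELS.items():
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokens = tokenizer.tokenize(sentence)
    # Per the paper's analysis, fewer subword splits per word generally
    # corresponds to better task performance.
    print(f"{name}: {len(tokens)} tokens -> {tokens}")
```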

Authors (9)
  1. Eylon Gueta (2 papers)
  2. Avi Shmidman (13 papers)
  3. Shaltiel Shmidman (10 papers)
  4. Cheyn Shmuel Shmidman (3 papers)
  5. Joshua Guedalia (3 papers)
  6. Moshe Koppel (16 papers)
  7. Dan Bareket (6 papers)
  8. Amit Seker (4 papers)
  9. Reut Tsarfaty (54 papers)
Citations (14)