Comparison of Pre-trained Language Models for Turkish Address Parsing (2306.13947v1)

Published 24 Jun 2023 in cs.CL and cs.LG

Abstract: Transformer-based pre-trained models such as BERT and its variants, trained on large corpora, have demonstrated tremendous success on NLP tasks. Most academic work is based on the English language; however, the number of multilingual and language-specific studies is growing steadily. Furthermore, several studies have claimed that language-specific models outperform multilingual models on various tasks, so the community tends to train or fine-tune models specifically for the language of its case study. In this paper, we focus on Turkish maps data and thoroughly evaluate both multilingual and Turkish-specific BERT, DistilBERT, ELECTRA, and RoBERTa models. In addition to the standard one-layer fine-tuning approach, we propose a MultiLayer Perceptron (MLP) head for fine-tuning BERT. For the dataset, a mid-sized, relatively high-quality address parsing corpus is constructed. Experiments on this dataset indicate that Turkish language-specific models with MLP fine-tuning yield slightly better results than the fine-tuned multilingual models. Moreover, visualization of address token representations further indicates the effectiveness of BERT variants for classifying a variety of addresses.
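The abstract's key modeling idea is to swap the standard one-layer token-classification head for a small MLP on top of a Turkish BERT encoder. The sketch below illustrates that idea only; it is not the authors' code, and the checkpoint name (`dbmdz/bert-base-turkish-cased`), label count, hidden size, and dropout rate are illustrative assumptions.

```python
# Minimal sketch: BERT encoder + MLP head for per-token address-field tagging.
# Checkpoint, num_labels, mlp_hidden, and dropout are assumptions, not the paper's settings.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertMLPTagger(nn.Module):
    def __init__(self, model_name="dbmdz/bert-base-turkish-cased",
                 num_labels=9, mlp_hidden=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # MLP head in place of the usual single linear classification layer
        self.head = nn.Sequential(
            nn.Linear(hidden, mlp_hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(mlp_hidden, num_labels),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state)  # (batch, seq_len, num_labels)

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = BertMLPTagger()
batch = tokenizer(["Atatürk Caddesi No:12 Kadıköy İstanbul"], return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
```

Training would proceed as ordinary token classification (cross-entropy over per-token logits against address-component labels such as street, number, district, city).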

Authors (3)
  1. Muhammed Cihat Ünal (1 paper)
  2. Aydın Gerek (2 papers)
  3. Betül Aygün (1 paper)
Citations (3)