Open Sentence Embeddings for Portuguese with the Serafim PT* encoders family (2407.19527v1)

Published 28 Jul 2024 in cs.CL

Abstract: Sentence encoders encode the semantics of their input, enabling key downstream applications such as classification, clustering, or retrieval. In this paper, we present Serafim PT*, a family of open-source sentence encoders for Portuguese with various sizes, suited to different hardware/compute budgets. Each model exhibits state-of-the-art performance and is made openly available under a permissive license, allowing its use for both commercial and research purposes. Besides the sentence encoders, this paper contributes a systematic study and lessons learned concerning the selection criteria of learning objectives and parameters that support top-performing encoders.

Authors (5)
  1. Luís Gomes (7 papers)
  2. António Branco (14 papers)
  3. João Silva (10 papers)
  4. João Rodrigues (17 papers)
  5. Rodrigo Santos (10 papers)

Summary

Open Sentence Embeddings for Portuguese with the Serafim PT* Encoders Family

The paper "Open Sentence Embeddings for Portuguese with the Serafim PT* Encoders Family" introduces a comprehensive family of open-source sentence encoders specifically designed for the Portuguese language, encompassing both European and Brazilian dialects. This work addresses the notable gap in language-specific NLP resources by presenting a high-performing suite of encoders suitable for key NLP tasks such as semantic text similarity (STS) and information retrieval (IR).

Overview and Context

In NLP, sentence embeddings, vector representations of whole sentences, are pivotal for tasks that require semantic understanding, such as classification, clustering, and retrieval. Despite the field's broader focus on English, there remains a dearth of competitive sentence encoders tailored to less-resourced languages, including Portuguese; existing Portuguese encoders have been suboptimal, motivating the development of more robust models.
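
As a quick illustration of what such an encoder provides, the sketch below embeds Portuguese sentences and compares them by cosine similarity using the sentence-transformers library. The model identifier is an assumption for illustration; consult the authors' released checkpoints for the actual IDs.

```python
# Minimal sketch: embedding Portuguese sentences and comparing them
# with cosine similarity. The model ID below is a placeholder; check
# the authors' Hugging Face page for the released Serafim checkpoints.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("PORTULAN/serafim-100m-portuguese-pt-sentence-encoder")  # hypothetical ID

sentences = [
    "O gato dorme no sofá.",          # "The cat sleeps on the sofa."
    "Um felino descansa no sofá.",    # "A feline rests on the sofa."
    "A bolsa de valores caiu hoje.",  # "The stock market fell today."
]

embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; semantically close sentences score higher.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```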

Contributions of the Paper

The primary contribution of this paper is the Serafim encoder family, which comes in three sizes: 100 million, 335 million, and 900 million parameters. Built on state-of-the-art foundation models for Portuguese, Albertina and BERTimbau, these encoders set new benchmarks for Portuguese sentence encoding. The models are fine-tuned for core STS and IR benchmarks, improving considerably over previous Portuguese-specific encoders.

The Serafim encoders improve on the prior state of the art across diverse STS datasets, with Spearman correlation gains of 0.02 to 0.04 over previous models, and achieve a mean reciprocal rank (MRR) above 0.80 in IR tasks, outperforming both earlier Portuguese-specific encoders and encoders dedicated to English.
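
For reference, both reported measures are simple to compute; below is a minimal sketch on toy data, not the paper's benchmark sets.

```python
# Minimal sketch of the two metrics reported in the paper, on toy data.
from scipy.stats import spearmanr

# STS: Spearman correlation between model similarity scores and gold labels.
predicted = [0.91, 0.15, 0.55, 0.78]
gold      = [5.0, 1.0, 3.0, 4.0]
rho, _ = spearmanr(predicted, gold)
print(f"Spearman rho: {rho:.3f}")

# IR: mean reciprocal rank over queries, given the 1-based rank at which
# the first relevant document appears for each query (0 = not retrieved).
def mean_reciprocal_rank(first_relevant_ranks):
    return sum(1.0 / r for r in first_relevant_ranks if r > 0) / len(first_relevant_ranks)

print(f"MRR: {mean_reciprocal_rank([1, 2, 1, 4]):.3f}")  # -> 0.688
```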

Methodological Insights

The paper describes a sequential training methodology that combines unsupervised and supervised learning to optimize the encoders. The models first undergo unsupervised training with Contrastive Tension loss using in-batch negatives, and are then fine-tuned with supervised objectives such as the CoSENT and AnglE losses. This staged recipe lets the encoders learn from large amounts of raw text before being specialized on labeled sentence pairs, helping them capture both general linguistic nuance and variant-specific usage.
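
As an illustration of this two-stage recipe, the sketch below chains the named losses using their sentence-transformers implementations. The base model, data, and hyperparameters are placeholders, not the paper's actual settings.

```python
# Sketch of the two-stage recipe described above, using the
# sentence-transformers implementations of the named losses.
# Data and hyperparameters are illustrative, not the paper's settings.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, util

model = SentenceTransformer("neuralmind/bert-base-portuguese-cased")  # BERTimbau base

# Stage 1: unsupervised Contrastive Tension with in-batch negatives.
# Each example pairs a sentence with itself; the other sentences in the
# batch serve as negatives.
raw_sentences = ["Uma frase em português.", "Outra frase qualquer."]  # e.g. OPUS text
ct_examples = [InputExample(texts=[s, s]) for s in raw_sentences]
ct_loader = DataLoader(ct_examples, shuffle=True, batch_size=2)
ct_loss = losses.ContrastiveTensionLossInBatchNegatives(
    model, scale=20.0, similarity_fct=util.cos_sim
)
model.fit(train_objectives=[(ct_loader, ct_loss)], epochs=1)

# Stage 2: supervised fine-tuning with CoSENT on scored sentence pairs
# (AnglE is a drop-in alternative: losses.AnglELoss).
sts_examples = [
    InputExample(texts=["O gato dorme.", "Um felino descansa."], label=0.8),
    InputExample(texts=["O gato dorme.", "A bolsa caiu."], label=0.1),
]
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=2)
model.fit(train_objectives=[(sts_loader, losses.CoSENTLoss(model))], epochs=1)
```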

Critically, the authors highlight the importance of comprehensive training data, fine-tuning the models on datasets such as ASSIN, ASSIN2, and IRIS-STS. The unsupervised phase, in turn, draws on extensive high-quality raw text corpora from OPUS.
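
For the supervised stage, scored sentence pairs such as those in ASSIN 2 can be mapped to training examples. A minimal sketch follows, assuming the dataset is available on the Hugging Face hub under the ID and field names shown; verify both against the dataset card before use.

```python
# Sketch: turning ASSIN 2 sentence pairs into supervised training
# examples. Dataset ID and field names are assumptions based on the
# Hugging Face hub's "assin2" card; verify them before use.
from datasets import load_dataset
from sentence_transformers import InputExample

assin2 = load_dataset("assin2", split="train")

train_examples = [
    InputExample(
        texts=[row["premise"], row["hypothesis"]],
        label=row["relatedness_score"] / 5.0,  # normalize 1-5 scores to [0, 1]
    )
    for row in assin2
]
print(len(train_examples), "training pairs")
```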

Implications and Future Directions

The release of Serafim models is a significant step towards bridging the linguistic gap in NLP, offering resources for practical application in semantic tasks for Portuguese. The models' open-source nature will facilitate research and commercial uses, potentially stimulating further scientific exploration and industrial deployment.

Looking forward, data augmentation techniques could supplement the relatively limited Portuguese datasets and further improve performance. Future research on cross-lingual transfer and adaptation may yield even more refined models that perform well across other variants and languages.

In conclusion, the Serafim PT* family represents a valuable contribution to Portuguese NLP resources, offering robust and versatile tools for sentence encoding across semantic and retrieval tasks. The openness of these models paves the way for continued innovation and improvement in the field.
