A Data Bootstrapping Recipe for Low Resource Multilingual Relation Classification (2110.09570v1)

Published 18 Oct 2021 in cs.CL

Abstract: Relation classification (sometimes called 'extraction') requires trustworthy datasets for fine-tuning LLMs, as well as for evaluation. Data collection is challenging for Indian languages, because they are syntactically and morphologically diverse, as well as different from resource-rich languages like English. Despite recent interest in deep generative models for Indian languages, relation classification is still not well served by public datasets. In response, we present IndoRE, a dataset with 21K entity- and relation-tagged gold sentences in three Indian languages, plus English. We start with a multilingual BERT (mBERT) based system that captures entity span positions and type information and provides competitive monolingual relation classification. Using this system, we explore and compare transfer mechanisms between languages. In particular, we study the accuracy-efficiency tradeoff between expensive gold instances and translated and aligned 'silver' instances. We release the dataset for future research.

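The abstract mentions an mBERT-based system that uses entity span positions and type information for relation classification. Below is a minimal, hypothetical sketch of one common way to build such a classifier with Hugging Face Transformers; the entity-marker tokens, the relation count, and the classification head are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' exact architecture):
# mBERT encoder + entity-marker tokens whose hidden states feed a relation classifier.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"          # mBERT, as named in the abstract
ENTITY_MARKERS = ["[E1]", "[/E1]", "[E2]", "[/E2]"]  # hypothetical marker scheme

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.add_special_tokens({"additional_special_tokens": ENTITY_MARKERS})

class MBertRelationClassifier(nn.Module):
    def __init__(self, num_relations: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        self.encoder.resize_token_embeddings(len(tokenizer))  # account for new markers
        hidden = self.encoder.config.hidden_size
        # Concatenate the hidden states of the two entity-start markers.
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, input_ids, attention_mask, e1_pos, e2_pos):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.last_hidden_state            # (batch, seq_len, hidden)
        batch_idx = torch.arange(input_ids.size(0))
        e1_vec = hidden_states[batch_idx, e1_pos]        # state at the [E1] marker
        e2_vec = hidden_states[batch_idx, e2_pos]        # state at the [E2] marker
        return self.classifier(torch.cat([e1_vec, e2_vec], dim=-1))

# Example: one sentence with the two entity mentions marked.
text = "[E1] Rabindranath Tagore [/E1] was born in [E2] Kolkata [/E2] ."
enc = tokenizer(text, return_tensors="pt")
e1_pos = (enc.input_ids[0] == tokenizer.convert_tokens_to_ids("[E1]")).nonzero()[0]
e2_pos = (enc.input_ids[0] == tokenizer.convert_tokens_to_ids("[E2]")).nonzero()[0]

model = MBertRelationClassifier(num_relations=51)  # relation count is an assumption
logits = model(enc.input_ids, enc.attention_mask, e1_pos, e2_pos)
print(logits.shape)  # torch.Size([1, 51])
```

Marking entity boundaries with special tokens and classifying from the marker states is one standard way to inject span position (and, with typed markers, entity type) information into a BERT-style relation classifier; the paper's exact formulation may differ.
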
Authors (5)
  1. Arijit Nag (3 papers)
  2. Bidisha Samanta (14 papers)
  3. Animesh Mukherjee (154 papers)
  4. Niloy Ganguly (95 papers)
  5. Soumen Chakrabarti (52 papers)
Citations (12)
