Supervised Contextual Embeddings for Transfer Learning in Natural Language Processing Tasks (1906.12039v1)

Published 28 Jun 2019 in cs.CL and cs.LG

Abstract: Pre-trained word embeddings are the primary method for transfer learning in several NLP tasks. Recent works have focused on using unsupervised techniques such as language modeling to obtain these embeddings. In contrast, this work focuses on extracting representations from multiple pre-trained supervised models, which enriches word embeddings with task and domain specific knowledge. Experiments performed in cross-task, cross-domain and cross-lingual settings indicate that such supervised embeddings are helpful, especially in the low-resource setting, but the extent of gains is dependent on the nature of the task and domain. We make our code publicly available.
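The idea in the abstract can be sketched as follows: contextual hidden states from one or more frozen, pre-trained supervised source-task models are concatenated with ordinary word embeddings before being passed to the target-task model. The PyTorch class below is a minimal illustration under that reading; the class name, the encoder interface, and the assumption that source models share the target vocabulary are ours, not the authors' released implementation.

```python
# Minimal sketch (not the authors' released code) of supervised contextual
# embeddings: hidden states from frozen, pre-trained supervised source-task
# models are concatenated with ordinary word embeddings before the
# target-task model consumes them.
import torch
import torch.nn as nn

class SupervisedContextualEmbedder(nn.Module):
    def __init__(self, base_embed: nn.Embedding, source_models: list):
        super().__init__()
        self.base_embed = base_embed               # pre-trained word embeddings
        self.source_models = nn.ModuleList(source_models)
        for model in self.source_models:           # source-task models stay frozen
            for p in model.parameters():
                p.requires_grad = False

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len); assumes source models share this vocabulary
        parts = [self.base_embed(token_ids)]       # (batch, seq_len, embed_dim)
        with torch.no_grad():
            for model in self.source_models:
                # each frozen source-task model yields its own contextual
                # states of shape (batch, seq_len, model_dim)
                parts.append(model(token_ids))
        # the concatenated representation feeds the target-task model,
        # e.g. a BiLSTM tagger or a sentence classifier
        return torch.cat(parts, dim=-1)
```

In this reading, only the downstream target-task model is trained, which matches the low-resource motivation: the supervised source models contribute task- and domain-specific features without adding trainable parameters.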

Authors (6)
  1. Mihir Kale (18 papers)
  2. Aditya Siddhant (22 papers)
  3. Sreyashi Nag (16 papers)
  4. Radhika Parik (2 papers)
  5. Matthias Grabmair (33 papers)
  6. Anthony Tomasic (8 papers)
Citations (5)
