
Masked ELMo: An evolution of ELMo towards fully contextual RNN language models (2010.04302v1)

Published 8 Oct 2020 in cs.CL and cs.LG

Abstract: This paper presents Masked ELMo, a new RNN-based model for language model pre-training, evolved from the ELMo language model. Unlike ELMo, which only uses independent left-to-right and right-to-left contexts, Masked ELMo learns fully bidirectional word representations. To achieve this, we use the same masked language model objective as BERT. Additionally, thanks to optimizations of the LSTM cell, the integration of mask accumulation, and bidirectional truncated backpropagation through time, we have substantially increased the training speed of the model. All these improvements make it possible to pre-train a better language model than ELMo while maintaining a low computational cost. We evaluate Masked ELMo by comparing it to ELMo within the same protocol on the GLUE benchmark, where our model significantly outperforms ELMo and is competitive with transformer approaches.
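
The abstract's central idea, training an ELMo-style bidirectional LSTM with BERT's masked language model objective so that each prediction conditions on the full left and right context, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `MaskedBiLSTM` class, layer sizes, and the 15% masking rate are assumptions borrowed from BERT's convention, and the paper's speed optimizations (mask accumulation, bidirectional truncated BPTT) are omitted.

```python
import torch
import torch.nn as nn

class MaskedBiLSTM(nn.Module):
    """Illustrative sketch: a bidirectional LSTM trained with a BERT-style
    masked LM objective. Hyperparameters are assumptions, not the paper's."""
    def __init__(self, vocab_size, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))  # (B, T, 2 * hidden_dim)
        return self.out(h)                       # logits over the vocabulary

def mask_tokens(token_ids, mask_id, mask_prob=0.15):
    """BERT-style masking: replace ~15% of tokens with [MASK]; the loss is
    computed only at masked positions (other labels set to -100, which
    cross-entropy ignores). Because the true token is replaced in the input,
    neither LSTM direction can leak the answer."""
    labels = token_ids.clone()
    mask = torch.rand_like(token_ids, dtype=torch.float) < mask_prob
    labels[~mask] = -100
    inputs = token_ids.clone()
    inputs[mask] = mask_id
    return inputs, labels

# One training step on a toy batch (vocab size and mask id are arbitrary).
vocab_size, mask_id = 1000, 999
model = MaskedBiLSTM(vocab_size)
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
batch = torch.randint(0, vocab_size - 1, (8, 32))  # (batch, seq_len)
inputs, labels = mask_tokens(batch, mask_id)
logits = model(inputs)
loss = loss_fn(logits.view(-1, vocab_size), labels.view(-1))
loss.backward()
```

Note the contrast with the original ELMo objective: ELMo trains two separate unidirectional language models and concatenates their states, so no single prediction ever sees both sides at once, whereas the masked objective above conditions every prediction on the full surrounding context.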

Authors (2)
  1. Gregory Senay (3 papers)
  2. Emmanuelle Salin (2 papers)
Citations (2)