lamBERT: Language and Action Learning Using Multimodal BERT (2004.07093v1)

Published 15 Apr 2020 in cs.LG, cs.CL, and stat.ML

Abstract: Recently, the bidirectional encoder representations from transformers (BERT) model has attracted much attention in the field of natural language processing, owing to its high performance in language understanding-related tasks. The BERT model learns language representation that can be adapted to various tasks via pre-training using a large corpus in an unsupervised manner. This study proposes the language and action learning using multimodal BERT (lamBERT) model that enables the learning of language and actions by 1) extending the BERT model to multimodal representation and 2) integrating it with reinforcement learning. To verify the proposed model, an experiment is conducted in a grid environment that requires language understanding for the agent to act properly. As a result, the lamBERT model obtained higher rewards in multitask settings and transfer settings when compared to other models, such as the convolutional neural network-based model and the lamBERT model without pre-training.
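
Below is a minimal sketch (not the authors' released code) of the architecture the abstract describes: a BERT-style transformer encoder over concatenated language-token and grid-observation embeddings, pooled into a single representation that feeds policy and value heads for reinforcement learning. All names, dimensions, and the fusion scheme are illustrative assumptions.

```python
# Hedged sketch of a multimodal BERT encoder with an RL policy head.
# Dimensions, token layout, and heads are assumptions for illustration only.
import torch
import torch.nn as nn

class LamBERTSketch(nn.Module):
    def __init__(self, vocab_size=100, obs_dim=16, d_model=64,
                 n_layers=2, n_heads=4, n_actions=4, max_len=32):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # pooled multimodal slot
        self.tok_embed = nn.Embedding(vocab_size, d_model)    # language tokens
        self.obs_embed = nn.Linear(obs_dim, d_model)          # grid observation cells
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.policy_head = nn.Linear(d_model, n_actions)      # action logits
        self.value_head = nn.Linear(d_model, 1)                # state value estimate

    def forward(self, tokens, obs):
        # tokens: (B, T_lang) int64 word ids; obs: (B, T_obs, obs_dim) float features
        lang = self.tok_embed(tokens)
        vis = self.obs_embed(obs)
        x = torch.cat([self.cls.expand(tokens.size(0), -1, -1), lang, vis], dim=1)
        pos = torch.arange(x.size(1), device=x.device)
        h = self.encoder(x + self.pos_embed(pos))
        pooled = h[:, 0]                                       # CLS-position representation
        return self.policy_head(pooled), self.value_head(pooled)

# Example: a 6-token instruction plus a 3x3 grid flattened to 9 cell features.
model = LamBERTSketch()
logits, value = model(torch.randint(0, 100, (1, 6)), torch.randn(1, 9, 16))
```

In the paper's setup, the same encoder would first be pre-trained on masked prediction over multimodal sequences and then fine-tuned with a reinforcement-learning objective; the RL algorithm and masking details are not specified here and would need to follow the original paper.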

Authors (4)
  1. Kazuki Miyazawa (5 papers)
  2. Tatsuya Aoki (5 papers)
  3. Takato Horii (15 papers)
  4. Takayuki Nagai (23 papers)
Citations (12)