PoliFormer: Scaling On-Policy RL with Transformers Results in Masterful Navigators (2406.20083v1)
Abstract: We present PoliFormer (Policy Transformer), an RGB-only indoor navigation agent trained end-to-end with reinforcement learning at scale that generalizes to the real world without adaptation despite being trained purely in simulation. PoliFormer uses a foundational vision transformer encoder with a causal transformer decoder, enabling long-term memory and reasoning. It is trained for hundreds of millions of interactions across diverse environments, leveraging parallelized, multi-machine rollouts for efficient training with high throughput. PoliFormer is a masterful navigator, producing state-of-the-art results across two distinct embodiments, the LoCoBot and Stretch RE-1 robots, and four navigation benchmarks. It breaks through the plateaus of previous work, achieving an unprecedented 85.5% success rate in object goal navigation on the CHORES-S benchmark, a 28.5% absolute improvement. PoliFormer can also be trivially extended to a variety of downstream applications such as object tracking, multi-object navigation, and open-vocabulary navigation with no fine-tuning.
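To make the architecture described above concrete, here is a minimal, hedged sketch of an encoder-decoder navigation policy: a vision-transformer-style encoder produces one embedding per RGB frame, and a causal transformer over the episode's frame embeddings provides long-horizon memory, with actor and critic heads for on-policy RL. This is not the authors' code; the class name, hidden sizes, action count, and the pooled-patch frame embedding are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PoliFormerStylePolicy(nn.Module):
    """Illustrative sketch (not the paper's implementation): a ViT-style
    visual encoder feeds per-frame embeddings into a causal transformer
    that maintains episode memory; actor/critic heads support on-policy RL."""

    def __init__(self, num_actions: int = 6, d_model: int = 512,
                 n_layers: int = 3, n_heads: int = 8, max_steps: int = 1000):
        super().__init__()
        # Stand-in visual encoder; the paper uses a pretrained vision
        # transformer backbone (all hyperparameters here are assumptions).
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),  # patchify
            nn.Flatten(start_dim=2),                           # (B*T, d, P)
        )
        self.frame_proj = nn.Linear(d_model, d_model)
        self.pos_emb = nn.Embedding(max_steps, d_model)

        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Causal self-attention over the sequence of frame embeddings.
        self.temporal_decoder = nn.TransformerEncoder(layer, n_layers)

        self.actor = nn.Linear(d_model, num_actions)   # action logits
        self.critic = nn.Linear(d_model, 1)            # value estimate

    def forward(self, frames: torch.Tensor):
        # frames: (B, T, 3, H, W) -- one episode's RGB observations.
        B, T = frames.shape[:2]
        feats = self.visual_encoder(frames.flatten(0, 1))          # (B*T, d, P)
        frame_emb = self.frame_proj(feats.mean(dim=-1)).view(B, T, -1)
        frame_emb = frame_emb + self.pos_emb(
            torch.arange(T, device=frames.device))

        # Causal mask so each timestep only attends to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(frames.device)
        memory = self.temporal_decoder(frame_emb, mask=mask)

        return self.actor(memory), self.critic(memory).squeeze(-1)
```

At inference time, the causal structure means the decoder state for past frames can be cached so each new observation requires only one additional decoder step; the design choices above (mean-pooled patch features, learned positional embeddings) are placeholders rather than the paper's exact recipe.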
- Kuo-Hao Zeng
- Zichen Zhang
- Kiana Ehsani
- Rose Hendrix
- Jordi Salvador
- Alvaro Herrasti
- Ross Girshick
- Aniruddha Kembhavi
- Luca Weihs