NPE: An FPGA-based Overlay Processor for Natural Language Processing

Published 13 Apr 2021 in cs.AR (arXiv:2104.06535v1)

Abstract: In recent years, transformer-based models have achieved state-of-the-art results in NLP. In particular, the introduction of the BERT language model brought breakthroughs in tasks such as question answering and natural language inference, advancing applications that allow humans to interact naturally with embedded devices. FPGA-based overlay processors have proven effective for edge image and video processing applications, which rely mostly on low-precision linear matrix operations. In contrast, transformer-based NLP models employ a variety of higher-precision nonlinear operations at significantly higher frequency. We present NPE, an FPGA-based overlay processor that can efficiently execute a variety of NLP models. NPE offers software-like programmability to the end user and, unlike FPGA designs that implement a specialized accelerator for each nonlinear function, can be upgraded for future NLP models without FPGA reconfiguration. We demonstrate that NPE meets real-time conversational AI latency targets for the BERT language model with $4\times$ lower power than CPUs and $6\times$ lower power than GPUs. We also show that NPE uses $3\times$ fewer FPGA resources than comparable BERT-specific accelerators in the literature. NPE provides a cost-effective and power-efficient FPGA-based solution for natural language processing at the edge.
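To make the abstract's contrast concrete, the sketch below (not taken from the paper; a minimal numpy illustration) shows the kinds of higher-precision nonlinear operations a BERT-style encoder layer requires beyond plain matrix multiplies. An overlay like NPE must cover ops of this kind programmably, rather than with a dedicated hardware block per function.

```python
import numpy as np

# Illustrative only: three nonlinear operations a transformer layer uses,
# each needing exp, tanh, sqrt, or division rather than just multiply-add.

def softmax(x, axis=-1):
    # Attention-score normalization (exp and division are nonlinear).
    z = x - np.max(x, axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

def gelu(x):
    # BERT's feed-forward activation, tanh approximation.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x ** 3)))

def layer_norm(x, eps=1e-12):
    # Per-token normalization: mean, variance, sqrt, and division.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

These functions run at every layer and every token, which is why the paper argues they occur at "significantly higher frequency" in NLP workloads than in edge vision pipelines.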
