
Energy Efficient In-memory Hyperdimensional Encoding for Spatio-temporal Signal Processing (2106.11654v1)

Published 22 Jun 2021 in cs.ET

Abstract: The emerging brain-inspired computing paradigm known as hyperdimensional computing (HDC) has been proven to provide a lightweight learning framework for various cognitive tasks compared to the widely used deep learning-based approaches. Spatio-temporal (ST) signal processing, which encompasses biosignals such as electromyography (EMG) and electroencephalography (EEG), is one family of applications that could benefit from an HDC-based learning framework. At the core of HDC lie manipulations and comparisons of large bit patterns, which are inherently ill-suited to conventional computing platforms based on the von Neumann architecture. In this work, we propose an architecture for ST signal processing within the HDC framework using predominantly in-memory compute arrays. In particular, we introduce a methodology for the in-memory hyperdimensional encoding of ST data to be used together with an in-memory associative search module. We show that the in-memory HDC encoder for ST signals offers at least 1.80x energy efficiency gains, 3.36x area gains, and 9.74x throughput gains compared with a dedicated digital hardware implementation. At the same time, it achieves a peak classification accuracy within 0.04% of that of the baseline HDC framework.
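The core HDC operations the abstract refers to (manipulating and comparing large bit patterns) can be illustrated with a minimal software sketch. This is not the paper's in-memory implementation; it is a generic dense-binary HDC pipeline, with the dimensionality, the four-channel item memory, and the class labels chosen purely for illustration:

```python
import random

random.seed(0)
D = 10_000  # hypervector dimensionality (hypothetical choice; dense binary HVs)

def rand_hv():
    """Random dense binary hypervector."""
    return [random.getrandbits(1) for _ in range(D)]

# Item memory: one fixed random hypervector per input channel
# (e.g. one per EMG electrode; four channels is an arbitrary example).
item_memory = {ch: rand_hv() for ch in range(4)}

def bind(a, b):
    """Binding via component-wise XOR."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Bundling via component-wise majority vote (ties break to 0)."""
    half = len(hvs) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*hvs)]

def encode(sample):
    """Spatial encoding of one time step: bind each channel's value
    hypervector to its channel hypervector, then bundle across channels."""
    return bundle([bind(item_memory[ch], v) for ch, v in sample.items()])

def hamming(a, b):
    """Distance used for associative search between binary HVs."""
    return sum(x != y for x, y in zip(a, b))

def classify(query, prototypes):
    """Associative search: return the label of the nearest class prototype."""
    return min(prototypes, key=lambda label: hamming(prototypes[label], query))
```

The paper's contribution is to map the encoder (bind/bundle) and the associative search (nearest-prototype lookup) onto in-memory compute arrays rather than executing them as above on a conventional processor.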

Authors (7)
  1. Geethan Karunaratne (25 papers)
  2. Manuel Le Gallo (33 papers)
  3. Michael Hersche (29 papers)
  4. Giovanni Cherubini (12 papers)
  5. Luca Benini (362 papers)
  6. Abu Sebastian (67 papers)
  7. Abbas Rahimi (44 papers)
Citations (11)
