
In-memory hyperdimensional computing (1906.01548v2)

Published 4 Jun 2019 in cs.ET, cs.AI, and physics.app-ph

Abstract: Hyperdimensional computing (HDC) is an emerging computational framework that takes inspiration from attributes of neuronal circuits such as hyperdimensionality, fully distributed holographic representation, and (pseudo)randomness. When employed for machine learning tasks such as learning and classification, HDC involves manipulation and comparison of large patterns within memory. Moreover, a key attribute of HDC is its robustness to the imperfections associated with the computational substrates on which it is implemented. It is therefore particularly amenable to emerging non-von Neumann paradigms such as in-memory computing, where the physical attributes of nanoscale memristive devices are exploited to perform computation in place. Here, we present a complete in-memory HDC system that achieves a near optimum trade-off between design complexity and classification accuracy based on three prototypical HDC related learning tasks, namely, language classification, news classification, and hand gesture recognition from electromyography signals. Comparable accuracies to software implementations are demonstrated, experimentally, using 760,000 phase-change memory devices performing analog in-memory computing.

Citations (232)

Summary

  • The paper introduces an in-memory HDC system that integrates an encoder and an associative memory search engine using nanoscale PCM devices.
  • It demonstrates >6× energy savings and comparable accuracy on classification tasks such as language processing and hand gesture recognition.
  • The work employs a 2-minterm approximation and coarse-grained randomization to mitigate variability in memristive crossbar arrays.

In-Memory Hyperdimensional Computing: A Comprehensive Approach

The paper presents a detailed exploration of implementing Hyperdimensional Computing (HDC) using in-memory computing paradigms. The work proposes an integrated architecture that exploits the non-von Neumann computing benefits of emerging nanoscale memristive devices, particularly phase-change memory (PCM), to build efficient machine learning hardware. Drawing on properties of biological neural circuits, such as hyperdimensionality and pseudo-randomness, the authors address the challenges of traditional von Neumann architectures and strike a balance between energy efficiency and computational accuracy.

Central Contribution

The core contribution of the paper is the design and experimental validation of a complete in-memory HDC system. The system comprises two key components: an encoder and an associative memory (AM) search engine. A significant finding is that the system maintains accuracy comparable to software implementations on learning tasks such as language and news classification, as well as electromyography-based hand gesture recognition.

Technical Approach

The proposed architecture uses memristive crossbar arrays to realize both the encoder and the AM search functionalities of HDC. The encoder performs hypervector binding through an in-memory read logic method, while the AM search is carried out as in-memory dot-product operations. The authors demonstrate robust analog in-memory processing without major degradation in classification accuracy, a result attributed to HDC's resilience to errors and variability.
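
To make these two primitives concrete, here is a minimal software sketch of their functional behavior using NumPy; the dimensionality, names, and toy data are illustrative assumptions, not the paper's hardware interfaces. Binding of binary hypervectors is elementwise XOR (the operation the in-memory read logic realizes), and the AM search reduces to dot products between a query hypervector and a matrix of class prototypes:

```python
import numpy as np

D = 10_000                      # hypervector dimensionality (illustrative)
rng = np.random.default_rng(0)

def random_hv(d: int = D) -> np.ndarray:
    """Dense binary hypervector with i.i.d. fair-coin components."""
    return rng.integers(0, 2, size=d, dtype=np.uint8)

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Binding of binary hypervectors: elementwise XOR (the operation
    the in-memory read logic realizes on the PCM crossbar)."""
    return np.bitwise_xor(a, b)

def am_search(query: np.ndarray, prototypes: np.ndarray) -> int:
    """Associative-memory search: dot product of the query against every
    class prototype (one crossbar matrix-vector multiply), then argmax."""
    sims = prototypes.astype(np.int32) @ query.astype(np.int32)
    return int(np.argmax(sims))

# Toy usage: three random class prototypes; a 10%-corrupted copy of
# class 1 is still retrieved, illustrating HDC's robustness to bit errors.
prototypes = np.stack([random_hv() for _ in range(3)])
noise = np.zeros(D, dtype=np.uint8)
noise[rng.choice(D, size=D // 10, replace=False)] = 1  # flip 10% of bits
query = bind(prototypes[1], noise)
assert am_search(query, prototypes) == 1
```

The final assertion illustrates the robustness the paper relies on: even after 10% of the query's components are flipped, the dot-product search still retrieves the correct class.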

For the AM search, the paper reduces deterministic spatial variability in the crystalline PCM states through coarse-grained randomization: prototype hypervectors are partitioned into segments, and the segments are distributed across crossbar arrays to average out spatial non-uniformity.
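
A minimal sketch of such a partitioning scheme follows; it assumes a rotated segment-to-array assignment, which is one plausible reading of the coarse-grained randomization rather than the paper's exact mapping:

```python
import numpy as np

def place_segments(prototypes: np.ndarray, n_arrays: int) -> dict:
    """Partition each class prototype into n_arrays equal segments and
    rotate the segment-to-array assignment per class. A systematic
    conductance offset in any one array then touches one segment of
    every class instead of the entire prototype of a few classes."""
    n_classes, d = prototypes.shape
    assert d % n_arrays == 0, "dimension must split into equal segments"
    seg_len = d // n_arrays
    placement = {}  # (class_idx, array_idx) -> segment stored on that array
    for c in range(n_classes):
        for s in range(n_arrays):
            array_idx = (c + s) % n_arrays  # rotated assignment (assumed)
            placement[(c, array_idx)] = prototypes[c, s * seg_len:(s + 1) * seg_len]
    return placement
```

At query time, each array computes a partial dot product over its segment and the per-class partial sums are accumulated, so any single array's conductance bias perturbs every class roughly equally.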

For the encoding functionality, the paper introduces a 2-minterm approximation of n-gram hypervectors, drastically reducing computational overhead while retaining acceptable accuracy. This approximation enables efficient hypervector operations suitable for hardware implementation.
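
The sketch below shows one way the approximation can be read: the exact n-gram binding (XOR of positionally permuted item hypervectors) expands into a sum of 2^(n-1) minterms, of which only two are kept, namely the AND of all permuted operands and the AND of their complements. The cyclic-shift permutation and binary encoding are standard HDC choices assumed here:

```python
import numpy as np

def ngram_2minterm(items: list) -> np.ndarray:
    """2-minterm approximation of an n-gram hypervector from n binary item
    hypervectors. Position i is encoded by cyclic shifts; the exact
    binding (XOR of the shifted items) expands into 2^(n-1) minterms, and
    only two are retained: the AND of all shifted items and the AND of
    all their complements."""
    n = len(items)
    shifted = [np.roll(v, n - 1 - i) for i, v in enumerate(items)]  # rho^(n-1-i)
    all_true = np.bitwise_and.reduce(shifted)                     # x1 & ... & xn
    all_false = np.bitwise_and.reduce([1 - v for v in shifted])   # ~x1 & ... & ~xn
    return np.bitwise_or(all_true, all_false)
```

A text hypervector is then typically obtained by bundling (componentwise majority over) the approximated n-gram hypervectors of the input sequence, and that bundled vector is what the AM compares against the class prototypes.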

Results and Implications

The experimental results showcase the potential of this architecture in achieving >6× energy savings compared to all-CMOS implementations. By mapping core HDC operations in- or near-memory, the proposed system achieves significant energy-efficiency improvements and area reductions. The authors effectively leverage PCM technology's inherent suitability for non-volatile storage and binary operations, underlining the potential scalability of the architecture to different learning tasks and configurations.

Future Directions

The presented research paves the way for broader applications of in-memory HDC, potentially extending beyond learning and classification to cognitive tasks, data compression, and retrieval. Future work will likely focus on enhancing peripheral circuit energy efficiency and exploring nanoscale PCM technologies' full capabilities. Furthermore, expanding this approach to other memristive materials, such as those based on ionic drift and magnetoresistance, could further enhance the versatility of in-memory HDC systems.

In conclusion, this paper offers a comprehensive study of integrating HDC with in-memory computing, demonstrating feasible pathways toward efficient, scalable machine-intelligence hardware. The work not only consolidates the role of PCM-based in-memory processing in specialized AI tasks but also makes a compelling case for further innovation in this domain.