Torchhd: An Open Source Python Library to Support Research on Hyperdimensional Computing and Vector Symbolic Architectures (2205.09208v3)

Published 18 May 2022 in cs.LG

Abstract: Hyperdimensional computing (HD), also known as vector symbolic architectures (VSA), is a framework for computing with distributed representations by exploiting properties of random high-dimensional vector spaces. The commitment of the scientific community to aggregate and disseminate research in this particularly multidisciplinary area has been fundamental for its advancement. Joining these efforts, we present Torchhd, a high-performance open source Python library for HD/VSA. Torchhd seeks to make HD/VSA more accessible and serves as an efficient foundation for further research and application development. The easy-to-use library builds on top of PyTorch and features state-of-the-art HD/VSA functionality, clear documentation, and implementation examples from well-known publications. Comparing publicly available code with their corresponding Torchhd implementation shows that experiments can run up to 100x faster. Torchhd is available at: https://github.com/hyperdimensional-computing/torchhd.

Citations (19)

Summary

  • The paper presents Torchhd, a high-performance Python library that streamlines hyperdimensional computing and VSA research through a modular design.
  • The paper details the library's core functionality, including hypervector generation and GPU-accelerated execution that achieves up to 104x speedups.
  • The paper highlights Torchhd's potential to advance neuro-symbolic hybrid approaches and foster integration of HD/VSA methods with mainstream AI.

Torchhd: An Open Source Python Library for Hyperdimensional Computing

This paper presents Torchhd, a Python library designed to bolster research and application development in Hyperdimensional Computing (HD) and Vector Symbolic Architectures (VSA). Built atop PyTorch, Torchhd delivers high-performance execution and modular, state-of-the-art HD/VSA operations, making it a practical tool for both newcomers and experienced researchers in cognitive computing, machine learning, and neuroscience.

Overview of Hyperdimensional Computing and Vector Symbolic Architectures

HD/VSA is a computational framework that employs high-dimensional random vectors, or hypervectors, to construct distributed representations capable of encoding compositional structures and supporting analogy-based reasoning. Although the foundational principles of HD/VSA were established decades ago, recent advances have renewed interest in the framework because of its potential to complement traditional artificial neural networks and enable neuro-symbolic hybrid approaches.
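
The core operations are binding, which associates two hypervectors into a third dissimilar to both, and bundling, which superimposes several hypervectors into a composite similar to each of them. The sketch below illustrates role-filler encoding using Torchhd's top-level functions; the names (random, bind, multiset, cosine_similarity) are assumed to match recent releases of the library.

```python
# Sketch of HD/VSA role-filler encoding with Torchhd; function names are
# assumed to match recent releases of the library.
import torchhd

d = 10000                                   # hypervector dimensionality

keys = torchhd.random(3, d)                 # role hypervectors, e.g. color/shape/size
values = torchhd.random(3, d)               # filler hypervectors, e.g. red/round/small

pairs = torchhd.bind(keys, values)          # bind each role to its filler
record = torchhd.multiset(pairs)            # bundle the pairs into one composite

# Unbinding a role recovers a noisy copy of its filler (MAP binding is its own
# inverse); the filler is then identified by similarity to the codebook.
query = torchhd.bind(record, keys[0])
sims = torchhd.cosine_similarity(query, values)
print(sims.argmax())                        # expected: index 0 (the first filler)
```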

Design and Functionality of Torchhd

Torchhd's design emphasizes accessibility and performance. It offers a versatile framework that accommodates a comprehensive array of HD/VSA primitives and applications. By remaining easy to use, Torchhd invites new adopters while supporting advanced research into HD/VSA components and methodologies. The library's key functionality is organized into six modules (a usage sketch follows the list):

  • Functional: Provides hypervector generation and core operations such as binding, bundling, and permutation, as well as resonator networks for hypervector factorization.
  • Embeddings: Facilitates transformations from scalars or feature vectors to hypervectors, supporting similarity-preserving transformations compatible with kernel methods.
  • Models: Implements classification models such as centroid models and learning vector quantization, alongside training strategies.
  • Memory: Implements long-term hypervector storage using methods inspired by biological neural networks and attention mechanisms.
  • Structures: Supplies data structures like hash tables and graphs, enabling the development of algorithmic processes using HD/VSA principles.
  • Datasets: Offers access to numerous datasets, ensuring compatibility with PyTorch for streamlined benchmarking and evaluation of HD/VSA methods.
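
As an illustration of how these modules compose, the hedged sketch below encodes feature vectors with a random projection from the Embeddings module and trains a centroid classifier from the Models module. The names embeddings.Projection, models.Centroid, and Centroid.add are assumed to match recent Torchhd releases, and the toy tensors stand in for a dataset from the Datasets module.

```python
# Hedged sketch combining the Embeddings and Models modules; class and method
# names are assumed to match recent Torchhd releases, and the toy tensors
# replace a dataset from torchhd.datasets.
import torch
from torchhd import embeddings, models

d = 10000             # hypervector dimensionality
n_features = 64       # input feature dimensionality (placeholder)
n_classes = 10

encode = embeddings.Projection(n_features, d)    # similarity-preserving encoding
classifier = models.Centroid(d, n_classes)       # one prototype hypervector per class

x = torch.randn(128, n_features)                 # toy training data
y = torch.randint(0, n_classes, (128,))

with torch.no_grad():
    classifier.add(encode(x), y)                 # bundle samples into class centroids

scores = classifier(encode(x[:5]))               # similarity to each class centroid
pred = scores.argmax(dim=-1)                     # predicted class indices
```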

The performance analysis presented in the paper, based on classification tasks run on standard hardware, demonstrates speedups of up to 104 times with GPU acceleration relative to the originally published implementations, affirming Torchhd's efficiency in processing large-scale data.
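
Since Torchhd operations are ordinary PyTorch tensor operations, the same code runs on a GPU by moving modules and tensors to a CUDA device; a brief sketch under that assumption:

```python
# Brief sketch of GPU execution; Torchhd modules and hypervectors move to a
# CUDA device like any other PyTorch objects.
import torch
import torchhd
from torchhd import embeddings

device = "cuda" if torch.cuda.is_available() else "cpu"

encode = embeddings.Projection(64, 10000).to(device)
x = torch.randn(4096, 64, device=device)

h = encode(x)                                    # whole batch encoded on the device
sims = torchhd.cosine_similarity(h[0], h)        # batched similarity search
```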

Comparative Analysis and Implications

Torchhd stands out among software solutions such as OpenHD, HDTorch, and the VSA Toolbox by offering a broader feature set and greater generality for HD/VSA applications. Its unique support for automatic differentiation, inherited from PyTorch, significantly widens its applicability to hybrid neuro-symbolic modeling and can open novel research directions within the AI landscape.
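
A minimal, hedged demonstration of that differentiability: gradients flow through Torchhd's binding and similarity operations under standard PyTorch autograd (function names assumed to match recent releases).

```python
# Minimal sketch of Torchhd operations participating in PyTorch autograd;
# function names are assumed to match recent releases.
import torchhd

d = 10000
a = torchhd.random(1, d)                          # fixed hypervector
b = torchhd.random(1, d).requires_grad_()         # trainable hypervector

bound = torchhd.bind(a, b)                        # differentiable w.r.t. b
sim = torchhd.cosine_similarity(bound, a)         # similarity of the composite to a
sim.sum().backward()                              # gradients flow back into b
print(b.grad.shape)                               # torch.Size([1, 10000])
```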

Future Directions

The authors indicate a commitment to continually enhancing Torchhd by integrating additional HD/VSA models and expanding support for learning paradigms such as differentiable learning and clustering. Such additions would further bridge HD/VSA methodology with mainstream machine learning practice and could foster interdisciplinary advances in cognitive computing and AI.

Conclusion

Torchhd distinctly contributes to the HD/VSA research community by providing a comprehensive, high-performance toolkit that addresses both the foundational and applied aspects of hyperdimensional computing. Its seamless integration with the PyTorch ecosystem stands to facilitate further cross-pollination of ideas and advances across the broader AI research community.