Linear Codes for Hyperdimensional Computing (2403.03278v1)

Published 5 Mar 2024 in cs.IT, cs.NE, and math.IT

Abstract: Hyperdimensional Computing (HDC) is an emerging computational paradigm for representing compositional information as high-dimensional vectors, with promising potential in applications ranging from machine learning to neuromorphic computing. One of the long-standing challenges in HDC is factoring a compositional representation into its constituent factors, also known as the recovery problem. In this paper we take a novel approach to the recovery problem and propose the use of random linear codes. These codes are subspaces over the Boolean field and are a well-studied topic in information theory, with various applications in digital communication. We begin by showing that hyperdimensional encoding using random linear codes retains the favorable properties of the prevalent (ordinary) random codes, and hence HD representations using the two methods have comparable information storage capabilities. We proceed to show that random linear codes offer a rich subcode structure that can be used to form key-value stores, which encapsulate most use cases of HDC. Most importantly, we show that under the framework we develop, random linear codes admit simple recovery algorithms for factoring (either bundled or bound) compositional representations. The former relies on constructing certain systems of linear equations over the Boolean field; their solution dramatically reduces the search space, and the resulting method strictly outperforms exhaustive search in many cases. The latter employs the subspace structure of these codes to achieve provably correct factorization. Both methods are strictly faster than state-of-the-art resonator networks, often by an order of magnitude. We implemented our techniques in Python using a benchmark software library, and demonstrated promising experimental results.
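
The key mechanism behind the bound case is easiest to see in code. Below is a minimal sketch, not the paper's implementation: it uses plain NumPy with hypothetical toy parameters (code length n = 256 and two factors with 8-bit messages each), and the helper names are illustrative. A random binary generator matrix G defines the code as its row space; binding is XOR, which is addition over GF(2), so a bound pair of codewords is itself a codeword of the full code; factoring it then amounts to solving a single linear system over the Boolean field by Gaussian elimination, with no iterative search.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256          # code length = HD dimension (toy size, assumed here)
k1, k2 = 8, 8    # message lengths of the two factors (assumed)
k = k1 + k2

# A random k x n binary matrix; its row space is a random linear code
# over GF(2). Rows 0..k1-1 generate one subcode ("keys"), the
# remaining rows a second subcode ("values").
G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)

def encode(msg, gen):
    # Codeword of a message: c = msg @ gen over GF(2).
    return (msg @ gen) % 2

def solve_gf2(A, b):
    # Solve m @ A == b (mod 2) by Gaussian elimination, assuming A has
    # full row rank (true with overwhelming probability for random A).
    kk = A.shape[0]
    # Transpose to the column system A.T @ m = b and augment with b.
    aug = np.concatenate([A.T, b.reshape(-1, 1)], axis=1).astype(np.uint8)
    row = 0
    for col in range(kk):
        pivots = np.nonzero(aug[row:, col])[0]
        if pivots.size == 0:
            raise ValueError("rank-deficient generator matrix")
        # Swap a pivot row into position, then XOR it into every other
        # row that has a 1 in this column (full reduction).
        aug[[row, row + pivots[0]]] = aug[[row + pivots[0], row]]
        mask = aug[:, col].astype(bool)
        mask[row] = False
        aug[mask] ^= aug[row]
        row += 1
    # After reduction, the solution sits in the last column.
    return aug[:kk, -1]

# Bind a key codeword to a value codeword with XOR (addition in GF(2)).
key_msg = rng.integers(0, 2, size=k1, dtype=np.uint8)
val_msg = rng.integers(0, 2, size=k2, dtype=np.uint8)
bound = encode(key_msg, G[:k1]) ^ encode(val_msg, G[k1:])

# Factoring the bound pair = solving one linear system over GF(2).
m = solve_gf2(G, bound)
assert np.array_equal(m[:k1], key_msg)
assert np.array_equal(m[k1:], val_msg)
```

Recovery here is exact because a random 16 x 256 binary matrix has full row rank almost surely, so the system has a unique solution. The paper's actual algorithms, including the linear-equation approach that prunes the search space in the bundled case, are more elaborate than this sketch.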
