
Towards Efficient Hyperdimensional Computing Using Photonics (2311.17801v2)

Published 29 Nov 2023 in cs.ET, cs.AR, and cs.LG

Abstract: Over the past few years, silicon photonics-based computing has emerged as a promising alternative to CMOS-based computing for Deep Neural Networks (DNNs). Unfortunately, the non-linear operations and the high-precision requirements of DNNs make it extremely challenging to design efficient silicon photonics-based systems for DNN inference and training. Hyperdimensional Computing (HDC) is an emerging, brain-inspired machine learning technique that enjoys several advantages over existing DNNs: it is lightweight, requires low-precision operands, and is robust to noise introduced by nonidealities in the hardware. Computing-in-memory (CiM) approaches have been widely used for HDC, as CiM reduces the data transfer cost when the operands fit in memory. However, inefficient multi-bit operations, high write latency, and low endurance make CiM ill-suited for HDC. Existing electro-photonic DNN accelerators, on the other hand, are inefficient for HDC because they are specifically optimized for matrix multiplication in DNNs and consume considerable power in their high-precision data converters. In this paper, we argue that photonic computing and HDC complement each other better than photonic computing and DNNs, or CiM and HDC. We propose PhotoHDC, the first electro-photonic accelerator for HDC training and inference, supporting the basic, record-based, and graph encoding schemes. Evaluating on popular datasets, we show that our accelerator achieves two to five orders of magnitude lower energy-delay product (EDP) than state-of-the-art electro-photonic DNN accelerators when implementing HDC training and inference. PhotoHDC also achieves four orders of magnitude lower EDP than CiM-based accelerators for both HDC training and inference.
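The abstract highlights why HDC suits low-precision hardware: encoding and training reduce to element-wise binding (multiplication) and bundling (addition) of high-dimensional vectors, followed by a similarity search at inference. The sketch below illustrates the record-based encoding scheme mentioned in the abstract in its generic textbook form; the dimensionality, quantization levels, and function names are illustrative assumptions, not details of PhotoHDC itself.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality (illustrative choice)

def random_hv():
    # Random bipolar hypervector in {-1, +1}^D
    return rng.choice([-1, 1], size=D)

# Record-based encoding: each feature position gets a random ID
# hypervector, each quantized value gets a level hypervector.
n_features, n_levels = 4, 8
id_hvs = [random_hv() for _ in range(n_features)]
level_hvs = [random_hv() for _ in range(n_levels)]

def encode(sample):
    # sample: feature values in [0, 1); bind (element-wise multiply)
    # each feature's ID with its level, then bundle (sum) the results.
    acc = np.zeros(D)
    for f, v in enumerate(sample):
        lvl = min(int(v * n_levels), n_levels - 1)
        acc += id_hvs[f] * level_hvs[lvl]
    return acc

def train(samples, labels, n_classes):
    # Single-pass training: bundle encoded samples into class hypervectors
    classes = np.zeros((n_classes, D))
    for x, y in zip(samples, labels):
        classes[y] += encode(x)
    return classes

def predict(classes, sample):
    # Inference: cosine similarity against each class hypervector
    q = encode(sample)
    sims = classes @ q / (np.linalg.norm(classes, axis=1) * np.linalg.norm(q))
    return int(np.argmax(sims))
```

Note that every step is a low-precision multiply-accumulate over long vectors, which is the workload shape the paper argues maps well onto photonic hardware, in contrast to the high-precision matrix multiplications of DNNs.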

Authors (11)
  1. Farbin Fayza
  2. Cansu Demirkiran
  3. Hanning Chen
  4. Che-Kai Liu
  5. Avi Mohan
  6. Hamza Errahmouni
  7. Sanggeon Yun
  8. Mohsen Imani
  9. David Zhang
  10. Darius Bunandar
  11. Ajay Joshi
