
Efficient and accurate neural field reconstruction using resistive memory (2404.09613v1)

Published 15 Apr 2024 in cs.ET, cs.AI, and cs.AR

Abstract: Human beings construct their perception of space by integrating sparse observations into massively interconnected synapses and neurons, offering superior parallelism and efficiency. Replicating this capability in AI has wide applications in medical imaging, AR/VR, and embodied AI, where input data are often sparse and computing resources are limited. However, traditional signal reconstruction methods on digital computers face both software and hardware challenges. On the software front, difficulties arise from storage inefficiencies in conventional explicit signal representations. Hardware obstacles include the von Neumann bottleneck, which limits data transfer between the CPU and memory, and the limitations of CMOS circuits in supporting parallel processing. We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs. On the software side, we employ neural fields to represent signals implicitly via neural networks, which are further compressed using low-rank decomposition and structured pruning. On the hardware side, we design a resistive memory-based computing-in-memory (CIM) platform, featuring a Gaussian Encoder (GE) and an MLP Processing Engine (PE). The GE harnesses the intrinsic stochasticity of resistive memory for efficient input encoding, while the PE achieves precise weight mapping through a Hardware-Aware Quantization (HAQ) circuit. We demonstrate the system's efficacy on a 40nm 256Kb resistive memory-based in-memory computing macro, achieving substantial improvements in energy efficiency and parallelism without compromising reconstruction quality in tasks such as sparse 3D CT reconstruction, novel view synthesis, and novel view synthesis for dynamic scenes. This work advances AI-driven signal restoration technology and paves the way for efficient and robust medical AI and 3D vision applications.
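
On the software side, the Gaussian Encoder corresponds to a random Fourier feature encoding of input coordinates, with the random projection realized physically by resistive-memory stochasticity. Below is a minimal NumPy sketch of such an encoding feeding a small MLP neural field; the layer sizes, frequency scale, and random weights are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_encode(x, B):
    """Random Fourier feature encoding: gamma(x) = [cos(2*pi*B@x), sin(2*pi*B@x)].

    In the paper's Gaussian Encoder the entries of B come from intrinsic
    resistive-memory stochasticity; here they are simply sampled in software.
    """
    proj = 2.0 * np.pi * x @ B.T                                  # (N, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)  # (N, 2m)

# Illustrative sizes: 3-D input coordinates, 64 random frequencies (assumptions).
B = rng.normal(0.0, 10.0, size=(64, 3))

def mlp_field(x, weights):
    """Tiny ReLU MLP mapping encoded coordinates to a scalar field value."""
    h = gaussian_encode(x, B)
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = weights[-1]
    return h @ W + b

# Random weights just to show shapes; training would fit them to sparse samples.
dims = [128, 64, 64, 1]
weights = [(rng.normal(0.0, 0.1, (dims[i], dims[i + 1])), np.zeros(dims[i + 1]))
           for i in range(len(dims) - 1)]

coords = rng.uniform(-1.0, 1.0, size=(5, 3))   # five query points in [-1, 1]^3
print(mlp_field(coords, weights).shape)        # -> (5, 1)
```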
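The abstract's software-side compression combines low-rank decomposition with structured pruning. The sketch below assumes truncated SVD for the low-rank step and removal of the smallest-L2-norm hidden units for the structured step; the paper's exact decomposition and pruning criteria are not specified in the abstract.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Truncated-SVD factorization: approximate W (d_in x d_out) as A @ B."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (d_in, rank)
    B = Vt[:rank, :]             # (rank, d_out)
    return A, B                  # rank*(d_in + d_out) params instead of d_in*d_out

def prune_neurons(W_in, b, W_out, keep):
    """Structured pruning: drop whole hidden units with the smallest L2 norm."""
    scores = np.linalg.norm(W_in, axis=0)      # one importance score per unit
    idx = np.sort(np.argsort(scores)[-keep:])  # indices of the units we keep
    return W_in[:, idx], b[idx], W_out[idx, :]

rng = np.random.default_rng(1)
W = rng.normal(size=(128, 64))
A, B = low_rank_factorize(W, rank=16)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative rank-16 error

W_in, b, W_out = rng.normal(size=(128, 64)), np.zeros(64), rng.normal(size=(64, 1))
print([a.shape for a in prune_neurons(W_in, b, W_out, keep=32)])
```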
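The Hardware-Aware Quantization step maps trained weights onto discrete resistive-memory conductance levels. The abstract gives no circuit details, so the following sketch shows only the common differential-pair scheme, in which each signed weight is realized as the difference of two non-negative conductances, each rounded to one of 2^bits uniform levels; the HAQ circuit itself may differ.

```python
import numpy as np

def quantize_differential(W, bits=4, g_max=1.0):
    """Map signed weights onto a differential conductance pair (G_pos, G_neg).

    Each conductance is uniformly quantized to 2**bits levels in [0, g_max];
    the array then realizes the effective weight scale * (G_pos - G_neg).
    """
    step = g_max / (2 ** bits - 1)
    scale = np.abs(W).max() / g_max         # map the weight range onto [0, g_max]
    g_pos = np.round(np.clip(W / scale, 0.0, g_max) / step) * step
    g_neg = np.round(np.clip(-W / scale, 0.0, g_max) / step) * step
    return g_pos, g_neg, scale

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 4))
g_pos, g_neg, scale = quantize_differential(W, bits=4)
W_hat = scale * (g_pos - g_neg)             # weights as realized on the array
print(np.abs(W - W_hat).max())              # worst-case quantization error
```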

Authors (19)
  1. Yifei Yu
  2. Shaocong Wang
  3. Woyu Zhang
  4. Xinyuan Zhang
  5. Xiuzhe Wu
  6. Yangu He
  7. Jichang Yang
  8. Yue Zhang
  9. Ning Lin
  10. Bo Wang
  11. Xi Chen
  12. Songqi Wang
  13. Xumeng Zhang
  14. Xiaojuan Qi
  15. Zhongrui Wang
  16. Dashan Shang
  17. Qi Liu
  18. Kwang-Ting Cheng
  19. Ming Liu
