Revisiting Main Memory-Based Covert and Side Channel Attacks in the Context of Processing-in-Memory (2404.11284v4)

Published 17 Apr 2024 in cs.CR and cs.AR

Abstract: We introduce IMPACT, a set of high-throughput main memory-based timing attacks that leverage characteristics of processing-in-memory (PiM) architectures to establish covert and side channels. IMPACT enables high-throughput communication and private information leakage by exploiting the shared DRAM row buffer. To achieve high throughput, IMPACT (i) eliminates expensive cache bypassing steps required by processor-centric memory-based timing attacks and (ii) leverages the intrinsic parallelism of PiM operations. We showcase two applications of IMPACT. First, we build two covert channels that leverage different PiM approaches (i.e., processing-near-memory and processing-using-memory) to establish high-throughput covert communication channels. Our covert channels achieve 8.2 Mb/s and 14.8 Mb/s communication throughput, respectively, which is 3.6x and 6.5x higher than the state-of-the-art main memory-based covert channel. Second, we showcase a side-channel attack that leaks private information of concurrently running victim applications with a low error rate. Our source code is openly and freely available at https://github.com/CMU-SAFARI/IMPACT.
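
For context on the timing primitive the abstract refers to, the minimal sketch below (not taken from the paper or its repository) times two uncached DRAM accesses so that a row-buffer hit can be distinguished from a row-buffer conflict by latency. It illustrates the conventional processor-centric approach whose explicit cache-flushing steps IMPACT eliminates by issuing PiM operations that reach DRAM directly. The buffer addresses here are placeholders: a real channel would choose addresses that the platform's DRAM address mapping places in the same bank but different rows.

```c
/* Minimal sketch (assumption: x86 with rdtscp/clflush; not the paper's code).
 * Shows the row-buffer timing primitive: after flushing the cache, a load
 * that hits the open DRAM row completes faster than one that causes a row
 * conflict. IMPACT avoids the clflush step by using PiM operations instead. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

/* Time a single load with serializing fences around the timestamp reads. */
static uint64_t timed_access(volatile uint8_t *p)
{
    unsigned int aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*p;                      /* load goes to DRAM because of the flush */
    _mm_mfence();
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    /* Placeholder buffers. A real attack would pick addresses that map to
     * the same DRAM bank but different rows; malloc gives no such control. */
    uint8_t *a = malloc(4096);
    uint8_t *b = malloc(4096);
    if (!a || !b) return 1;
    a[0] = 1;
    b[0] = 1;

    /* Flush both lines so the timed loads must go to main memory
     * (the cache-bypassing step that IMPACT does not need). */
    _mm_clflush(a);
    _mm_clflush(b);
    _mm_mfence();

    uint64_t t_a = timed_access(a);
    uint64_t t_b = timed_access(b);

    /* If a and b shared a bank, the slower access would indicate a row
     * conflict and the faster one a row-buffer hit. */
    printf("access to a: %llu cycles, access to b: %llu cycles\n",
           (unsigned long long)t_a, (unsigned long long)t_b);

    free(a);
    free(b);
    return 0;
}
```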
