Guardian: Safe GPU Sharing in Multi-Tenant Environments (2401.09290v2)

Published 17 Jan 2024 in cs.DC

Abstract: Modern GPU applications, such as ML workloads, often utilize GPUs only partially, leading to GPU underutilization in cloud environments. Sharing GPUs across applications from different tenants can improve resource utilization and, consequently, cost, energy, and power efficiency. However, GPU sharing creates memory-safety concerns because kernels must share a single GPU address space. Existing spatial-sharing mechanisms either lack fault isolation for memory accesses or require static partitioning, which leads to limited deployability or low utilization. In this paper, we present Guardian, a PTX-level bounds-checking approach that provides memory isolation and supports dynamic GPU spatial sharing. Guardian relies on three mechanisms: (1) it divides the common GPU address space into separate partitions for different applications; (2) it intercepts and checks all GPU-related calls at the lowest level, fencing erroneous operations; and (3) it instruments all GPU kernels at the PTX level -- which is available even for closed-source GPU libraries -- fencing all kernel memory accesses that fall outside the application's memory bounds. Guardian's approach is transparent to applications and supports real-life frameworks, such as Caffe and PyTorch, that issue billions of GPU kernels. Our evaluation shows that Guardian's overhead compared to native execution for such frameworks is between 4% and 12%, and 9% on average.
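Mechanism (3), PTX-level kernel instrumentation, can be pictured as a source-to-source pass over a kernel's PTX text that guards every global memory access. The sketch below is purely illustrative and is not Guardian's actual implementation: the register names (`%rd_base`, `%rd_size`, `%rd_hi`), the regex, and the clamp-based fencing strategy are all assumptions. A real pass would parse PTX properly, cover every addressing mode and operand size, and manage register allocation rather than reusing fixed register names.

```python
import re

# Illustrative PTX rewriting pass: before each ld.global/st.global that
# dereferences a 64-bit address register, clamp the address into the
# tenant's partition [base, base + size). Register names are hypothetical.
GLOBAL_ACCESS = re.compile(r'\b(?:ld|st)\.global[.\w]*\s+[^\[]*\[(%rd\d+)\]')

def instrument_ptx(ptx: str, base_reg: str = "%rd_base",
                   size_reg: str = "%rd_size") -> str:
    out = []
    for line in ptx.splitlines():
        m = GLOBAL_ACCESS.search(line)
        if m:
            addr = m.group(1)
            # Fence the access: force the address into the partition
            # (exact end-boundary handling elided for brevity).
            out.append(f"\tadd.u64 %rd_hi, {base_reg}, {size_reg};")
            out.append(f"\tmax.u64 {addr}, {addr}, {base_reg};")
            out.append(f"\tmin.u64 {addr}, {addr}, %rd_hi;")
        out.append(line)
    return "\n".join(out)

if __name__ == "__main__":
    kernel = "\tld.global.f32 %f1, [%rd4];\n\tst.global.f32 [%rd5], %f1;"
    print(instrument_ptx(kernel))
```

Because the pass rewrites PTX text rather than CUDA source, it can fence kernels shipped inside closed-source libraries, which is the property the abstract highlights.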

Authors (6)
  1. Manos Pavlidakis
  2. Giorgos Vasiliadis
  3. Stelios Mavridis
  4. Anargyros Argyros
  5. Antony Chazapis
  6. Angelos Bilas
