The OoO VLIW JIT Compiler for GPU Inference (1901.10008v2)

Published 28 Jan 2019 in cs.DC and cs.LG

Abstract: Current trends in Machine Learning (ML) inference on hardware accelerated devices (e.g., GPUs, TPUs) point to alarmingly low utilization. As ML inference is increasingly time-bounded by tight latency SLOs, increasing data parallelism is not an option. The need for better efficiency motivates GPU multiplexing. Furthermore, existing GPU programming abstractions force programmers to micro-manage GPU resources in an early-binding, context-free fashion. We propose a VLIW-inspired Out-of-Order (OoO) Just-in-Time (JIT) compiler that coalesces and reorders execution kernels at runtime for throughput-optimal device utilization while satisfying latency SLOs. We quantify the inefficiencies of space-only and time-only multiplexing alternatives and demonstrate an achievable 7.7x opportunity gap through spatial coalescing.
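To make the abstract's core idea concrete, the sketch below illustrates what an out-of-order scheduler with spatial coalescing might look like: pending inference kernels from multiple models are reordered by SLO deadline and packed into one VLIW-style "bundle" until the GPU's capacity is filled. This is a minimal, self-contained illustration, not the paper's implementation; all names (`Kernel`, `OoOScheduler`, the `sm_fraction` resource model) are hypothetical.

```python
import heapq
import time
from dataclasses import dataclass, field

# Hypothetical sketch of the paper's core idea: an out-of-order scheduler
# that reorders kernels by SLO deadline and spatially coalesces independent
# kernels into one wide "instruction" (VLIW-style). Names and the resource
# model are invented for illustration, not taken from the paper's artifact.

@dataclass(order=True)
class Kernel:
    deadline: float                              # absolute SLO deadline (seconds)
    sm_fraction: float = field(compare=False)    # fraction of GPU the kernel needs
    name: str = field(compare=False)

class OoOScheduler:
    """Earliest-deadline-first queue with spatial coalescing."""

    def __init__(self):
        self.queue: list[Kernel] = []

    def submit(self, kernel: Kernel) -> None:
        heapq.heappush(self.queue, kernel)       # ordered by deadline

    def next_bundle(self) -> list[Kernel]:
        """Pop kernels in deadline order, packing them into one bundle
        until the GPU is 'full' (total SM fraction reaches 1.0)."""
        bundle, used = [], 0.0
        while self.queue and used + self.queue[0].sm_fraction <= 1.0:
            k = heapq.heappop(self.queue)
            bundle.append(k)
            used += k.sm_fraction
        return bundle

if __name__ == "__main__":
    sched = OoOScheduler()
    now = time.monotonic()
    sched.submit(Kernel(now + 0.010, 0.5, "resnet_conv1"))
    sched.submit(Kernel(now + 0.002, 0.3, "bert_attn"))   # tightest SLO
    sched.submit(Kernel(now + 0.050, 0.3, "gpt_mlp"))
    # The tight-SLO kernel is reordered to the front, and the two kernels
    # that fit together spatially (0.3 + 0.5 <= 1.0) issue as one bundle.
    print([k.name for k in sched.next_bundle()])  # ['bert_attn', 'resnet_conv1']
```

In this toy model, time-only multiplexing would run the three kernels serially and space-only multiplexing would statically partition the GPU; the deadline-ordered coalescing above hints at where the abstract's 7.7x opportunity gap comes from.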

Authors (6)
  1. Paras Jain (14 papers)
  2. Xiangxi Mo (12 papers)
  3. Ajay Jain (16 papers)
  4. Alexey Tumanov (30 papers)
  5. Joseph E. Gonzalez (167 papers)
  6. Ion Stoica (177 papers)
Citations (17)
