GEMEL: Model Merging for Memory-Efficient, Real-Time Video Analytics at the Edge (2201.07705v2)

Published 19 Jan 2022 in cs.DC and cs.AI

Abstract: Video analytics pipelines have steadily shifted to edge deployments to reduce bandwidth overheads and privacy violations, but in doing so, face an ever-growing resource tension. Most notably, edge-box GPUs lack the memory needed to concurrently house the growing number of (increasingly complex) models for real-time inference. Unfortunately, existing solutions that rely on time/space sharing of GPU resources are insufficient as the required swapping delays result in unacceptable frame drops and accuracy violations. We present model merging, a new memory management technique that exploits architectural similarities between edge vision models by judiciously sharing their layers (including weights) to reduce workload memory costs and swapping delays. Our system, GEMEL, efficiently integrates merging into existing pipelines by (1) leveraging several guiding observations about per-model memory usage and inter-layer dependencies to quickly identify fruitful and accuracy-preserving merging configurations, and (2) altering edge inference schedules to maximize merging benefits. Experiments across diverse workloads reveal that GEMEL reduces memory usage by up to 60.7%, and improves overall accuracy by 8-39% relative to time/space sharing alone.
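To make the core idea concrete, the sketch below illustrates the general weight-sharing pattern the paper builds on: two co-resident vision models reuse the same early layers (weights included) so that the shared block occupies GPU memory only once. This is a minimal, hypothetical PyTorch example, not GEMEL's implementation; all class and variable names are invented for illustration.

```python
# Minimal sketch (not GEMEL's code): two task models share their early
# convolutional layers -- weights included -- so the shared block is
# stored in GPU memory only once. Names below are hypothetical.
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Common early layers reused by both task models."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)

class TaskModel(nn.Module):
    """Per-task model: shared backbone plus its own task-specific head."""
    def __init__(self, backbone, num_classes):
        super().__init__()
        self.backbone = backbone  # same object => same weights in memory
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

backbone = SharedBackbone()
model_a = TaskModel(backbone, num_classes=20)  # e.g., one camera's query
model_b = TaskModel(backbone, num_classes=5)   # e.g., another query

# The backbone parameters appear in both models but are stored once.
shared_ptrs = {p.data_ptr() for p in model_a.backbone.parameters()}
assert all(p.data_ptr() in shared_ptrs for p in model_b.backbone.parameters())

x = torch.randn(1, 3, 224, 224)
print(model_a(x).shape, model_b(x).shape)  # torch.Size([1, 20]) torch.Size([1, 5])
```

Naively forcing layers to share weights can degrade accuracy, which is why, per the abstract, GEMEL searches for accuracy-preserving merging configurations rather than merging every architecturally identical layer.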

Authors (8)
  1. Arthi Padmanabhan (4 papers)
  2. Neil Agarwal (4 papers)
  3. Anand Iyer (9 papers)
  4. Ganesh Ananthanarayanan (14 papers)
  5. Yuanchao Shu (14 papers)
  6. Nikolaos Karianakis (10 papers)
  7. Guoqing Harry Xu (7 papers)
  8. Ravi Netravali (22 papers)
Citations (44)
