AMP4EC: Adaptive Model Partitioning Framework for Efficient Deep Learning Inference in Edge Computing Environments (2504.00407v2)

Published 1 Apr 2025 in cs.DC

Abstract: Edge computing facilitates deep learning in resource-constrained environments, but challenges such as resource heterogeneity and dynamic constraints persist. This paper introduces AMP4EC, an Adaptive Model Partitioning framework designed to optimize deep learning inference in edge environments through real-time resource monitoring, dynamic model partitioning, and adaptive task scheduling. AMP4EC features a resource-aware model partitioner that splits deep learning models based on device capabilities, a task scheduler that ensures efficient load balancing using a weighted scoring mechanism, and a Docker-based deployment environment for validation. Experimental results show up to a 78% reduction in latency and a 414% improvement in throughput compared to baseline methods. The framework achieves consistent performance with low scheduling overhead across varying resource profiles, demonstrating adaptability in high-resource (1 CPU, 1GB RAM) and low-resource (0.4 CPU, 512MB RAM) scenarios. These results highlight AMP4EC's scalability, efficiency, and robustness for real-world edge deployments, addressing the critical need for efficient distributed inference in dynamic, resource-constrained environments.
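
The abstract names two mechanisms without giving their formulas: a weighted scoring mechanism for load balancing and a partitioner that splits a model according to device capabilities. The sketch below is only a minimal illustration of those two ideas, not the paper's implementation; the device profiles, weight values, normalization constants, and the partition_layers helper are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    name: str
    cpu_cores: float      # e.g. 0.4 or 1.0 CPU, as in the paper's resource profiles
    memory_mb: float      # e.g. 512 or 1024 MB
    current_load: float   # fraction of capacity in use, 0.0 to 1.0

def device_score(dev, w_cpu=0.5, w_mem=0.3, w_load=0.2):
    """Weighted score combining normalized CPU, memory, and idle headroom.

    The weights and normalization constants are illustrative assumptions,
    not values taken from AMP4EC.
    """
    cpu_term = dev.cpu_cores / 1.0        # normalize against a 1-CPU baseline
    mem_term = dev.memory_mb / 1024.0     # normalize against 1 GB
    idle_term = 1.0 - dev.current_load
    return w_cpu * cpu_term + w_mem * mem_term + w_load * idle_term

def partition_layers(num_layers, devices):
    """Split a sequential model into contiguous layer ranges, sized in
    proportion to each device's score (one simple partitioning policy)."""
    scores = [device_score(d) for d in devices]
    total = sum(scores)
    assignments, start = [], 0
    for i, (dev, s) in enumerate(zip(devices, scores)):
        # The last device absorbs the rounding remainder so every layer is assigned.
        count = num_layers - start if i == len(devices) - 1 else round(num_layers * s / total)
        assignments.append((dev.name, range(start, start + count)))
        start += count
    return assignments

if __name__ == "__main__":
    devices = [
        DeviceProfile("edge-high", cpu_cores=1.0, memory_mb=1024, current_load=0.2),
        DeviceProfile("edge-low",  cpu_cores=0.4, memory_mb=512,  current_load=0.1),
    ]
    for name, layers in partition_layers(num_layers=12, devices=devices):
        print(f"{name}: layers {layers.start}-{layers.stop - 1}")
```

With these assumed weights, the 1-CPU/1-GB device receives roughly two thirds of a 12-layer model and the 0.4-CPU/512-MB device the rest; the actual framework would recompute scores from real-time resource monitoring rather than static profiles.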
