Energy Efficient VM Placement in a Heterogeneous Fog Computing Architecture (2203.14178v1)

Published 27 Mar 2022 in cs.NI and eess.SP

Abstract: Recent years have witnessed remarkable development in communication and computing systems, driven mainly by the increasing demands of data- and processing-intensive applications such as virtual reality, machine-to-machine (M2M) communication, connected vehicles, and IoT services. Massive amounts of data will be collected by various mobile and fixed terminals and will need to be processed in order to extract knowledge from them. Traditionally, a centralized approach is taken, processing the collected data in large data centers connected to a core network. However, given the scale of Internet-connected things, transporting raw data all the way to the core network is costly in terms of power consumption, delay, and privacy. This has compelled researchers to propose decentralized computing paradigms, such as fog computing, that process collected data at the network edge, close to the terminals and users. In this paper, we study, in a Passive Optical Network (PON)-based collaborative fog computing system, the impact of the heterogeneity of the fog units' capacity and energy efficiency on the overall energy efficiency of the fog system. We optimize virtual machine (VM) placement in a fog system with three fog cells, formulating the problem as a mixed integer linear programming (MILP) model with the objective of minimizing the networking and processing power consumption of the fog system. The results indicate that, in our proposed architecture, processing power consumption is the crucial element in achieving energy-efficient VM placement.
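
To give a concrete feel for the kind of MILP the abstract describes, here is a minimal sketch in Python using PuLP. The three-cell topology matches the paper, but the VM demands, capacities, and the per-cell processing and networking power coefficients are illustrative assumptions, not values from the paper's model.

```python
# Minimal MILP sketch of energy-aware VM placement across fog cells.
# Assumed (not from the paper): per-VM CPU demand, per-cell capacity,
# heterogeneous per-unit processing power, and a simple per-VM
# networking power term standing in for PON transport cost.
from pulp import (LpProblem, LpMinimize, LpVariable, LpBinary,
                  lpSum, PULP_CBC_CMD)

vms = [f"vm{i}" for i in range(6)]
cells = ["cell1", "cell2", "cell3"]

demand = {v: 2 for v in vms}                              # CPU units per VM
capacity = {"cell1": 8, "cell2": 6, "cell3": 4}           # CPU units per cell
proc_power = {"cell1": 5.0, "cell2": 7.0, "cell3": 9.0}   # W per CPU unit
net_power = {"cell1": 1.0, "cell2": 2.0, "cell3": 3.0}    # W per hosted VM

prob = LpProblem("fog_vm_placement", LpMinimize)

# x[v][c] = 1 iff VM v is placed on fog cell c
x = LpVariable.dicts("x", (vms, cells), cat=LpBinary)

# Objective: total processing plus networking power consumption
prob += lpSum(x[v][c] * (demand[v] * proc_power[c] + net_power[c])
              for v in vms for c in cells)

# Each VM is placed on exactly one cell
for v in vms:
    prob += lpSum(x[v][c] for c in cells) == 1

# Cell processing capacity is not exceeded
for c in cells:
    prob += lpSum(demand[v] * x[v][c] for v in vms) <= capacity[c]

prob.solve(PULP_CBC_CMD(msg=False))
for v in vms:
    for c in cells:
        if x[v][c].value() > 0.5:
            print(v, "->", c)
```

With these assumed coefficients the solver fills the most power-efficient cell (cell1) first, which mirrors the paper's observation that processing power dominates the placement decision when fog units have heterogeneous energy efficiency.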

Citations (1)
