
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence (1909.00560v2)

Published 2 Sep 2019 in cs.NI and cs.DC

Abstract: Along with the rapid developments in communication technologies and the surge in the use of mobile devices, a brand-new computation paradigm, Edge Computing, is surging in popularity. Meanwhile, AI applications are thriving with the breakthroughs in deep learning and the many improvements in hardware architectures. Billions of data bytes, generated at the network edge, put massive demands on data processing and structural optimization. Thus, there exists a strong demand to integrate Edge Computing and AI, which gives birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial Intelligence on Edge). The former focuses on providing more optimal solutions to key problems in Edge Computing with the help of popular and effective AI technologies while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on the edge. This paper provides insights into this new inter-disciplinary field from a broader perspective. It discusses the core concepts and the research road-map, which should provide the necessary background for potential future research initiatives in Edge Intelligence.


The research paper "Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence" explores the emerging integration of Edge Computing (EC) and AI, termed Edge Intelligence (EI). This synthesis is pivotal given the exponential growth of data generated at the network edge due to advancements in communication technologies and an increase in mobile device usage. The paper provides a structured division of EI into two categories: Intelligence-enabled Edge Computing (IEC) and Artificial Intelligence on Edge (AIE), along with a comprehensive research roadmap.

Core Ideas and Structure

The paper begins by highlighting the symbiotic relationship between EC and AI. It underscores the need to process voluminous data at the network edge, avoiding the network congestion that purely cloud-based computation would incur. EC shifts computation and communication resources closer to the user, reducing latency and response times. In parallel, advances in AI, particularly deep learning architectures and hardware improvements, provide the computational backbone needed for effective EI.

Divisions of Edge Intelligence

EI is methodically divided into:

  • AI for Edge (Intelligence-enabled Edge Computing): This aspect targets leveraging AI to solve complex issues in EC, enhancing its performance and efficiency. It explores various facets like wireless networking, service provisioning, and computation offloading, applying AI-driven optimization tools such as reinforcement learning and deep learning techniques.
  • AI on Edge (Artificial Intelligence on Edge): Focused on running AI models directly on edge devices, AIE addresses the complete lifecycle from model training to inference, emphasizing frameworks that ensure privacy, cost-effectiveness, and efficiency. Federated Learning is spotlighted as a pivotal framework that preserves data privacy by training models on decentralized data sources.

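The Federated Learning idea highlighted above can be sketched in miniature. The toy below (using NumPy and a least-squares model; the client data, learning rates, and round counts are all illustrative, not from the paper) runs FedAvg-style rounds in which each client trains locally on its own data and a server averages the resulting weights by dataset size, so raw data never leaves the clients.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: full-batch gradient steps on least squares."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """One FedAvg round: clients train locally; the server averages the
    returned weights, weighted by each client's dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy setup: three clients whose data share the true model w* = [2, -1].
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 30, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):
    w = fed_avg(w, clients)
# w converges toward true_w without any client sharing its raw (X, y).
```

The design choice worth noting is that only model weights cross the network; this is what makes the framework attractive for privacy-sensitive edge data.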
Implications and State of the Art

The practical implementation of Edge Intelligence has profound implications across various domains. In telecommunications, AI applications for wireless networking facilitate intelligent resource allocation, as evidenced by work on power control using Graph Neural Networks (GNNs). In computing resource management, Deep Reinforcement Learning (DRL) is used to optimize computation offloading strategies, enhancing the interplay between edge and cloud systems.
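
The DRL-based offloading idea can be illustrated with a deliberately simplified stand-in: a tabular Q-learning agent (rather than a deep network) that decides, per channel state, whether to execute a task locally or offload it to the edge server. The states, delays, and hyperparameters below are invented for the toy and do not come from the paper.

```python
import random

# States: channel quality levels; actions: 0 = execute locally, 1 = offload.
STATES, ACTIONS = range(3), (0, 1)
LOCAL_DELAY = 5.0                          # fixed local execution delay (assumed)
OFFLOAD_DELAY = {0: 9.0, 1: 4.0, 2: 2.0}   # transmission+remote delay per channel state

def step(state, action):
    delay = LOCAL_DELAY if action == 0 else OFFLOAD_DELAY[state]
    next_state = random.randrange(3)       # channel evolves randomly
    return -delay, next_state              # reward = negative latency

random.seed(0)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(20000):
    if random.random() < eps:              # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Learned rule: offload only when the channel is good enough to beat local execution.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

Real DRL offloading work replaces the table with a neural network over much larger state spaces (task sizes, queue lengths, battery levels), but the reward-driven decision structure is the same.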

The paper reviews the state of the art in several categories:

  • Wireless Networking and Computation Offloading: Leveraging AI technologies like DRL to optimize network resources and computation tasks, ensuring enhanced user experience through efficient data transmission and reduced delays.
  • Service Placement and Caching: AI is employed to strategically place and cache services, reducing latency and improving service accessibility, using approaches such as Multi-armed Bandit (MAB) algorithms.
  • Model Adaptation for AI on Edge: Efforts to compress model sizes and reduce computational loads using quantization, conditional computation, and other techniques make AI more feasible for resource-constrained edge devices.
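
The MAB-style caching decision can be sketched as an epsilon-greedy bandit choosing which single service to cache at the edge, with cache hits as rewards. The service popularities below are made up for the example; the paper's MAB formulations are more elaborate.

```python
import random

# Each arm = a candidate service to cache; reward = cache hit (1) or miss (0).
HIT_PROB = [0.2, 0.5, 0.8]   # unknown per-service popularity (assumed for the toy)

random.seed(1)
counts = [0] * 3             # pulls per arm
values = [0.0] * 3           # running mean reward per arm
eps = 0.1
for _ in range(5000):
    if random.random() < eps:
        arm = random.randrange(3)                        # explore
    else:
        arm = max(range(3), key=lambda a: values[a])     # exploit
    reward = 1.0 if random.random() < HIT_PROB[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

# The bandit converges on caching the most popular service.
best = max(range(3), key=lambda a: values[a])
```

The appeal for edge caching is that popularity need not be known in advance: the cache learns it online from hit/miss feedback alone.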

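Model adaptation via quantization can likewise be illustrated with a minimal post-training scheme: mapping float32 weights to 8-bit integers using an affine scale and zero-point. This is a generic textbook scheme sketched for illustration, not the specific method of any work the paper surveys.

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) 8-bit quantization of a float tensor."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(42)
w = rng.normal(scale=0.5, size=(256, 256)).astype(np.float32)
q, scale, zp = quantize_int8(w)

# The quantized tensor needs 4x less memory (1 byte vs 4 bytes per weight),
# at the cost of a per-weight error bounded by roughly half the scale.
err = np.abs(dequantize(q, scale, zp) - w).max()
```

On edge devices this trades a small accuracy loss for large savings in memory and, with integer-arithmetic kernels, in compute and energy.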
Research Roadmap and Challenges

The roadmap for EI presented in the paper methodically categorizes research into Topology, Content, and Service for IEC, and Model Adaptation, Framework Design, and Processor Acceleration for AIE. Challenges are identified, such as:

  • The complexities in model establishment due to constraints in optimization problems.
  • Algorithm deployment difficulties on resource-limited edge devices.
  • The balance between achieving optimal solutions and maintaining system efficiency.

Conclusion and Future Directions

The convergence of Edge Computing and AI into Edge Intelligence is portrayed as a multi-dimensional paradigm with broad research trajectories and open challenges. The paper suggests future directions such as refining coordination mechanisms among heterogeneous devices and devising more robust frameworks for model training and inference at the edge. These insights pave the way for advancing edge-centric AI technologies, fostering applications that are both performant and resource-aware.

Authors (6)
  1. Shuiguang Deng (45 papers)
  2. Hailiang Zhao (16 papers)
  3. Weijia Fang (1 paper)
  4. Jianwei Yin (71 papers)
  5. Schahram Dustdar (72 papers)
  6. Albert Y. Zomaya (50 papers)
Citations (560)