Integration of TinyML and LargeML: A Survey of 6G and Beyond (2505.15854v1)

Published 20 May 2025 in cs.NI, cs.AI, cs.ET, cs.LG, and cs.MA

Abstract: The transition from 5G networks to 6G highlights a significant demand for ML. Deep learning models, in particular, have seen wide application in mobile networking and communications to support advanced services in emerging wireless environments, such as smart healthcare, smart grids, autonomous vehicles, aerial platforms, digital twins, and the metaverse. The rapid expansion of Internet-of-Things (IoT) devices, many with limited computational capabilities, has accelerated the development of tiny machine learning (TinyML) and resource-efficient ML approaches for cost-effective services. However, the deployment of large-scale machine learning (LargeML) solutions requires major computing resources and complex management strategies to support extensive IoT services and ML-generated content applications. Consequently, the integration of TinyML and LargeML is projected as a promising approach for future seamless connectivity and efficient resource management. Although the integration of TinyML and LargeML shows abundant potential, several challenges persist, including performance optimization, practical deployment strategies, effective resource management, and security considerations. In this survey, we review and analyze the latest research aimed at enabling the integration of TinyML and LargeML models for the realization of smart services and applications in future 6G networks and beyond. The paper concludes by outlining critical challenges and identifying future research directions for the holistic integration of TinyML and LargeML in next-generation wireless networks.

Authors (7)
  1. Thai-Hoc Vu (5 papers)
  2. Ngo Hoang Tu (3 papers)
  3. Thien Huynh-The (23 papers)
  4. Kyungchun Lee (15 papers)
  5. Sunghwan Kim (28 papers)
  6. Miroslav Voznak (7 papers)
  7. Quoc-Viet Pham (66 papers)