Power trade-off between on-device AR inference and offloading
For smartphone augmented reality applications that run deep neural network analytics on every camera frame, determine whether executing inference locally on the mobile GPU or offloading it to edge GPU servers minimizes the mobile device's battery consumption, explicitly accounting for the wireless network interface energy spent uploading camera frames.
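A minimal back-of-the-envelope sketch of this trade-off, written in Python under stated assumptions: every power, timing, and throughput constant below is illustrative and does not come from the paper. It compares per-frame energy for on-device GPU inference against the energy to transmit a frame plus idle radio energy while waiting for the edge result; which side wins depends entirely on the assumed constants, which is exactly the open question.

    # Hypothetical per-frame energy comparison: on-device inference vs. edge offload.
    # All constants are illustrative assumptions, not measurements from the paper.

    MOBILE_GPU_POWER_W = 4.0      # assumed mobile GPU draw during DNN inference
    GPU_INFERENCE_TIME_S = 0.030  # assumed per-frame inference latency on device

    RADIO_TX_POWER_W = 1.5        # assumed radio power while uploading a frame
    RADIO_IDLE_POWER_W = 0.3      # assumed radio power while awaiting the result
    FRAME_SIZE_BITS = 8e6         # assumed compressed camera-frame size (~1 MB)
    UPLINK_BPS = 50e6             # assumed uplink throughput
    EDGE_WAIT_S = 0.015           # assumed edge inference + downlink result time

    def energy_local() -> float:
        """Energy (J) to run the DNN on the mobile GPU for one frame."""
        return MOBILE_GPU_POWER_W * GPU_INFERENCE_TIME_S

    def energy_offload() -> float:
        """Energy (J) to upload one frame, then idle until the edge replies."""
        upload_time_s = FRAME_SIZE_BITS / UPLINK_BPS
        return RADIO_TX_POWER_W * upload_time_s + RADIO_IDLE_POWER_W * EDGE_WAIT_S

    if __name__ == "__main__":
        e_local, e_offload = energy_local(), energy_offload()
        print(f"local:   {e_local * 1e3:.1f} mJ/frame")
        print(f"offload: {e_offload * 1e3:.1f} mJ/frame")
        print("offloading wins" if e_offload < e_local else "on-device wins")

With these particular assumed numbers, on-device inference uses less energy (120 mJ vs. roughly 245 mJ per frame), but modest changes to uplink throughput, frame compression, or GPU power flip the outcome, illustrating why the comparison is unresolved without a measured power model.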
References
"From a power perspective, it is unclear which approach may consume more battery in the mobile device."
— "Automated PMC-based Power Modeling Methodology for Modern Mobile GPUs" (arXiv:2408.04886, Dash et al., 9 Aug 2024), Section 6.2