EdgeMA: Model Adaptation System for Real-Time Video Analytics on Edge Devices (2308.08717v1)
Abstract: Real-time video analytics on edge devices remains difficult when scenes change over time. Because edge devices are usually resource-constrained, edge deep neural networks (DNNs) have fewer weights and shallower architectures than general DNNs; as a result, they perform well only in limited scenarios and are sensitive to data drift. In this paper, we introduce EdgeMA, a practical and efficient video analytics system designed to adapt models to shifts in real-world video streams over time, addressing the data drift problem. EdgeMA extracts statistical texture features based on the gray-level co-occurrence matrix (GLCM) and uses a Random Forest classifier to detect domain shift. Moreover, we incorporate a model adaptation method based on importance weighting, specifically designed to update models to cope with label distribution shift. Through rigorous evaluation of EdgeMA on a real-world dataset, our results illustrate that EdgeMA significantly improves inference accuracy.
- Liang Wang
- Nan Zhang
- Xiaoyang Qu
- Jianzong Wang
- Jiguang Wan
- Guokuan Li
- Kaiyu Hu
- Guilin Jiang
- Jing Xiao
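As a rough illustration of the drift-detection idea the abstract describes, the sketch below computes GLCM-based statistical texture features for a grayscale frame using plain NumPy. The resulting feature vector could then be fed to a Random Forest classifier (e.g. scikit-learn's `RandomForestClassifier`) trained to recognize scene domains. The function name, quantization level, pixel offset, and the particular Haralick statistics chosen here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Return a small texture-feature vector (contrast, energy,
    homogeneity) from a gray-level co-occurrence matrix.

    img    : 2-D array of grayscale intensities
    levels : number of quantized gray levels (assumed value)
    dx, dy : pixel-pair offset defining co-occurrence (assumed value)
    """
    # Quantize intensities into `levels` discrete gray levels.
    q = (img.astype(np.float64) / (img.max() + 1e-9) * (levels - 1)).astype(int)

    # Count co-occurrences of gray levels at the given offset.
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability distribution

    # Standard Haralick-style statistics over the normalized GLCM.
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])
```

In a full pipeline, such feature vectors extracted from sampled frames would form the input to the Random Forest that flags when the video's domain has shifted, triggering model adaptation.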