Learning-based Lossless Event Data Compression (2411.03010v1)
Abstract: Emerging event cameras acquire visual information by asynchronously detecting time-domain brightness changes at the pixel level and, unlike conventional cameras, offer high temporal resolution, very high dynamic range, low latency, and low power consumption. Given the huge volumes of data these sensors produce, efficient compression solutions are much needed. In this context, this paper presents a novel deep-learning-based lossless event data compression scheme built on octree partitioning and a learned hyperprior model. The proposed method arranges the event stream as a 3D volume and employs an octree structure for adaptive partitioning. A deep neural network-based entropy model, using a hyperprior, is then applied. Experimental results demonstrate that the proposed method outperforms traditional lossless data compression techniques in terms of compression ratio and bits per event.
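The adaptive octree partitioning of the 3D event volume mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes events are given as integer (x, y, t) coordinates inside a cubic volume, and the function name, recursion criteria, and minimum leaf size are all hypothetical.

```python
import numpy as np

def octree_partition(events, origin, size, min_size=2):
    """Recursively split a cubic (x, y, t) volume into octants.

    events:   (N, 3) integer array of event coordinates
    origin:   corner of the current cube
    size:     edge length of the current cube (power of two)
    min_size: stop subdividing at this edge length (hypothetical criterion)

    Returns a list of (origin, size) cubes that actually contain events;
    empty octants are pruned, which is what makes the partitioning adaptive.
    """
    if len(events) == 0:
        return []                      # empty octant: nothing to encode
    if size <= min_size:
        return [(tuple(int(c) for c in origin), size)]
    half = size // 2
    leaves = []
    # Visit the eight child octants along the x, y, and t axes.
    for dx in (0, half):
        for dy in (0, half):
            for dt in (0, half):
                child = np.asarray(origin) + (dx, dy, dt)
                inside = np.all((events >= child) & (events < child + half),
                                axis=1)
                leaves += octree_partition(events[inside], child, half,
                                           min_size)
    return leaves

# Example: three synthetic events in an 8x8x8 volume collapse to two
# occupied leaf cubes; the six empty top-level octants are never encoded.
ev = np.array([[0, 0, 0], [1, 1, 1], [7, 7, 7]])
leaves = octree_partition(ev, (0, 0, 0), 8)
```

In a compression pipeline, the occupancy pattern of each split (which of the eight children are non-empty) is what would be fed to the entropy coder, with the learned hyperprior model supplying the symbol probabilities.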