Minimal Algorithmic Information Loss Methods for Dimension Reduction, Feature Selection and Network Sparsification (1802.05843v11)
Abstract: We present a novel, domain-agnostic, model-independent, unsupervised, and universally applicable approach to data summarization. Specifically, we address the challenge of reducing dimensionality aspects of data, such as the number of edges in a network, while retaining the features of interest. For networks, these include crucial properties such as the degree distribution, clustering coefficient, edge betweenness, and degree and eigenvector centralities. Our approach outperforms state-of-the-art network reduction techniques, preserving these features better on average. Previous methods, grounded in statistics or classical information theory, are limited in their ability to capture more intricate patterns and features, particularly nonlinear patterns stemming from deterministic computable processes; moreover, they rely heavily on a priori feature selection and therefore demand constant supervision. Our findings demonstrate that the algorithms proposed in this study overcome these limitations while maintaining a time-efficient computational profile, in many instances matching or even surpassing the performance of established network reduction algorithms. Furthermore, our method extends to lossy compression of images and other two-dimensional data, highlighting its versatility and broad utility across domains.
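The core idea described in the abstract, deleting the elements of an object whose removal least perturbs an estimate of its algorithmic information content, can be illustrated with a minimal sketch. The sketch below uses zlib compressed length as a crude, computable stand-in for algorithmic complexity (the paper's actual estimators are richer than a general-purpose compressor); the function names `complexity` and `sparsify` and the greedy single-edge deletion loop are illustrative assumptions, not the paper's exact algorithm.

```python
import zlib

def complexity(adj):
    """Compressed length of the adjacency matrix, used here as a crude
    proxy for algorithmic complexity (an assumption of this sketch)."""
    n = len(adj)
    bits = "".join("1" if adj[i][j] else "0" for i in range(n) for j in range(n))
    return len(zlib.compress(bits.encode()))

def sparsify(adj, n_remove):
    """Greedily delete n_remove undirected edges, at each step removing the
    edge whose deletion changes the complexity estimate the least."""
    adj = [row[:] for row in adj]  # work on a copy
    n = len(adj)
    base = complexity(adj)
    for _ in range(n_remove):
        edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]]
        if not edges:
            break
        best = None
        for i, j in edges:
            # Tentatively remove the edge and measure the information loss.
            adj[i][j] = adj[j][i] = 0
            loss = abs(complexity(adj) - base)
            adj[i][j] = adj[j][i] = 1  # restore
            if best is None or loss < best[0]:
                best = (loss, i, j)
        _, i, j = best
        adj[i][j] = adj[j][i] = 0  # commit the least-disruptive deletion
        base = complexity(adj)
    return adj
```

The greedy loop costs one compression call per candidate edge per deletion, so this naive version is only practical for small graphs; it is meant to convey the selection criterion, not an efficient implementation.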