
Minimal Algorithmic Information Loss Methods for Dimension Reduction, Feature Selection and Network Sparsification (1802.05843v11)

Published 16 Feb 2018 in cs.DS, cs.IT, math.IT, and physics.soc-ph

Abstract: We present a novel, domain-agnostic, model-independent, unsupervised, and universally applicable approach to data summarization. Specifically, we address the challenge of reducing dimensionality aspects of data, such as the number of edges in a network, while retaining essential features of interest, including key network properties such as degree distribution, clustering coefficient, edge betweenness, and degree and eigenvector centralities. Our approach outperforms state-of-the-art network reduction techniques, achieving an average improvement in feature preservation. Previous methods grounded in statistics or classical information theory are limited in their ability to capture more intricate patterns, particularly nonlinear patterns stemming from deterministic computable processes, and they rely heavily on a priori feature selection, demanding constant supervision. Our findings demonstrate that the algorithms proposed in this study overcome these limitations while remaining computationally time-efficient. In many instances, our approach not only matches but surpasses the performance of established network reduction algorithms. Furthermore, we extend our method to lossy compression of images and, more generally, any bi-dimensional data, highlighting its versatility and broad utility across domains.
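As a rough illustration of the sparsification idea described in the abstract (not the authors' implementation), the sketch below greedily deletes the edge whose removal least changes an algorithmic-complexity estimate of the network, which is the core of a minimal-information-loss selection strategy. Here a zlib compressed length of the adjacency matrix stands in for the paper's Block Decomposition Method estimator, and the names `complexity_proxy` and `sparsify` are hypothetical helpers introduced for this example.

```python
import itertools
import zlib


def complexity_proxy(adj):
    """Crude algorithmic-complexity proxy: compressed length of the
    flattened 0/1 adjacency matrix. A stand-in for a proper estimator
    such as the Block Decomposition Method used in the paper."""
    flat = bytes(bit for row in adj for bit in row)
    return len(zlib.compress(flat, 9))


def sparsify(adj, n_edges_to_remove):
    """Greedy minimal-information-loss sparsification: at each step,
    delete the edge whose removal perturbs the complexity estimate
    the least, so the most 'informative' edges survive."""
    adj = [row[:] for row in adj]  # work on a copy
    n = len(adj)
    for _ in range(n_edges_to_remove):
        base = complexity_proxy(adj)
        best_edge, best_delta = None, None
        for i, j in itertools.combinations(range(n), 2):
            if adj[i][j]:
                adj[i][j] = adj[j][i] = 0  # tentatively remove the edge
                delta = abs(complexity_proxy(adj) - base)
                adj[i][j] = adj[j][i] = 1  # restore it
                if best_delta is None or delta < best_delta:
                    best_edge, best_delta = (i, j), delta
        if best_edge is None:
            break  # no edges left to remove
        i, j = best_edge
        adj[i][j] = adj[j][i] = 0
    return adj
```

For example, `sparsify(adj, 5)` would drop the five edges judged least informative under the proxy; substituting a genuine BDM estimator for `complexity_proxy` would bring the sketch closer to the setup the paper evaluates.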

Citations (15)
