A probabilistic database approach to autoencoder-based data cleaning (2106.09764v2)

Published 17 Jun 2021 in cs.DB, cs.AI, and cs.LG

Abstract: Data quality problems are a major threat in data science. In this paper, we propose a data-cleaning autoencoder capable of near-automatic data quality improvement. It learns the structure and dependencies in the data and uses them as evidence to identify and correct doubtful values. We apply a probabilistic database approach to represent weak and strong evidence for attribute value repairs. A theoretical framework is provided, and experiments show that the method can remove significant amounts of noise (i.e., data quality problems) from categorical and numeric probabilistic data. Our method does not require clean data. We do, however, show that manually cleaning a small fraction of the data significantly improves performance.
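
The core idea in the abstract is that an autoencoder trained on the (dirty) data learns inter-attribute dependencies, and large disagreements between an observed cell and its reconstruction serve as evidence that the cell is doubtful and may need repair. The PyTorch sketch below illustrates that reconstruction-based repair loop on a toy mixed categorical/numeric table. It is a hypothetical illustration only: the column layout, noise model, threshold, and repair rule are assumptions, and the paper's probabilistic-database representation of weak and strong evidence is not reproduced here.

```python
# Hypothetical sketch of reconstruction-based cell repair with an autoencoder.
# NOT the authors' exact method: their approach additionally represents weak
# and strong repair evidence in a probabilistic database, which is omitted.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy dataset: one categorical attribute (3 classes, one-hot encoded) and one
# numeric attribute that depends on the category, so the autoencoder can learn
# the dependency and use it as evidence for repairs.
n = 1000
cat = torch.randint(0, 3, (n,))
num = cat.float() * 2.0 + 0.1 * torch.randn(n)          # numeric depends on cat
x = torch.cat([nn.functional.one_hot(cat, 3).float(),
               num.unsqueeze(1)], dim=1)                 # shape (n, 4)

# Inject noise into 5% of the numeric cells (simulated data quality problems).
noisy = x.clone()
idx = torch.randperm(n)[: n // 20]
noisy[idx, 3] += 5.0

# Small undercomplete autoencoder: the 2-dim bottleneck forces it to learn the
# cat -> num dependency instead of copying its input verbatim.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2),
                      nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(500):                                     # train on the dirty data
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), noisy)
    loss.backward()
    opt.step()

# Repair rule (illustrative): if a cell's reconstruction error is large,
# treat the reconstruction as the more plausible value and substitute it.
with torch.no_grad():
    recon = model(noisy)
err = (recon[:, 3] - noisy[:, 3]).abs()
threshold = err.mean() + 2 * err.std()                   # hypothetical cutoff
repaired = noisy.clone()
repaired[err > threshold, 3] = recon[err > threshold, 3]

print(f"flagged {int((err > threshold).sum())} cells; "
      f"mean error vs. ground truth before: {(noisy[:, 3] - x[:, 3]).abs().mean():.3f}, "
      f"after: {(repaired[:, 3] - x[:, 3]).abs().mean():.3f}")
```

In this sketch the mean-plus-two-standard-deviations cutoff is an arbitrary stand-in for the paper's evidence-based repair decision; the point is only that reconstruction error localizes the injected noise without any access to clean training data.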

Authors (4)
  1. R. R. Mauritz (1 paper)
  2. F. P. J. Nijweide (1 paper)
  3. J. Goseling (1 paper)
  4. M. van Keulen (1 paper)
Citations (1)
