
NAOMI: Non-Autoregressive Multiresolution Sequence Imputation

Published 30 Jan 2019 in cs.LG and stat.ML (arXiv:1901.10946v3)

Abstract: Missing value imputation is a fundamental problem in spatiotemporal modeling, from motion tracking to the dynamics of physical systems. Deep autoregressive models suffer from error propagation, which becomes catastrophic for imputing long-range sequences. In this paper, we take a non-autoregressive approach and propose a novel deep generative model: Non-AutOregressive Multiresolution Imputation (NAOMI), to impute long-range sequences given arbitrary missing patterns. NAOMI exploits the multiresolution structure of spatiotemporal data and decodes recursively from coarse to fine-grained resolutions using a divide-and-conquer strategy. We further enhance our model with adversarial training. When evaluated extensively on benchmark datasets from systems of both deterministic and stochastic dynamics, NAOMI demonstrates significant improvement in imputation accuracy (reducing average prediction error by 60% compared to autoregressive counterparts) and generalization for long-range sequences.

Citations (101)

Summary

Non-Autoregressive Multiresolution Imputation (NAOMI) for Long-Range Sequence Imputation

The paper "NAOMI: Non-Autoregressive Multiresolution Sequence Imputation" introduces a novel approach to missing value imputation in spatiotemporal datasets. Traditional deep autoregressive models suffer from error propagation, particularly when dealing with long-range sequences. This paper proposes an alternative methodology built on a non-autoregressive model framework, specifically targeting the imputation of long-range sequences with arbitrary missing patterns.

NAOMI, which stands for Non-AutOregressive Multiresolution Imputation, employs a deep generative model that capitalizes on the inherent multiresolution properties of spatiotemporal data. The model adopts a divide-and-conquer strategy, recursively decoding sequences from coarse to fine-grained resolutions. This recursive approach mitigates the compounding error issue characteristic of autoregressive models. Furthermore, the paper enhances NAOMI's performance through adversarial training, which has been shown to improve the robustness of generative models.
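The coarse-to-fine decoding order described above can be sketched as follows. This is a minimal illustration of the divide-and-conquer recursion only, not the paper's actual decoder: it assumes two observed anchor steps and shows in what order the missing steps between them would be imputed (midpoint first, then each half at finer resolutions), whereas in NAOMI each step would be filled in by a learned generative model.

```python
def decoding_order(left, right, order=None):
    """Return the order in which time steps between two known anchors
    are imputed under a midpoint divide-and-conquer strategy."""
    if order is None:
        order = []
    if right - left <= 1:
        return order               # no missing step between the anchors
    mid = (left + right) // 2      # coarsest resolution: fill the midpoint
    order.append(mid)
    decoding_order(left, mid, order)   # recurse into each half at finer
    decoding_order(mid, right, order)  # resolutions
    return order

# With anchors observed at t=0 and t=8, the midpoint t=4 is imputed first,
# then t=2 and t=6 (via the recursion), then the remaining odd steps.
print(decoding_order(0, 8))  # [4, 2, 1, 3, 6, 5, 7]
```

Because each imputed step is conditioned on anchors on both sides rather than only on the past, an early error affects at most one half of the remaining gap, which is why this ordering avoids the left-to-right error accumulation of autoregressive decoding.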

The empirical validation of NAOMI is conducted through extensive experiments on benchmark datasets involving systems with both deterministic and stochastic dynamics. These experiments highlight significant improvements in the accuracy of imputation tasks, with NAOMI reducing average error rates by approximately 60% compared to traditional autoregressive models. Such results underline the model's enhanced capability in generalizing over long-range sequences without the catastrophic error propagation associated with autoregressive methodologies.

From a theoretical perspective, the success of NAOMI suggests that non-autoregressive models can effectively address sequence imputation, particularly when data exhibit complex and arbitrary missing patterns. Practically, the implications of adopting NAOMI could extend to various domains, including motion tracking, weather forecasting, and other fields reliant upon spatiotemporal data integrity and accuracy.

Looking forward, the development of non-autoregressive approaches like NAOMI paves the way for further research in AI, where models could be further optimized for increased accuracy and efficiency. Future studies might explore the integration of more advanced adversarial training schemes or examine the adaptability of NAOMI across a broader spectrum of spatiotemporal datasets. The potential to refine these models for real-time applications also remains a promising avenue, which could significantly impact AI applications within dynamic environments.
