
CODA: Temporal Domain Generalization via Concept Drift Simulator (2310.01508v1)

Published 2 Oct 2023 in cs.LG and stat.ML

Abstract: In real-world applications, machine learning models often become obsolete due to shifts in the joint distribution arising from underlying temporal trends, a phenomenon known as "concept drift". Existing works propose model-specific strategies to achieve temporal generalization in the near-future domain. However, the diverse characteristics of real-world datasets necessitate customized prediction model architectures, creating a pressing need for a model-agnostic temporal domain generalization approach that maintains generality across data modalities and architectures. In this work, we address the concept drift problem from a data-centric perspective, sidestepping the interaction between data and model. Developing such a framework presents non-trivial challenges: (i) existing generative models struggle to generate out-of-distribution future data, and (ii) precisely capturing the temporal trend of the joint distribution along chronological source domains is computationally infeasible. To tackle these challenges, we propose the COncept Drift simulAtor (CODA) framework, which incorporates a predicted feature correlation matrix to simulate future data for model training. Specifically, CODA leverages feature correlations to represent data characteristics at specific time points, thereby circumventing the prohibitive computational cost. Experimental results demonstrate that training on CODA-generated data effectively achieves temporal domain generalization across different model architectures.
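The abstract only sketches the mechanism: summarize each chronological source domain by its feature correlation matrix, predict the correlation matrix of the next (unseen) time step, and use it to simulate future training data. The snippet below is a minimal illustration of that idea, not the paper's actual pipeline: it assumes a linear, entry-wise trend over past correlation matrices as the predictor and a zero-mean Gaussian as a stand-in data generator. Both are simplifications chosen for illustration, and the function names (`extrapolate_correlations`, `simulate_future_domain`) are hypothetical.

```python
import numpy as np

def correlation_matrix(X):
    """Pearson correlation matrix of a (n_samples, n_features) array."""
    return np.corrcoef(X, rowvar=False)

def extrapolate_correlations(corr_history):
    """Linearly extrapolate each correlation entry one step ahead.

    corr_history: chronologically ordered list of (d, d) correlation
    matrices from the source domains. An entry-wise least-squares trend
    stands in for whatever predictor CODA actually learns.
    """
    T = len(corr_history)
    stacked = np.stack(corr_history)                 # (T, d, d)
    t = np.arange(T)
    coeffs = np.polyfit(t, stacked.reshape(T, -1), 1)  # slope, intercept per entry
    pred = (np.array([T, 1.0]) @ coeffs).reshape(stacked.shape[1:])
    # Keep the result a valid correlation matrix: symmetric, in [-1, 1], unit diagonal.
    pred = np.clip((pred + pred.T) / 2, -1.0, 1.0)
    np.fill_diagonal(pred, 1.0)
    return pred

def simulate_future_domain(pred_corr, n_samples, rng=None):
    """Draw synthetic 'future' samples whose correlation matches pred_corr.

    Uses a zero-mean Gaussian as the generator; a full implementation would
    pair the predicted correlations with a learned generative model.
    """
    rng = np.random.default_rng(rng)
    # Project to the nearest PSD matrix in case extrapolation broke validity.
    w, v = np.linalg.eigh(pred_corr)
    cov = (v * np.clip(w, 1e-6, None)) @ v.T
    return rng.multivariate_normal(np.zeros(len(cov)), cov, size=n_samples)

# Toy usage: three source domains whose feature correlation drifts upward.
rng = np.random.default_rng(0)
domains = []
for rho in (0.2, 0.4, 0.6):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    domains.append(rng.multivariate_normal([0.0, 0.0], cov, size=500))

history = [correlation_matrix(X) for X in domains]
pred = extrapolate_correlations(history)    # off-diagonal near 0.8
X_future = simulate_future_domain(pred, 500, rng=1)
print(np.round(pred, 2))
```

The key design point the abstract emphasizes is that the correlation matrix is a compact surrogate for the joint distribution at each time step, so the trend is fit over O(d^2) scalar series rather than over the full distribution itself.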
