
The UCR Time Series Archive (1810.07758v2)

Published 17 Oct 2018 in cs.LG and stat.ML

Abstract: The UCR Time Series Archive, introduced in 2002, has become an important resource in the time series data mining community, with at least one thousand published papers making use of at least one data set from the archive. The original incarnation of the archive had sixteen data sets, but since that time it has gone through periodic expansions. The previous expansion took place in the summer of 2015, when the archive grew from 45 to 85 data sets. This paper introduces and focuses on the new expansion from 85 to 128 data sets. Beyond expanding this valuable resource, this paper offers pragmatic advice to anyone who may wish to evaluate a new algorithm on the archive. Finally, this paper makes a novel and yet actionable claim: of the hundreds of papers that show an improvement over the standard baseline (1-nearest neighbor classification), a large fraction may be mis-attributing the reasons for their improvement. Moreover, they may have been able to achieve the same improvement with a much simpler modification, requiring just a single line of code.

Citations (733)

Summary

  • The paper expands the archive from 85 to 128 datasets, significantly enhancing benchmarking for time series classification research.
  • The paper reveals that simple modifications, such as adjusting DTW warping windows, can achieve performance gains comparable to complex models.
  • The paper outlines best practices in data splitting, evaluation, and statistical testing to ensure reproducible and reliable research outcomes.

Overview of the UCR Time Series Archive Expansion

The paper "The UCR Time Series Archive" provides a detailed account of the expansion and evolution of the UCR Time Series Archive, an invaluable resource in the domain of time series data mining. The archive, which originated with 16 data sets and has grown to 128 in its latest update, serves as a foundational benchmark for research, having been used in at least one thousand published papers.

Key Contributions

The archive's most recent expansion, from 85 to 128 data sets, addresses several gaps and criticisms identified by the research community. The authors have integrated new data sets to reflect modern needs, including longer and variable-length sequences, multivariate series, and enhanced metadata about data provenance.
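For readers who want to experiment with the expanded archive, a minimal loading sketch follows. It assumes the post-2018 layout, in which each dataset folder contains tab-separated `<Name>_TRAIN.tsv` and `<Name>_TEST.tsv` files with the class label in the first column; variable-length datasets require extra handling and are not covered here.

```python
import numpy as np

def load_ucr_split(path):
    """Load one fixed-length UCR split (.tsv): label in column 0, series after."""
    data = np.loadtxt(path, delimiter="\t")
    labels = data[:, 0].astype(int)
    series = data[:, 1:]
    return series, labels

# Hypothetical usage (path depends on where the archive was extracted):
# X_train, y_train = load_ucr_split("UCRArchive_2018/GunPoint/GunPoint_TRAIN.tsv")
```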

Novel Insights and Claims

A notable discussion within the paper concerns the potential overestimation of algorithm improvements over the 1-NN baseline. The authors argue that many published enhancements could have been achieved with far simpler modifications, such as adjusting the warping window in Dynamic Time Warping (DTW). They contend that a change as small as a single line of code can match the gains reported by much more elaborate methods, challenging the perceived complexity of some time series methodologies.
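To illustrate how small this modification is, here is a minimal sketch (not the authors' code) of 1-NN classification under DTW with a Sakoe-Chiba warping window; moving from unconstrained DTW to a constrained one amounts to changing a single argument:

```python
import numpy as np

def dtw_distance(a, b, window=None):
    """DTW distance between 1-D series a and b.
    `window` is the Sakoe-Chiba band half-width; None means unconstrained."""
    n, m = len(a), len(b)
    w = max(n, m) if window is None else max(window, abs(n - m))
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(np.sqrt(cost[n, m]))

def one_nn_classify(query, train_X, train_y, window=None):
    """Predict the label of `query` from its nearest training series under DTW."""
    dists = [dtw_distance(query, x, window) for x in train_X]
    return train_y[int(np.argmin(dists))]

# The "single line of code" change is the choice of `window` here:
# one_nn_classify(q, X, y)            # unconstrained DTW
# one_nn_classify(q, X, y, window=5)  # constrained DTW
```

In practice the window width would be tuned on the training data; the point of the paper's claim is that this tuning alone often accounts for the reported improvements.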

Methodological Best Practices

The authors offer comprehensive guidance on utilizing the UCR Archive effectively for classification experiments:

  • Data Splitting and Evaluation: Using the archive's predefined train/test splits ensures reproducibility and provides a common baseline; researchers are also encouraged to report K-fold cross-validation results for a more comprehensive evaluation.
  • Significance Testing: Proper statistical methodology is advocated, recommending the Wilcoxon signed-rank test for pairwise comparisons and warning against cherry-picked subsets of data sets when comparing algorithms.
  • Mis-attribution Concerns: Illustrated through examples, the paper cautions against attributing performance gains to complex models without rigorous ablation studies to verify the true source of improvement.
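The significance-testing advice above can be sketched in a few lines using SciPy's `wilcoxon`; the per-dataset accuracies below are illustrative placeholders, not results from the archive:

```python
# Paired per-dataset accuracies for a baseline and a proposed method
# (illustrative values only).
from scipy.stats import wilcoxon

acc_baseline = [0.71, 0.83, 0.64, 0.90, 0.77, 0.69, 0.88, 0.75]  # e.g. 1-NN DTW
acc_proposed = [0.74, 0.85, 0.63, 0.92, 0.80, 0.70, 0.90, 0.78]  # e.g. new method

stat, p_value = wilcoxon(acc_baseline, acc_proposed)
if p_value < 0.05:
    print(f"Difference is significant (p = {p_value:.3f})")
else:
    print(f"No significant difference (p = {p_value:.3f})")
```

Note that a significant difference still says nothing about *why* the proposed method wins; that question requires the ablation studies the paper calls for.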

Criticisms Addressed

The paper acknowledges several criticisms regarding prior iterations of the archive, such as unrealistic assumptions about data, the need for clearer data provenance, and the small size of some data sets. Each criticism has been methodically addressed through the latest expansion and documentation improvements.

Implications and Future Directions

The archive's expansion paves the way for more sophisticated benchmarking and algorithmic evaluation in time series analysis. It facilitates testing on a broader diversity of problems, including those with high variability in sequence length and complex multivariate interactions.

Looking forward, continued collection of real-world data sets, adherence to open access principles, and community engagement are essential for further enhancing the archive's utility. Moreover, fostering research that precisely attributes algorithmic improvements could drive more meaningful advances in the field.

Conclusion

The UCR Time Series Archive remains a cornerstone for time series classification research. The thoughtful expansion and the accompanying advice on best practices set a strong precedent for future developments, ensuring the archive's relevance and applicability in advancing data mining research.