Discovery of Shifting Patterns in Sequence Classification (1712.07203v1)

Published 19 Dec 2017 in cs.LG and stat.ML

Abstract: In this paper, we investigate the multi-variate sequence classification problem from a multi-instance learning perspective. Real-world sequential data commonly show discriminative patterns only at specific time periods. For instance, we can identify a cropland during its growing season, but it looks similar to barren land after harvest or before planting. Moreover, even within the same class, the discriminative patterns can appear in different periods of the sequential data. Because of this property, these discriminative patterns are also referred to as shifting patterns. Shifting patterns severely degrade the performance of traditional classification methods when training data are insufficient. We propose a novel sequence classification method that automatically mines shifting patterns from multi-variate sequences. The method employs a multi-instance learning approach to detect shifting patterns while also modeling the temporal relationships within each multi-instance bag with an LSTM, which further improves classification performance. We extensively evaluate our method on two real-world applications: cropland mapping and affective state recognition. The experiments demonstrate the superiority of our proposed method in sequence classification performance and in detecting discriminative shifting patterns.
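The core idea lends itself to a compact implementation: treat sliding windows of the sequence as instances in a multi-instance bag, encode each window with a shared LSTM, and pool the instance scores so the bag-level prediction follows its most discriminative window. Below is a minimal PyTorch sketch of that general idea, not the authors' code; the window size, stride, max-pooling aggregation, and all class and variable names are illustrative assumptions.

```python
# A minimal sketch (not the paper's implementation) of multi-instance
# sequence classification with an LSTM instance encoder, in PyTorch.
import torch
import torch.nn as nn

class MILSequenceClassifier(nn.Module):
    """Scores sliding-window 'instances' of a multi-variate sequence with a
    shared LSTM, then max-pools instance scores into a bag-level logit, so
    the bag is positive if any window contains the discriminative pattern."""

    def __init__(self, n_features, hidden_size=64, window=8, stride=4):
        super().__init__()
        self.window, self.stride = window, stride
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, n_features) -- the full multi-variate sequence.
        # Slice it into overlapping windows; each window is one MIL instance.
        windows = x.unfold(1, self.window, self.stride)  # (B, n_inst, F, W)
        windows = windows.permute(0, 1, 3, 2)            # (B, n_inst, W, F)
        b, n_inst, w, f = windows.shape
        # Encode every instance with the shared LSTM; keep the final hidden state.
        _, (h, _) = self.lstm(windows.reshape(b * n_inst, w, f))
        inst_scores = self.score(h[-1]).view(b, n_inst)  # (B, n_inst)
        # Max over instances: the bag label follows its strongest window,
        # which also localizes where the shifting pattern occurs.
        bag_logit, _ = inst_scores.max(dim=1)
        return bag_logit, inst_scores

model = MILSequenceClassifier(n_features=6)
seq = torch.randn(2, 46, 6)          # e.g. 46 time steps of 6 spectral bands
bag_logit, inst_scores = model(seq)  # inst_scores locate the shifting pattern
```

Max-pooling is only one choice of MIL aggregation; a softer pooling (e.g. log-sum-exp or attention over instances) would keep the same bag-level interface while spreading gradient across windows.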
