- The paper presents a novel hybrid model combining CNNs and LSTMs to extract spatial and temporal gait features from smartphone inertial data.
- The method achieves over 93.5% identification and 93.7% authentication accuracy across two datasets with 118 subjects under unconstrained conditions.
- The study highlights the feasibility of using everyday mobile devices for unobtrusive and resilient biometric authentication in dynamic environments.
Deep Learning-Based Gait Recognition Using Smartphones in the Wild
This paper presents a study of gait recognition via deep learning, using smartphone-integrated inertial sensors under unconstrained conditions, referred to as "in the wild." As a biometric, gait offers advantages such as unobtrusiveness and resilience against concealment. Smartphones are widely accessible and already contain such sensors, making them a practical medium for gait data acquisition.
The proposed method utilizes a hybrid deep-learning architecture combining Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to extract and model gait features from freely collected inertial data. This approach is distinct from traditional methods, which often depend on structured and controlled data-collection environments.
Key Insights and Results:
- Novel Architecture: A novel hybrid neural network combining CNNs with one-dimensional kernels and Long Short-Term Memory (LSTM) networks is introduced. This architecture efficiently extracts spatial and temporal features from six-dimensional time-series data (comprising accelerometer and gyroscope readings along x, y, and z axes).
- Data Handling and Experimentation: The paper evaluates the method on two datasets involving 118 subjects. The experiments focus on two core tasks: person identification and authentication via gait recognition. Results show identification and authentication accuracies exceeding 93.5% and 93.7%, respectively, demonstrating the method's effectiveness.
- Data Collection without Constraints: Unlike controlled-environment approaches, this method assumes the smartphone's inertial data are captured without restrictions on user location, time, or walking conditions. A Fully Convolutional Neural Network (FCNN) segments the raw data into walking and non-walking sessions.
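To make the hybrid architecture concrete, the following is a minimal NumPy sketch of the idea described above: one-dimensional convolutional kernels slide along the time axis of the six-channel inertial signal (accelerometer and gyroscope, x/y/z) to extract spatial features, and an LSTM then aggregates those features over time before an identification head. All sizes (window length, filter counts, hidden size) are hypothetical illustrations, not the paper's exact configuration, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """One-dimensional convolution over a (T, C) time series.
    kernels: (F, K, C) filters of width K spanning all C channels."""
    T, C = x.shape
    F, K, _ = kernels.shape
    out = np.zeros((T - K + 1, F))
    for f in range(F):
        for t in range(T - K + 1):
            out[t, f] = max(np.sum(x[t:t + K] * kernels[f]), 0.0)  # ReLU
    return out

def lstm_last_hidden(x, Wx, Wh, b, H):
    """Minimal LSTM forward pass; returns the final hidden state."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b            # all four gates at once, (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)            # cell-state update
        h = o * np.tanh(c)
    return h

# Hypothetical sizes: 128-sample window, 6 channels, 16 filters of width 9,
# 32 hidden units, 118 subjects (matching the datasets' subject count).
T, C, F, K, H, n_subjects = 128, 6, 16, 9, 32, 118
window = rng.standard_normal((T, C))                       # one gait window
feat = conv1d_relu(window, rng.standard_normal((F, K, C)) * 0.1)  # spatial
h = lstm_last_hidden(feat,                                 # temporal
                     rng.standard_normal((F, 4 * H)) * 0.1,
                     rng.standard_normal((H, 4 * H)) * 0.1,
                     np.zeros(4 * H), H)
logits = h @ (rng.standard_normal((H, n_subjects)) * 0.1)  # identification head
print(logits.shape)  # one score per enrolled subject
```

In practice such a model would be built and trained in a deep-learning framework; the sketch only shows how the 1D-convolution/LSTM split maps onto spatial versus temporal feature extraction.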
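The segmentation step can also be illustrated. The sketch below assumes a segmenter (such as the FCNN mentioned above) has already produced per-sample walking probabilities, and shows the follow-on bookkeeping: thresholding those probabilities and extracting contiguous walking sessions, discarding segments too short to be useful. The threshold, minimum length, and helper name are hypothetical, not taken from the paper.

```python
import numpy as np

def extract_walking_sessions(probs, threshold=0.5, min_len=100):
    """Convert per-sample walking probabilities into (start, end) index
    pairs of contiguous walking sessions, dropping segments shorter
    than min_len samples."""
    mask = np.asarray(probs) >= threshold
    sessions, start = [], None
    for i, walking in enumerate(mask):
        if walking and start is None:
            start = i                      # session begins
        elif not walking and start is not None:
            if i - start >= min_len:       # keep only long-enough sessions
                sessions.append((start, i))
            start = None
    if start is not None and len(mask) - start >= min_len:
        sessions.append((start, len(mask)))  # session runs to end of signal
    return sessions

# Toy probabilities: rest, walk, brief noise, walk again.
probs = np.concatenate([np.full(300, 0.1), np.full(500, 0.9),
                        np.full(50, 0.2), np.full(400, 0.8)])
print(extract_walking_sessions(probs))  # [(300, 800), (850, 1250)]
```

Only the walking sessions would then be windowed and fed to the recognition network, which is what lets the pipeline operate on unconstrained, continuously recorded data.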
Implications and Future Directions:
The paper confirms the effectiveness of deep learning in processing unconstrained gait data, signifying an advancement in both theoretical understanding and practical applications of gait biometrics in everyday environments. The ability to unobtrusively and accurately authenticate individuals using ubiquitous mobile hardware presents significant potential for enhancing personal security applications, health monitoring, and secure access control systems.
Future research could expand on the presented architecture to incorporate multi-source data integration, adapt to varying gait patterns due to physical condition changes, or scale across larger, more diverse populations. Further exploration into real-time processing capabilities on mobile devices could bridge the gap between research environments and practical, everyday usability of gait-based biometric systems.