
OFF-ApexNet on Micro-expression Recognition System (1805.08699v1)

Published 10 May 2018 in cs.CV and cs.LG

Abstract: When a person attempts to conceal an emotion, the genuine emotion is manifest as a micro-expression. Exploration of automatic facial micro-expression recognition systems is relatively new in the computer vision domain. This is due to the difficulty in implementing optimal feature extraction methods to cope with the subtlety and brief motion characteristics of the expression. Most of the existing approaches extract the subtle facial movements based on hand-crafted features. In this paper, we address the micro-expression recognition task with a convolutional neural network (CNN) architecture, which well integrates the features extracted from each video. A new feature descriptor, Optical Flow Features from Apex frame Network (OFF-ApexNet), is introduced. This feature descriptor combines the optical flow guided context with the CNN. Firstly, we obtain the location of the apex frame from each video sequence, as it portrays the highest intensity of facial motion among all frames. Then, the optical flow information is attained from the apex frame and a reference frame (i.e., the onset frame). Finally, the optical flow features are fed into a pre-designed CNN model for further feature enhancement as well as to carry out the expression classification. To evaluate the effectiveness of OFF-ApexNet, comprehensive evaluations are conducted on three public spontaneous micro-expression datasets (i.e., SMIC, CASME II and SAMM). The promising recognition result suggests that the proposed method can optimally describe the significant micro-expression details. In particular, we report that, in a multi-database with leave-one-subject-out cross-validation experimental protocol, the recognition performance reaches a recognition accuracy of 74.60% and an F-measure of 71.04%. We also note that this is the first work that performs cross-dataset validation on three databases in this domain.

Authors (5)
  1. Sze-Teng Liong (9 papers)
  2. Y. S. Gan (7 papers)
  3. Wei-Chuen Yau (2 papers)
  4. Yen-Chang Huang (15 papers)
  5. Tan Lit Ken (1 paper)
Citations (207)

Summary

Analysis of "OFF-ApexNet on Micro-expression Recognition System"

The paper introduces a novel approach to micro-expression recognition using a convolutional neural network architecture named OFF-ApexNet. This system addresses the challenging task of identifying genuine emotions expressed as micro-expressions, which are characterized by subtle and brief facial muscle movements. The paper presents a compelling method that harnesses the advantages of both optical flow features and CNNs, evaluated over three established spontaneous micro-expression datasets: SMIC, CASME II, and SAMM.

Technical Approach and Methodology

The key contribution of this paper is the development of a feature extractor dubbed Optical Flow Features from Apex frame Network (OFF-ApexNet). This extractor is distinctive in that it combines optical flow features with a convolutional neural network for micro-expression classification. The paper defines a comprehensive preprocessing strategy to ensure the effective recognition of micro-expressions:

  1. Apex Frame Selection: The apex frame, which is crucial for capturing the highest intensity of an expression, is automatically identified using the Divide and Conquer strategy on selected frame transitions.
  2. Feature Extraction: The fundamental innovation lies in utilizing optical flow components — specifically the horizontal and vertical flows between onset and apex frames — effectively capturing the motion dynamics of expressions.
  3. CNN Architecture: These optical flow features are processed using a two-path CNN, where each path is trained with either the horizontal or vertical flows, and the features are combined at the fully connected layers.
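Step 1 above can be sketched in a few lines. The following is a simplified stand-in for the paper's Divide and Conquer apex search: rather than recursively narrowing the search range, it scores every frame by its mean absolute pixel difference from the onset frame and returns the index with the highest score (the same intuition: the apex is the frame of peak deviation from onset). The frame shapes and toy intensities are illustrative assumptions, not values from the paper.

```python
import numpy as np

def select_apex_frame(frames):
    """Pick the frame with the largest intensity change from the onset frame.

    `frames` has shape (T, H, W) with the onset frame at index 0. This is a
    simplified stand-in for the paper's Divide and Conquer search: it scores
    every frame by mean absolute difference from onset and takes the argmax.
    """
    onset = frames[0].astype(np.float64)
    scores = [np.abs(f.astype(np.float64) - onset).mean() for f in frames]
    return int(np.argmax(scores))

# Toy sequence: motion intensity peaks at frame 3, then relaxes toward onset.
rng = np.random.default_rng(0)
base = rng.uniform(0, 255, size=(32, 32))
intensities = [0.0, 0.3, 0.7, 1.0, 0.5, 0.2]
frames = np.stack([base + 40.0 * a for a in intensities])
print(select_apex_frame(frames))  # → 3 (frame with peak motion intensity)
```

For step 2, the horizontal and vertical flow components would then be computed between `frames[0]` (onset) and the selected apex frame, e.g. with a dense optical flow routine such as OpenCV's Farneback estimator.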

The novelty of OFF-ApexNet is in its integration of two distinct feature processing methodologies. This dual-path CNN architecture enhances feature learning capabilities, automatically discerning relevant spatio-temporal patterns that are pivotal for expression classification.
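The dual-path merge can be illustrated with a heavily reduced numpy sketch, assuming a single conv + ReLU + global-pool stage per path (the actual OFF-ApexNet paths use deeper conv/pool stacks, and the layer sizes and three-class FC head below are hypothetical). The point it shows is structural: each flow component is processed by its own path, and the resulting features are concatenated before the fully connected classification layers.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv_path(flow, kernel):
    """One 'path': a single valid 2D convolution + ReLU + global average pool.
    A heavily reduced stand-in for the paper's per-path conv/pool stack."""
    h, w = flow.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(flow[i:i + kh, j:j + kw] * kernel)
    return float(np.maximum(out, 0.0).mean())  # scalar feature per path

u = rng.standard_normal((28, 28))  # horizontal optical-flow component
v = rng.standard_normal((28, 28))  # vertical optical-flow component
k_u = rng.standard_normal((3, 3))  # per-path filters (randomly initialized)
k_v = rng.standard_normal((3, 3))

# Each component goes through its own path; the features are concatenated,
# mimicking the merge at the fully connected layers.
features = np.array([conv_path(u, k_u), conv_path(v, k_v)])
W = rng.standard_normal((3, 2))  # hypothetical 3-class FC layer
logits = W @ features
print(features.shape, logits.shape)  # → (2,) (3,)
```

In a trained network the pooled features would of course be vectors rather than scalars, but the concatenation-then-FC pattern is the same.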

Evaluation and Results

The paper rigorously evaluates the proposed system using cross-dataset validation, a methodological strength that addresses generalization concerns. The experiments demonstrate robust recognition performance, achieving up to 74.60% accuracy and an F-measure of 71.04%. The use of leave-one-subject-out cross-validation ensures a comprehensive assessment of the model's efficacy across different individuals and conditions.
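The leave-one-subject-out (LOSO) protocol itself is straightforward to express: each unique subject is held out once as the test set while all remaining subjects form the training set. A minimal sketch (the subject IDs below are illustrative, not from the datasets):

```python
def loso_splits(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) triples for
    leave-one-subject-out cross-validation: each unique subject is held
    out exactly once as the test set."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# Toy sample-to-subject mapping (hypothetical IDs spanning the merged data).
subjects = ["s01", "s01", "s02", "s03", "s03", "s03"]
folds = list(loso_splits(subjects))
print(len(folds))  # → 3, one fold per subject
```

In the paper's multi-database setting, the same split logic applies over the pooled subjects of SMIC, CASME II and SAMM, which is what makes the reported accuracy a cross-dataset result.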

Implications and Future Work

This research carries significant implications for the field of affective computing, particularly in enhancing the reliability and accuracy of emotion detection systems. The approach balances handcrafted feature extraction and deep learning, providing a viable blueprint for scalable micro-expression recognition technologies.

Future research can expand on this work by exploring alternative feature extraction mechanisms to capture more nuanced facial dynamics. Additionally, addressing the imbalance in dataset class distributions could lead to enhanced model robustness across diverse expressions. Moreover, integrating this architecture with low-framerate video sources could further extend its applicability in real-world scenarios, such as security and psychological diagnosis.

In conclusion, this paper offers a sophisticated take on micro-expression recognition, contributing a hybrid feature extraction and classification paradigm that stands to influence subsequent advances in automated emotion recognition systems. The innovative combination of optical flow features with CNN architectures underscores its potential impact on the development of more intuitive human-computer interaction technologies.