CosFace: Large Margin Cosine Loss for Deep Face Recognition (1801.09414v2)

Published 29 Jan 2018 in cs.CV

Abstract: Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by $L_2$ normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.

Citations (2,361)

Summary

  • The paper presents Large Margin Cosine Loss (LMCL) that normalizes both features and weight vectors while incorporating a fixed cosine margin to enhance class discrimination.
  • It rigorously compares LMCL with existing loss functions like softmax and A-Softmax, showing more consistent decision boundaries and improved robustness.
  • Experimental results demonstrate state-of-the-art accuracy on benchmarks such as LFW (99.33%) and YTF (96.1%), underscoring its potential for practical face recognition applications.

Large Margin Cosine Loss for Deep Face Recognition: An Insightful Overview

The paper "CosFace: Large Margin Cosine Loss for Deep Face Recognition" by Hao Wang et al. proposes an innovative technique named Large Margin Cosine Loss (LMCL) to enhance the performance of deep face recognition systems. This work, validated across several benchmarks, demonstrates superior performance compared to existing loss functions, addressing some inherent limitations of traditional approaches.

The central problem in face recognition, spanning both face verification and identification, is the limited discriminative power of the conventional softmax loss used to train convolutional neural networks (CNNs). Loss functions such as center loss, large margin softmax loss, and angular softmax loss have previously been investigated to increase inter-class variance and reduce intra-class variance. LMCL follows this trajectory by reformulating the softmax loss as a cosine-based loss: both features and weight vectors are $L_2$-normalized to eliminate radial variations, and a cosine margin is introduced to further maximize the decision margin in the angular space.
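
For concreteness, the LMCL objective can be restated from the paper's definition as follows, where $N$ is the number of training samples, $y_i$ is the ground-truth class of the $i$-th sample, $s$ is the scaling factor, and $m$ is the cosine margin:

$$
L_{lmc} = \frac{1}{N}\sum_{i} -\log \frac{e^{s\,(\cos(\theta_{y_i,i}) - m)}}{e^{s\,(\cos(\theta_{y_i,i}) - m)} + \sum_{j \neq y_i} e^{s\,\cos(\theta_{j,i})}}
$$

subject to $W_j = \frac{W_j^*}{\lVert W_j^* \rVert}$, $x_i = \frac{x_i^*}{\lVert x_i^* \rVert}$, and $\cos(\theta_{j,i}) = W_j^{\top} x_i$, where $W_j^*$ and $x_i^*$ denote the unnormalized class weight and feature.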

Key Contributions

  1. LMCL Formulation: LMCL normalizes both the features and the weight vectors and incorporates a cosine margin term, $m$, to enhance the discrimination between classes. This reformulates the traditional softmax loss into a cosine-based function; for two classes, the decision boundary becomes $\cos(\theta_1) - m = \cos(\theta_2)$, where $\theta_i$ is the angle between the feature vector and the weight vector of class $i$ (a minimal implementation sketch follows this list).
  2. Comparison with Existing Loss Functions: The paper carefully compares LMCL with softmax, Normalized Softmax Loss (NSL), and Angular-Softmax (A-Softmax), highlighting the shortcomings of the existing methods in terms of decision boundaries and robustness: NSL lacks robustness because it has no decision margin, while A-Softmax suffers from a margin that varies with the angle due to the non-monotonic nature of the cosine function. LMCL offers a consistent margin across classes, ensuring better discriminative capability (the decision boundaries are restated after this list).
  3. Theoretical Justification: The authors detail the mathematical foundations behind feature normalization, highlighting why it is crucial for producing discriminative features. By normalizing features, the learning process emphasizes angle cosines, ensuring that features from the same class are clustered and those from different classes are distinct on a hyperspherical manifold. The paper also discusses the necessity of a sufficiently large scaling parameter $s$ for effective training and suggests a lower bound for it.
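
To make the formulation in item 1 concrete, the following is a minimal PyTorch sketch of an LMCL (CosFace-style) classification head. It follows the commonly reported CosFace hyperparameter settings ($s = 64$, $m = 0.35$); treat these as defaults to tune, and names such as `CosFaceHead` as illustrative rather than the authors' code.

```python
# Minimal LMCL (CosFace-style) head: a sketch, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosFaceHead(nn.Module):
    def __init__(self, embedding_dim: int, num_classes: int,
                 s: float = 64.0, m: float = 0.35):
        super().__init__()
        self.s = s
        self.m = m
        # One weight vector per identity; biases are omitted, as in the paper.
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # L2-normalize both features and class weights so the logits are pure cosines.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Subtract the margin m from the target-class cosine only.
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).to(cosine.dtype)
        logits = self.s * (cosine - self.m * one_hot)
        # Cross-entropy over the scaled, margin-adjusted cosines matches L_lmc above.
        return F.cross_entropy(logits, labels)
```

During training this head replaces the usual fully connected layer plus softmax; at test time only the $L_2$-normalized embeddings are kept and compared by cosine similarity.

The binary-class decision boundaries compared in item 2 can be restated as the condition for assigning class $C_1$ (biases omitted; the condition for $C_2$ is symmetric):

$$
\begin{aligned}
\text{Softmax:}\quad & \lVert W_1 \rVert \cos(\theta_1) > \lVert W_2 \rVert \cos(\theta_2) \\
\text{NSL:}\quad & \cos(\theta_1) > \cos(\theta_2) \\
\text{A-Softmax:}\quad & \cos(m\,\theta_1) > \cos(\theta_2) \\
\text{LMCL:}\quad & \cos(\theta_1) - m > \cos(\theta_2)
\end{aligned}
$$

Note that the $m$ in A-Softmax is an integer angular multiplier, whereas in LMCL it is an additive cosine margin; only LMCL enforces a margin that is constant in cosine space regardless of $\theta$, which is the consistency property the authors argue for.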

The efficacy of LMCL is validated through rigorous experiments on several datasets. The method shows state-of-the-art performance on Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge datasets. For instance, LMCL achieves an accuracy of 99.33% on LFW and 96.1% on YTF, significantly improving over previous methods.

Practical and Theoretical Implications

The practical contributions of this research are substantial. With the proposed LMCL, face recognition systems can achieve higher accuracy rates even in large-scale settings with millions of identities. This is particularly beneficial for applications requiring robust security measures, such as biometric authentication systems, surveillance, and access control.

Theoretically, LMCL advances the understanding of feature-space representation in deep learning. By removing radial variation and focusing solely on angular relationships, the formulation concentrates all discriminative information in the cosine similarities between features and class weights. The theoretical groundwork laid for the scaling parameter $s$ and the cosine margin $m$ offers a solid foundation for further exploration and optimization of discriminative feature learning.

Future Developments

Future research could explore several avenues based on this work:

  1. Dynamic Margin Adjustment: Implementing a dynamic cosine margin that adapts during training could potentially yield even more discriminative features (a toy schedule is sketched after this list).
  2. Extension to Other Vision Tasks: Applying the principles of LMCL to other domains, such as object recognition or action recognition, may uncover further improvements in these fields.
  3. Integration with Other Architectures: Investigating the integration of LMCL with emerging neural network architectures beyond CNNs, such as Vision Transformers (ViTs) or Graph Neural Networks (GNNs), could provide insights into its applicability across different model architectures.
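
Purely as an illustration of the first direction (our speculation, not anything proposed in the paper), a dynamic margin could be as simple as a schedule that ramps $m$ up as training progresses; the value returned below would replace the fixed `m` in a head like the `CosFaceHead` sketch above at each step.

```python
# Hypothetical margin schedule (not from the paper): linearly ramp the cosine
# margin from m_start to m_end over the first `warmup` optimization steps.
def cosine_margin(step: int, warmup: int = 20000,
                  m_start: float = 0.0, m_end: float = 0.35) -> float:
    if step >= warmup:
        return m_end
    return m_start + (m_end - m_start) * step / warmup
```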

In conclusion, the paper "CosFace: Large Margin Cosine Loss for Deep Face Recognition" presents a significant advance in the field of face recognition. The proposed LMCL method is rigorously justified and empirically validated, exhibiting clear advantages over existing methods. This work is likely to inspire further research and development aimed at achieving even greater improvements in pattern recognition and related fields.