- The paper introduces CRC by generalizing sparse representation methods to emphasize collaborative coding over strict sparsity constraints.
- It shows that the choice of regularization norm trades off robustness to occlusion (L1 on the coding residual) against computational simplicity (L2 on the coding coefficients).
- Experimental results on Extended Yale B, AR, Multi-PIE, and LFW confirm that CRC attains accuracy comparable to SRC at a fraction of the computational cost, making it practical for large-scale and real-time face recognition.
Collaborative Representation based Classification for Face Recognition
This paper presents an in-depth exploration of Collaborative Representation based Classification (CRC) as applied to face recognition. The research builds on the Sparse Representation based Classification (SRC) paradigm, which codes a query sample as a sparse linear combination of all training samples and assigns it to the class with the smallest coding residual. The authors argue, however, that the collaborative mechanism underpinning SRC is more fundamental than the sparsity constraint traditionally emphasized.
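For concreteness, the SRC decision rule summarized above can be written as follows; the notation here is adapted for this summary, with $y$ the query, $X = [X_1, \dots, X_c]$ stacking the training samples of all $c$ classes column-wise, and $\hat{\alpha}_i$ the coefficients associated with class $i$:

```latex
\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1
  \quad \text{s.t.} \quad \|y - X\alpha\|_2 \le \varepsilon,
\qquad
\operatorname{identity}(y) = \arg\min_{i} \|y - X_i \hat{\alpha}_i\|_2 .
```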
Overview of Key Concepts
The core innovation of this work is the generalization of SRC into CRC. The basic premise of CRC is that a query face image is represented collaboratively over the entire pool of training samples from all classes, rather than sparsely over the samples of any single class. This mitigates the small-sample-size problem commonly encountered in face recognition, since samples from other classes help represent the query when a class has few samples of its own.
The CRC model allows different norm regularizations on both the coding residual and the coding coefficients, resulting in different instantiations of the classification mechanism (a formulation is sketched after this list). The paper highlights the implications of using L1 or L2 norms:
- L1-norm (on the coding residual): provides robustness to outlier pixels, handling occlusion and corruption in face images.
- L2-norm (on the coding coefficients): admits a closed-form solution, keeping the computational cost low while maintaining high accuracy on unoccluded images.
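In the notation introduced above, the general CRC model discussed here can be sketched as

```latex
\hat{\alpha} = \arg\min_{\alpha} \; \|y - X\alpha\|_{\ell_p} + \lambda \|\alpha\|_{\ell_q},
\qquad p, q \in \{1, 2\},
```

where choosing $p = 1$ gives the occlusion-robust residual term and $q = 2$ gives the inexpensive coefficient regularizer; setting $p = q = 2$ yields the CRC-RLS instantiation discussed below.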
Experimental Evaluation
The paper offers a thorough experimental analysis on benchmark datasets such as Extended Yale B, AR, Multi-PIE, and LFW. The results indicate that CRC achieves accuracy comparable to SRC at substantially lower computational cost:
- On datasets with sufficient training samples per class, CRC performs well without the L1 sparse regularization that SRC treats as essential.
- The CRC-RLS (CRC with Regularized Least Squares) instantiation delivers accurate and robust classification while being far cheaper than L1-minimization-based SRC, which is decisive in large-scale scenarios (a minimal code sketch follows this list).
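A minimal sketch of the CRC-RLS procedure in Python/NumPy is given below, assuming L2-normalized training samples stored as the columns of `X`; the function names, the regularization value `lam`, and the small epsilon guard are illustrative choices, not part of the paper.

```python
import numpy as np

def crc_rls_fit(X, lam=1e-3):
    """Precompute the CRC-RLS projection matrix P = (X^T X + lam*I)^{-1} X^T.

    X   : (d, n) array whose columns are L2-normalized training face features.
    lam : ridge regularization weight lambda (illustrative value).
    """
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T)

def crc_rls_classify(y, X, P, labels):
    """Assign the query y to the class with the smallest regularized residual."""
    alpha = P @ y  # collaborative coding coefficients over all classes
    best_label, best_score = None, np.inf
    for c in np.unique(labels):
        idx = (labels == c)
        residual = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        score = residual / (np.linalg.norm(alpha[idx]) + 1e-12)  # regularized residual
        if score < best_score:
            best_label, best_score = c, score
    return best_label
```

Because the projection matrix depends only on the training set, it can be computed once offline, and classifying a query reduces to a matrix-vector product plus per-class residuals; this is the source of the efficiency advantage over L1-minimization-based SRC.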
Importantly, the authors argue that the role of sparsity is secondary to the collaborative nature of the representation. Specifically, when the dimensionality of the face feature is sufficiently high, the coding vector concentrates on the correct class naturally, so explicit L1-norm regularization adds computational overhead without a corresponding gain in discrimination.
Implications and Future Directions
The implications of this research are both practical and theoretical:
- Practical: The CRC model provides a scalable solution for face recognition, ideal for real-time applications and scenarios with large databases.
- Theoretical: The work challenges existing paradigms emphasizing sparsity, suggesting a shift in focus towards collaboration among training samples to enhance discriminative power.
Looking forward, the exploration of collaborative representation could be extended beyond face recognition into other domains of pattern recognition. Future investigations could focus on refining the CRC framework, including hybrid approaches that balance sparsity and collaboration, potentially enhancing performance in diverse recognition tasks under varied constraints.
Overall, the paper advances our understanding of representation-based methods in facial recognition, promoting a fundamentally collaborative approach that questions and redefines the strategic importance of sparsity.