
Evolving Deep Convolutional Neural Networks for Image Classification (1710.10741v3)

Published 30 Oct 2017 in cs.NE and cs.CV

Abstract: Evolutionary computation methods have been applied successfully to neural networks for two decades, yet they do not scale well to modern deep neural networks because of their complicated architectures and large numbers of connection weights. In this paper, we propose a new method that uses genetic algorithms to evolve the architectures and connection weight initialization values of a deep convolutional neural network for image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth in convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing the connection weights of deep convolutional neural networks, which is expected to prevent networks from getting stuck in local minima, typically a major issue in backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially fewer computational resources. The proposed algorithm is examined and compared with 22 existing algorithms, including state-of-the-art methods, on nine widely used image classification tasks. The experimental results demonstrate the remarkable superiority of the proposed algorithm over the state-of-the-art algorithms in terms of classification error rate and number of parameters (weights).

Authors (4)
  1. Yanan Sun (76 papers)
  2. Bing Xue (70 papers)
  3. Mengjie Zhang (80 papers)
  4. Gary G. Yen (30 papers)
Citations (547)

Summary

  • The paper introduces a novel GA-based method that evolves CNN architectures and weight initialization to enhance image classification accuracy.
  • The study presents a flexible, variable-length gene encoding strategy that optimizes network depth and minimizes computational overhead.
  • Experimental evaluations demonstrate significantly lower error rates on benchmarks such as Fashion compared to advanced architectures like GoogLeNet and VGG16.

Evolving Deep Convolutional Neural Networks for Image Classification

The paper presents a novel approach for evolving Convolutional Neural Networks (CNNs) by utilizing genetic algorithms (GAs) to optimize their architectures and connection weight initialization. This methodology seeks to address the challenges posed by the complexity and scale of modern CNN architectures in the domain of image classification.
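To make the overall workflow concrete, below is a minimal sketch of a genetic-algorithm loop over CNN genomes. It is an illustration under generic GA assumptions, not the paper's implementation: `random_genome` and `evaluate_fitness` are sketched in the sections that follow, while `crossover` and `mutate` are hypothetical variable-length operators.

```python
import random

def evolve(pop_size=50, generations=20):
    """Generic GA skeleton over CNN genomes. random_genome and
    evaluate_fitness are sketched further below; crossover and mutate
    are hypothetical placeholders for variable-length operators."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        fitnesses = [evaluate_fitness(g) for g in population]
        offspring = []
        while len(offspring) < pop_size:
            # Binary tournament: sample two genomes, keep the fitter one.
            p1 = tournament(population, fitnesses)
            p2 = tournament(population, fitnesses)
            child = crossover(p1, p2)        # recombine two gene lists
            offspring.append(mutate(child))  # add/remove/perturb layer genes
        population = offspring
    fitnesses = [evaluate_fitness(g) for g in population]
    return population[max(range(pop_size), key=lambda i: fitnesses[i])]

def tournament(population, fitnesses, k=2):
    """Select the fittest of k randomly sampled individuals."""
    picks = random.sample(range(len(population)), k)
    return population[max(picks, key=lambda i: fitnesses[i])]
```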

Key Contributions

The authors introduce a variable-length gene encoding strategy that represents different building blocks and lets the depth of the CNN itself be optimized. This encoding allows flexible exploration of architectural variations without predefined depth constraints, potentially uncovering stronger structures. Additionally, the paper develops a new representation scheme for initializing connection weights, which helps the evolved networks avoid the local minima that gradient-based optimization methods often get trapped in.
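As a rough illustration, such a variable-length genome can be modeled as a list of layer genes, where convolutional and fully connected genes also carry Gaussian statistics (a mean and standard deviation) for initializing their weights, so the genome stays compact instead of storing full weight tensors. The field names and value ranges below are assumptions for illustration, not the paper's exact encoding.

```python
from dataclasses import dataclass
import random

@dataclass
class ConvGene:
    filter_size: int     # spatial size of each filter
    num_filters: int     # number of feature maps
    weight_mean: float   # Gaussian statistics for weight initialization,
    weight_std: float    # encoded instead of the full weight tensor

@dataclass
class PoolGene:
    kernel_size: int
    pool_type: str       # "max" or "mean"

@dataclass
class FullGene:
    num_neurons: int
    weight_mean: float
    weight_std: float

def random_genome(max_depth=10):
    """Sample a variable-length genome: a random-depth run of
    conv/pool blocks followed by one or two fully connected genes."""
    genome = []
    for _ in range(random.randint(2, max_depth)):
        if random.random() < 0.7:
            genome.append(ConvGene(
                filter_size=random.choice([3, 5, 7]),
                num_filters=random.choice([16, 32, 64]),
                weight_mean=random.uniform(-0.1, 0.1),
                weight_std=random.uniform(0.01, 0.5)))
        else:
            genome.append(PoolGene(
                kernel_size=2,
                pool_type=random.choice(["max", "mean"])))
    for _ in range(random.randint(1, 2)):
        genome.append(FullGene(
            num_neurons=random.choice([128, 256]),
            weight_mean=0.0,
            weight_std=random.uniform(0.01, 0.5)))
    return genome
```

Because a genome is just a list, crossover and mutation can splice, insert, or delete whole genes, which is how depth itself becomes subject to evolution.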

The proposed framework evaluates the fitness of candidate CNN architectures with a novel method that substantially reduces the computational resources usually required for GA-based optimization of CNNs.
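One plausible shape for such a cheap fitness proxy is to train each decoded candidate for only a few epochs and rank architectures by the resulting accuracy trend rather than by fully converged performance. In this sketch, `decode_to_model`, `train_one_epoch`, and `validation_accuracy` are hypothetical helpers standing in for framework-specific code, not the paper's API.

```python
def evaluate_fitness(genome, epochs=5):
    """Cheap fitness proxy: a short training run instead of training to
    convergence. decode_to_model, train_one_epoch, and
    validation_accuracy are hypothetical framework-specific helpers."""
    model = decode_to_model(genome)  # build the CNN described by the genes
    history = []
    for _ in range(epochs):
        train_one_epoch(model)
        history.append(validation_accuracy(model))
    # Average the last few scores so one lucky epoch does not
    # dominate the ranking of candidate architectures.
    tail = history[-3:]
    return sum(tail) / len(tail)
```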

Experimental Evaluation

The algorithm's effectiveness was tested against 22 existing methods across nine benchmark image classification tasks. The results show notably lower classification error rates and fewer parameters than state-of-the-art methods. For instance, on the Fashion dataset, the proposed method achieved a 5.47% classification error, outperforming advanced architectures such as GoogLeNet and VGG16, which reported 6.3% and 6.5%, respectively.

Implications and Future Directions

The paper underscores the potential of evolutionary computation for optimizing deep learning models, offering a viable path toward automated neural architecture search. By reducing dependence on domain expertise and computational resources, it makes CNN optimization accessible to a wider audience.

Practically, this approach holds promise for applications in resource-constrained environments, such as mobile and embedded systems, by evolving lightweight models. Theoretically, it challenges traditional methods by demonstrating that global optimization via GAs can be effectively adapted for CNNs, even with their vast parameter spaces.

Future research could explore scaling this approach for larger datasets and different types of neural architectures, such as Recurrent Neural Networks (RNNs). Further development of efficient fitness evaluation techniques will also be crucial for handling the computational demands of large-scale applications.

In conclusion, this work presents a significant step in the integration of evolutionary algorithms and deep learning, providing valuable insights and tools for advancing neural architecture design.
