Do Deep Nets Really Need to be Deep? (1312.6184v7)

Published 21 Dec 2013 in cs.LG and cs.NE

Abstract: Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this extended abstract, we show that shallow feed-forward networks can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow neural nets can learn these deep functions using a total number of parameters similar to the original deep model. We evaluate our method on the TIMIT phoneme recognition task and are able to train shallow fully-connected nets that perform similarly to complex, well-engineered, deep convolutional architectures. Our success in training shallow neural nets to mimic deeper models suggests that there probably exist better algorithms for training shallow feed-forward nets than those currently available.

Citations (2,058)

Summary

  • The paper's main contribution is demonstrating that model compression enables shallow networks to mimic deep nets with competitive accuracy.
  • Empirical results on TIMIT and CIFAR-10 show that shallow models can achieve error rates close to those of deep network ensembles.
  • The study challenges conventional deep learning views, suggesting that optimized training can unlock the full potential of shallow architectures.

Do Deep Nets Really Need to be Deep?

Overview

The paper "Do Deep Nets Really Need to be Deep?" by Lei Jimmy Ba and Rich Caruana investigates whether shallow feed-forward neural networks can achieve the same performance as deep neural networks on complex tasks such as speech recognition and image classification. The authors present empirical evidence that shallow networks can indeed approximate the performance of deep networks by employing a method known as model compression. This method involves training a shallow network to mimic the behavior of a pre-trained deep network.

Key Contributions

The core contributions of this paper are:

  1. Model Compression: The authors demonstrate that a shallow network can be trained to mimic a deep network using a model compression technique: unlabeled data is passed through the pre-trained deep network, and the shallow network is trained to reproduce the deep network's outputs (logits) on that data, thereby approximating the function the deep network has learned.
  2. Empirical Results on TIMIT and CIFAR-10: The experiments conducted on TIMIT phoneme recognition and CIFAR-10 image classification tasks reveal that shallow networks trained to mimic deep networks can achieve comparable accuracy.
  3. Implications of Shallow Networks: The paper challenges the necessity of depth in neural networks for achieving high accuracy, suggesting that with better training techniques, shallow networks may be just as capable.
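The compression procedure in (1) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the "deep teacher" is stood in by a random linear map, and the student is a linear model trained by batch gradient descent on the squared logit error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained deep teacher: here just a random linear map
# from 64 input features to 10 classes (purely illustrative).
W_teacher = rng.normal(size=(10, 64))

def teacher_logits(x):
    return x @ W_teacher.T

# Step 1: label unlabeled data with the teacher's logits to build a
# transfer set -- no human labels are required at this stage.
X_unlabeled = rng.normal(size=(1000, 64))
Z_targets = teacher_logits(X_unlabeled)

# Step 2: train a "shallow" student (linear here) to regress on those
# logits by gradient descent on the mean squared error.
W_student = np.zeros((10, 64))
lr = 0.1
for _ in range(200):
    residual = X_unlabeled @ W_student.T - Z_targets
    W_student -= lr * (residual.T @ X_unlabeled) / len(X_unlabeled)

mimic_mse = np.mean((X_unlabeled @ W_student.T - Z_targets) ** 2)
```

Because the transfer set is labeled by the teacher rather than by humans, the amount of mimic-training data is limited only by how much unlabeled input data is available.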

Experimental Details

Training Shallow Nets to Mimic Deep Nets

The shallow networks were trained using data labeled by deep networks:

  • On TIMIT, the shallow nets were trained to mimic an ensemble of deep convolutional neural networks (ECNN).
  • On CIFAR-10, the networks were initially trained on convolutional representations to handle the inherent complexity of image data.
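When the teacher is an ensemble, a single mimic target is needed per example. One simple aggregation, sketched below under the assumption that each ensemble member exposes its logits, is to average the members' logits (the exact aggregation used for the ECNN is not detailed in this summary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-member logits: an ensemble of 3 deep nets scoring a
# batch of 5 frames over 40 phone classes (shapes are illustrative).
member_logits = np.stack([rng.normal(size=(5, 40)) for _ in range(3)])

# The shallow net then mimics one target per example: here, the mean
# logit across ensemble members (an assumed aggregation).
ensemble_target = member_logits.mean(axis=0)
```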

Logit Regression

The mimic networks were trained to regress on the pre-softmax outputs (logits) of the deep networks. Regressing on logits, rather than training directly on hard classification labels, gives the shallow network a richer signal: the logits encode the inter-class relationships captured by the deep network, which helps the shallow net learn similarly intricate decision boundaries.
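A minimal sketch of this objective: the student minimizes the mean squared error between its logits g(x) and the teacher's logits z over the transfer set (the 1/2 factor is a common convention that simplifies the gradient; the exact normalization is an assumption here).

```python
import numpy as np

def logit_regression_loss(student_logits, teacher_logits):
    """L = (1 / 2T) * sum_t ||g(x_t) - z_t||^2 over T transfer examples."""
    diff = student_logits - teacher_logits
    return 0.5 * np.mean(np.sum(diff ** 2, axis=1))

# Logits keep information that hard 0/1 labels throw away: here the
# teacher finds classes 0 and 1 nearly equally plausible, while a hard
# label would record only the argmax.
z_teacher = np.array([[4.0, 3.8, -2.0]])
hard_label = np.array([[1.0, 0.0, 0.0]])
```

Training on z_teacher rather than hard_label is what lets the shallow net absorb the teacher's learned similarity structure between classes.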

Numerical Results

  1. TIMIT Phoneme Recognition:
    • Shallow mimic models achieved phone error rates (PER) competitive with deep models, though they required up to 10 times more parameters to do so. Specifically, a shallow net with 400,000 hidden units (SNN-MIMIC-400k) achieved a PER close to that of a CNN (20.0% vs. 19.5%).
  2. CIFAR-10 Image Recognition:
    • Shallow models trained with convolutional features (SNN-CNN-MIMIC-30k) achieved classification error rates comparable to those of deep CNN models, with a best result of 14.2% error, approaching the deep ensemble model's error of 11.0%.

Discussion and Implications

The results suggest several critical insights:

  • Training Efficacy: The model compression approach leads to similar performance metrics using shallow networks, highlighting potential deficiencies in current training methodologies for shallow architectures.
  • Capacity and Representational Power: The shallow nets did not exhibit a reduction in capacity or expressiveness when appropriately trained, implying that depth may not be as crucial if better learning algorithms are developed.
  • Computational Efficiency: Shallow models might offer computational advantages in training and inference time, presenting an appealing trade-off for real-time applications.

Future Directions

The paper opens avenues for developing more refined training algorithms tailored for shallow networks. It also suggests exploring the use of larger quantities of unlabeled data to enhance the model compression process. The authors propose further investigation into training shallow models directly from original data, bypassing the intermediate deep-network training phase.

Conclusion

Ba and Caruana's paper provides a compelling argument that shallow networks, when trained using sophisticated techniques such as model compression, can match the performance of traditionally deeper models on complex tasks. This raises questions about the essentiality of depth in deep learning, suggesting potential for improving shallow models’ training methods to unlock their full capabilities. The findings have significant implications for the design and deployment of neural networks, particularly in contexts requiring reduced computational load and faster training times.