This module outlines the advantages of adding convolutional and pooling layers to a standard neural network for image recognition applications.

The main motivation for using convolutional layers is that, in images, pixels in close proximity are typically more closely related to one another than to pixels farther away. By taking advantage of this spatial structure, convolutional layers capture the general features that appear in an image better than fully connected layers do.
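As a concrete illustration, here is a minimal sketch of a 2D convolution in plain NumPy; the 28x28 input size and the 3x3 vertical-edge kernel are illustrative assumptions, not values taken from this module. Each output pixel depends only on a small local neighborhood of the input, which is exactly how convolution exploits spatial structure.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (valid padding, stride 1). Each
    output pixel depends only on a small local neighborhood of the
    input, which is how convolution exploits spatial structure."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

image = np.random.rand(28, 28)           # an MNIST-sized image (assumed)
kernel = np.array([[-1.0, 0.0, 1.0],     # a simple vertical-edge filter
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])
feature_map = conv2d(image, kernel)      # shape (26, 26)
```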

Shift invariance

A major shortcoming of fully connected networks is their dependence on the position of a feature in an image: such a network can recognize an image but fail on a slightly shifted copy of it. Shift invariance can be trained into a fully connected network by extensively augmenting the training data, but it is significantly more efficient to use convolution, which has this property naturally. A convolutional layer detects a given feature regardless of where it appears in the image. Because the MNIST data set is centered and normalized, a fully connected network can still work on it, but a network with convolutional layers can also handle data that is not properly centered or normalized.
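The sketch below (reusing the conv2d function above, with a toy feature and kernel chosen for illustration) shows this property directly: shifting the input shifts the kernel's response by the same amount, so the same feature is detected wherever it appears.

```python
import numpy as np

image = np.zeros((28, 28))
image[10:13, 10:13] = 1.0                  # a small square "feature"
shifted = np.roll(image, shift=5, axis=1)  # the same feature, moved right

kernel = np.ones((3, 3)) / 9.0             # any fixed kernel works here
response = conv2d(image, kernel)
response_shifted = conv2d(shifted, kernel)

# The peak response moves with the feature; its value is unchanged.
print(np.unravel_index(response.argmax(), response.shape))          # (10, 10)
print(np.unravel_index(response_shifted.argmax(), response.shape))  # (10, 15)
print(np.isclose(response.max(), response_shifted.max()))           # True
```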

Computational efficiency

Another consequence of using convolutional layers is that fewer parameters are involved, making the network more computationally efficient to train. For any given neuron in a fully connected hidden layer, there is a weight associated with each neuron in the previous layer, plus a bias; assuming a similar number of neurons n in each interconnected layer, the number of parameters therefore scales as n squared. This makes deep networks built only from fully connected layers incredibly difficult and computationally expensive to train. A convolutional layer, by contrast, has only one bias per kernel and one weight per pixel of each kernel, and a neuron in the following layer is connected only to the neurons covered by the kernel. Instead of scaling as n squared, the parameter count scales as the number of kernels times the size of each kernel.
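To make the scaling concrete, here is a back-of-the-envelope comparison using assumed MNIST-like sizes (a 28x28 input, 30 hidden neurons, and twenty 5x5 kernels); these particular numbers are illustrative and do not come from the module.

```python
input_pixels = 28 * 28                    # 784 inputs

# Fully connected hidden layer: one weight per input per neuron, plus
# one bias per neuron -- parameters scale with the number of neurons squared.
hidden_neurons = 30
fc_params = hidden_neurons * input_pixels + hidden_neurons
print(fc_params)                          # 23550

# Convolutional layer: one weight per kernel pixel plus one bias per
# kernel -- parameters scale as (number of kernels) x (kernel size).
num_kernels, kernel_size = 20, 5
conv_params = num_kernels * (kernel_size * kernel_size + 1)
print(conv_params)                        # 520
```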

Source: OpenStax, Elec 301 projects fall 2015. OpenStax CNX. Jan 04, 2016. Download for free at https://legacy.cnx.org/content/col11950/1.1