This module describes each type of layer we employed in our convolutional neural network.

Convolutional

Convolutional layers produce output feature maps by convolving an input with each of their kernels, which are trained to recognize different characteristics. Each kernel is a square filter of trainable weights. The first convolutional layer in our network convolves the input image with a set of 20 5x5 kernels to produce 20 feature maps, and the second convolutional layer convolves its input (a set of pooled feature maps) with 40 4x4 kernels to produce higher-level feature maps. Each neuron in our convolutional layers uses the ReLU (rectified linear unit) activation function.

The filters in the convolutional layers were trained to recognize particular features. The first convolutional layer detects features such as edges and the total “mass” of the image, while the second convolutional layer detects higher-level features, including intersections of the features detected in the first layer. The features that each kernel detects were learned through the training process, in which the kernel weights were updated by the stochastic gradient descent (SGD) algorithm.

2d convolution

Each filter in the convolutional layers produces a feature map using 2D convolution, as illustrated in the figure above.
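The convolution-plus-ReLU step can be sketched in Python with NumPy. This is a simplified illustration with random weights, not the network's trained filters; like most deep-learning libraries, it applies the kernel without flipping (i.e., cross-correlation), and omits the bias term and summation over input channels:

```python
import numpy as np

def conv2d_relu(image, kernel):
    """'Valid' 2D convolution of a single-channel image with one kernel,
    followed by a ReLU activation."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output height shrinks by kernel size - 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise multiply the kernel with the image patch and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU: max(0, x)

# illustrative sizes: a 28x28 input and a 5x5 kernel give a 24x24 feature map
image = np.random.rand(28, 28)
kernel = np.random.randn(5, 5)
fmap = conv2d_relu(image, kernel)
print(fmap.shape)  # (24, 24)
```

A layer with 20 such kernels would apply this once per kernel, yielding 20 feature maps.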

Pooling

Pooling layers produce an output by reducing the size of their input using some function. The output of each convolutional layer in our network is used as the input to a pooling layer. The pooling layers take 2x2 regions of the input and pass on the maximum value of each region. In this way, a pooling layer reduces the amount of data being handled in the network while still preserving the important features detected in the convolutional layers: significant neuron activations survive because the maximum value of each region is kept.

Max pooling

The pooling layer downsamples each feature map into a smaller map, as illustrated in the figure above.
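The 2x2 max pooling described above can be sketched in a few lines of NumPy; the reshape groups the map into non-overlapping 2x2 blocks and keeps the maximum of each (a sketch assuming even dimensions, as with the feature-map sizes in our network):

```python
import numpy as np

def max_pool_2x2(fmap):
    """Downsample a 2D feature map by taking the maximum of each
    non-overlapping 2x2 region; halves both spatial dimensions."""
    h, w = fmap.shape
    # reshape to (h/2, 2, w/2, 2) so axes 1 and 3 index within each block
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(fmap)
print(pooled)  # [[ 5.  7.]
               #  [13. 15.]]
```

Each 2x2 block of the 4x4 input contributes one value, so a 24x24 feature map would pool down to 12x12.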

Fully-connected

Neurons in fully connected layers are connected to every neuron in the previous layer and every neuron in the next layer. Each connection has an associated weight, and each neuron has an associated bias. The last two layers in our network are both fully connected layers. The first fully connected layer detects the presence of the higher-level features found in the second convolutional layer, using the ReLU activation function. The second fully connected layer is the softmax layer, using the softmax activation function.

Fully connected layers

Example of three fully connected layers.
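The two fully connected layers can be sketched as matrix-vector products followed by their activation functions. The layer sizes and random weights below are purely illustrative (they are not the trained parameters or exact dimensions of our network):

```python
import numpy as np

def dense(x, W, b, activation):
    """A fully connected layer: every output neuron sees every input."""
    return activation(W @ x + b)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.random(40)                       # hypothetical flattened feature vector
W1, b1 = rng.standard_normal((100, 40)), np.zeros(100)   # ReLU layer
W2, b2 = rng.standard_normal((10, 100)), np.zeros(10)    # softmax layer

hidden = dense(x, W1, b1, relu)
probs = dense(hidden, W2, b2, softmax)
print(round(probs.sum(), 6))  # 1.0 -- softmax outputs form a probability distribution
```

The softmax output assigns each class a probability, so the network's prediction is simply the index of the largest entry.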





Source: OpenStax, Elec 301 projects fall 2015. OpenStax CNX. Jan 04, 2016. Download for free at https://legacy.cnx.org/content/col11950/1.1
