[Notes] Shapes of CNN


Since the shape of the output after convolution, padding, and max-pooling can be tricky to keep track of, I believe having a formula sheet that records the shape after each operation will come in handy. So, here we go.

Convolution Layer

Let the input \(X\) be a tensor of shape \((batchsize,n_h,n_w,n_c)\) and \(F\) be a filter of shape \((f_h,f_w,f_{cin},f_{cout})\), where \(f_{cin}\) must equal \(n_c\). With stride 1 and no padding, the output \(X\circledast F\) will be of shape \((batchsize,n_h-f_h+1,n_w-f_w+1,f_{cout})\). Here, \(f_{cout}\) can be understood as the number of filters.

Now, consider padding \(p\) with stride still 1. The output \(X\circledast F\) then has shape \((batchsize,n_h+2p-f_h+1,n_w+2p-f_w+1,f_{cout})\). Normally, we use padding partly because we do not want the input to shrink too drastically after convolving, and we often want the output to have exactly the same height and width as the input. For stride 1 this happens when \(p_i=\frac{f_i-1}{2}\), which is the widely used choice (and requires \(f_i\) to be odd).
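As a quick sanity check, here is a minimal pure-Python sketch of these two formulas (the helper name conv_output_size is mine, not TensorFlow's):

def conv_output_size(n, f, p=0):
    # Stride-1 convolution output size along one dimension: n + 2p - f + 1
    return n + 2 * p - f + 1

n, f = 200, 5
print(conv_output_size(n, f))                # 196, matches Example 1 below
print(conv_output_size(n, f, (f - 1) // 2))  # 200, 'SAME'-style padding preserves the size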

Example 1

Consider the case with no padding and stride 1.

import tensorflow as tf  # TensorFlow 1.x API
input = tf.random_normal([1, 200, 150, 3])
filter = tf.random_normal([5, 5, 3, 32])  # the default filter shape in TensorFlow is [f_h, f_w, f_cin, f_cout]
convo = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')  # stride = 1 and no padding
with tf.Session() as sess:
    print(sess.run(tf.shape(convo)))
#Output is of shape [  1 196 146  32]

Here, 1 and 32 are the batch size and number of filters, respectively. From the formula, \(196=200-5+1\) and \(146=150-5+1\).

Now, assume that we use a stride \(s>1\), that is, we slide the filter more than one step at a time when convolving it with the input. In that case, the shape of the output is going to be \((batchsize, \lfloor\frac{n_h+2p-f_h}{s}\rfloor+1, \lfloor\frac{n_w+2p-f_w}{s}\rfloor+1, f_{cout})\).
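The floor here is easy to verify in plain Python before running any TensorFlow code (again, the function name is just for illustration):

import math

def conv_output_size(n, f, p=0, s=1):
    # General case: floor((n + 2p - f) / s) + 1
    return math.floor((n + 2 * p - f) / s) + 1

print(conv_output_size(200, 5, s=4))  # 49, as in Example 2 below
print(conv_output_size(150, 5, s=4))  # 37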

Example 2

import tensorflow as tf
input = tf.random_normal([1, 200, 150, 3])
filter = tf.random_normal([5, 5, 3, 32])
convo = tf.nn.conv2d(input, filter, strides=[1, 4, 4, 1], padding='VALID')  # stride = 4 and no padding
with tf.Session() as sess:
    print(sess.run(tf.shape(convo)))
#Output is of shape [1 49 37 32]

Again, 1 and 32 are the batch size and number of filters. In addition, \(49=\lfloor\frac{200-5}{4}\rfloor+1\) and \(37=\lfloor\frac{150-5}{4}\rfloor+1\). Note that changing the parameter padding to ‘SAME’ gives an output with the same height and width as the input only when the stride is 1; in general, ‘SAME’ produces a height of \(\lceil\frac{n_h}{s}\rceil\) and a width of \(\lceil\frac{n_w}{s}\rceil\).
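To see this concretely, here is Example 2 with padding changed to ‘SAME’; under TensorFlow 1.x I would expect the shape [ 1 50 38 32], since \(\lceil\frac{200}{4}\rceil=50\) and \(\lceil\frac{150}{4}\rceil=38\):

import tensorflow as tf
input = tf.random_normal([1, 200, 150, 3])
filter = tf.random_normal([5, 5, 3, 32])
convo = tf.nn.conv2d(input, filter, strides=[1, 4, 4, 1], padding='SAME')  # stride = 4 with 'SAME' padding
with tf.Session() as sess:
    print(sess.run(tf.shape(convo)))
# Expected output shape: [ 1 50 38 32]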

Pooling Layer

We use a pooling layer to reduce the input size strategically without losing too much information. There are max pooling and average pooling, but the output shapes of the two operations are identical. Similar to convolution, the output of a pooling layer has shape \((batchsize, \lfloor\frac{n_h-f_h}{s}\rfloor+1, \lfloor\frac{n_w-f_w}{s}\rfloor+1, n_c)\); pooling acts on each channel independently, so the number of channels is unchanged. Normally, people do not use padding in a pooling layer, but TensorFlow still offers the option. To make things easier to remember, though, the meaning of padding in pooling is best understood somewhat differently from the convolution layer. Let’s take a closer look.

Assume an input of shape \((batchsize,n_h,n_w,n_c)\). We are often interested in applying pooling along \(n_h\) and \(n_w\). In our example, we chose \(n_w\) to be 150, so what would happen if we choose both \(f_w\) and the stride to be 7? Since 7 does not divide 150, some values of the original matrix will be dropped. Specifically, the pooling window is applied 21 times, up to column 147 (because \(\lfloor\frac{150-7}{7}\rfloor+1=21\) and \(21\times7=147\)), and the remaining three columns are dropped from the output completely. The exact same argument can be made for the height.
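A two-line check in plain Python makes the window count explicit:

starts = list(range(0, 150 - 7 + 1, 7))  # left edge of each pooling window along the width
print(len(starts), starts[-1] + 7)       # 21 147: 21 windows, the last one ending at column 147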

Example 3

Let’s use a window size and stride that divide neither \(n_h\) nor \(n_w\).

import tensorflow as tf
input = tf.random_normal([1, 200, 150, 3])
pooling = tf.nn.max_pool(input, ksize=[1, 7, 7, 1], strides=[1, 7, 7, 1], padding='VALID')  # 7x7 window, stride = 7, no padding
with tf.Session() as sess:
    print(sess.run(tf.shape(pooling)))
#Output is of shape [ 1 28 21  3]

Now, if we use ‘SAME’ padding in the pooling layer, the input is padded with four more columns (TensorFlow splits the padding as evenly as possible between the two sides, putting any extra on the right or bottom) so that the window can move one more time and cover every single column in the input. The same reasoning applies to the rows, where three rows of padding are needed.
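The amount of padding ‘SAME’ adds can be computed directly from the output size; here is a sketch (the helper name same_padding is mine):

import math

def same_padding(n, f, s):
    # Total padding along one dimension so that ceil(n / s) windows fit
    out = math.ceil(n / s)
    return max((out - 1) * s + f - n, 0)

print(same_padding(150, 7, 7))  # 4 extra columns for the width
print(same_padding(200, 7, 7))  # 3 extra rows for the height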

Example 4

import tensorflow as tf
input = tf.random_normal([1, 200, 150, 3])
pooling = tf.nn.max_pool(input, ksize=[1, 7, 7, 1], strides=[1, 7, 7, 1], padding='SAME')  # 7x7 window, stride = 7, 'SAME' padding
with tf.Session() as sess:
    print(sess.run(tf.shape(pooling)))
#Output is of shape [ 1 29 22  3]

This also means that when the stride equals \(f_i\) and \(f_i\) divides \(n_i\), both padding choices result in the same output shape. In general, padding = ‘SAME’ results in an output of shape \((batchsize, \lceil\frac{n_h}{s}\rceil, \lceil\frac{n_w}{s}\rceil, n_c)\).
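As a final sanity check, this formula reproduces the shape from Example 4:

import math
print(math.ceil(200 / 7), math.ceil(150 / 7))  # 29 22, matching [ 1 29 22  3]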