March 23, 2022

Resulting Shapes of Conv-net by Direct Experiment

deep-learning

Sample Code for Shape Experimental Calculation

Suppose we have the following up-sampling block, which forms part of a U-Net:

  u = UpSampling2D(size=2)(layer_input)
  u = Conv2D(filters, kernel_size=4, strides=1, padding="same", activation="relu")(u)

What is the resulting shape of the output? For strides=1 and padding="same" we can simply remember that the spatial dimensions are always unchanged; only the channel dimension changes to filters.
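Concretely, UpSampling2D with size=2 doubles the spatial dimensions, and the stride-1 "same" convolution then leaves them untouched. A minimal sketch of this shape bookkeeping in plain Python (the helper names and the (1, 7, 7, 128) input shape are my own illustration, not Keras API):

```python
def upsample2d(shape, size=2):
    # (batch, h, w, c) -> spatial dims multiplied by `size`, channels kept
    b, h, w, c = shape
    return (b, h * size, w * size, c)

def conv2d_same_stride1(shape, filters):
    # padding="same" with strides=1 keeps h and w; only channels change
    b, h, w, c = shape
    return (b, h, w, filters)

s = upsample2d((1, 7, 7, 128), size=2)    # -> (1, 14, 14, 128)
s = conv2d_same_stride1(s, filters=64)    # -> (1, 14, 14, 64)
print(s)
```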

But what if kernel_size=3, strides=2, and padding="valid"? There is no point memorizing the formula for the output shape, since we can always determine it by direct experiment, as follows:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D

x = tf.random.normal([1, 28, 28, 3])
x = Conv2D(32, kernel_size=3, strides=2, padding="valid", activation="relu")(x)
print(tf.shape(x))

# output: tf.Tensor([ 1 13 13 32], shape=(4,), dtype=int32)

Rigorous Proof of the Formula for Shapes

Let $n$ be the input size. For kernel size $k \le n$, stride $s$, and padding="valid", we can prove the following:

$$n_{\text{out}} = \left\lfloor \frac{n-k}{s} \right\rfloor + 1 = \left\lceil \frac{n-k+1}{s} \right\rceil.$$

This is due to the following simple fact, applied with $a = n-k+1$ and $b = s$:

Fact. Let $a$ and $b$ be positive integers, there holds

$$\left\lfloor \frac{a-1}{b} \right\rfloor + 1 = \left\lceil \frac{a}{b} \right\rceil.$$

Proof. We do a case-by-case study. If $a = qb$ for some positive integer $q$, then

$$\left\lfloor \frac{a-1}{b} \right\rfloor + 1 = \left\lfloor q - \frac{1}{b} \right\rfloor + 1 = (q-1) + 1 = q = \left\lceil \frac{a}{b} \right\rceil.$$

When $a = qb + r$, for some integer $q \ge 0$ and $0 < r < b$, then

$$\left\lfloor \frac{a-1}{b} \right\rfloor + 1 = \left\lfloor q + \frac{r-1}{b} \right\rfloor + 1 = q + 1 = \left\lceil \frac{a}{b} \right\rceil. \qquad \blacksquare$$
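The identity can also be checked numerically by brute force; a quick sketch (assuming the valid-padding formula $n_{\text{out}} = \lfloor (n-k)/s \rfloor + 1$):

```python
import math

# Check floor((a-1)/b) + 1 == ceil(a/b) for many positive integers
for a in range(1, 200):
    for b in range(1, 50):
        assert (a - 1) // b + 1 == math.ceil(a / b)

# Consequently the two common forms of the "valid" output-shape
# formula agree: floor((n-k)/s) + 1 == ceil((n-k+1)/s).
n, k, s = 28, 3, 2
out = (n - k) // s + 1
print(out)  # 13, matching the Conv2D experiment above
```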