RE: Neural Networks and TensorFlow - Deep Learning Series [Part 14]
I think I missed the last part (part 13), but I remember padding. If I remember correctly, the output volume keeps getting smaller with each application of a convolution layer, and the larger the stride, the more volume is lost at the edges.
I think the idea of padding was to start with a larger volume so that we end up retaining most of the original input information.
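
To make the shrinking concrete, here is a small sketch of the standard output-size formula for one spatial dimension, `o = (n + 2p - k) // s + 1`. The helper name `conv_output_size` is my own illustration, not something from the series:

```python
def conv_output_size(n, k, s=1, p=0):
    """n: input width, k: kernel width, s: stride, p: zero-padding per side."""
    return (n + 2 * p - k) // s + 1

# With no padding, a 5-wide kernel shrinks the input every layer:
n = 32
for _ in range(3):
    n = conv_output_size(n, k=5)  # 32 -> 28 -> 24 -> 20
print(n)  # 20

# "Same" padding with stride 1 uses p = (k - 1) // 2 to keep the size:
print(conv_output_size(32, k=5, p=2))  # 32
```

A larger stride makes the output shrink even faster, e.g. `conv_output_size(32, k=5, s=2)` gives 14, which matches the intuition that striding discards more positions.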