Neural Networks and TensorFlow - Deep Learning Series [Part 14]
Even though we've previously discussed activations, in this lesson we're going to look at them specifically for convolutional neural networks in TensorFlow.
So, in brief, in this short lesson:
- we discuss the types of activations commonly used for CNNs in TensorFlow
- pooling layers and how to implement them
- dropout, what it is used for, and how to implement it.
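The three topics above can be sketched in one small model. This is a minimal illustration (assuming TensorFlow 2.x and an arbitrary 28x28 grayscale input shape, not anything fixed by the lesson) of a ReLU-activated convolution followed by max pooling and dropout:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),  # e.g. grayscale 28x28 images
    # ReLU is the most common activation choice for CNN conv layers
    tf.keras.layers.Conv2D(32, kernel_size=3, activation='relu'),
    # downsample each 2x2 window to its maximum value
    tf.keras.layers.MaxPooling2D(pool_size=2),
    # randomly zero 25% of activations during training to reduce overfitting
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()
```

Note that `Dropout` is only active during training; at inference time it is a no-op, which Keras handles automatically.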
If you've been following this series, thank you! I hope you learn as much from it as I hope to convey.
This piece, as always, is interesting. Ever since I started following you, I have found your work valuable. I really enjoyed reading about convolutional neural networks; it widens my knowledge of science-related topics. Your research work is awesome, keep the good work flowing!
I learn quite a lot from your posts; no knowledge is lost, no information is useless. What you offer on your blog is what some people pay to learn in private courses and seminars. If I have any questions, I will definitely ask you.
I think I missed the last part (part 13), but I remember padding. If I remember correctly, the output volume keeps getting smaller with each application of a convolution layer, and the larger the stride, the more volume is lost at the edges.
I think the idea of padding is to start with a larger volume so we end up retaining most of the original input's information.
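The shrinkage described in the comment above can be checked with the standard output-size formula, floor((W - F + 2P) / S) + 1, where W is the input width, F the kernel size, S the stride, and P the padding. This is a small illustrative helper (not from the lesson itself):

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    # standard formula: floor((W - F + 2P) / S) + 1
    return (input_size - kernel_size + 2 * padding) // stride + 1

# 'valid' (no) padding: the volume shrinks with each conv layer
print(conv_output_size(32, 3))            # 30
print(conv_output_size(30, 3))            # 28

# a larger stride loses more at the edges
print(conv_output_size(32, 3, stride=2))  # 15

# 'same' padding (P = (F - 1) / 2 for stride 1) preserves the size
print(conv_output_size(32, 3, padding=1)) # 32
```

So with `padding='same'` in TensorFlow, the spatial size is preserved and the original input information near the borders is retained, exactly as the comment suggests.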