Data Science Asked by Mateusz Konopelski on November 24, 2020
The famous LeNet-5 architecture looks like this: [LeNet-5 architecture diagram]
The output of layer S2 has dimensions 14x14x6 – basically an image with 6 feature maps derived by convolution. Convolving it with 5×5 kernels gives 10x10 maps, so if each of the 6 maps were again submitted to 6 filters, the resulting output would be 10x10x36; however, it is 10x10x16. Initially I stumbled on this, but finally I understood that it is done by combining inputs from layer S2 and applying one kernel to the combination, as explained in the article:
Layer C3 is a convolutional layer with 16 feature maps. Each unit in each feature map is connected to several 5×5 neighborhoods at identical locations in a subset of S2's feature maps.

The rationale behind the connection scheme in Table I is the following. The first six C3 feature maps take inputs from every contiguous subset of three feature maps in S2. The next six take input from every contiguous subset of four. The next three take input from some discontinuous subsets of four. Finally, the last one takes input from all S2 feature maps. Layer C3 has 1,516 trainable parameters and 151,600 connections.
and the roadmap for it is provided in Table I of the paper:
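In code form, that table's connection pattern reads as a list of S2 map indices per C3 filter (a sketch; the list name is mine, and indices wrap around modulo 6):

```python
# Table I from LeCun et al. (1998): which S2 maps (0-5) feed each of the
# 16 C3 maps. First six: contiguous triples; next six: contiguous
# quadruples; next three: discontinuous quadruples; last one: all six.
C3_CONNECTIONS = [
    (0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 0), (5, 0, 1),
    (0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5),
    (3, 4, 5, 0), (4, 5, 0, 1), (5, 0, 1, 2),
    (0, 1, 3, 4), (1, 2, 4, 5), (0, 2, 3, 5),
    (0, 1, 2, 3, 4, 5),
]
```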
What I still don't understand is: how exactly should I combine them?
In the previous layer I just applied 6 kernels to 1 input channel, which resulted in 6 feature maps – that was understandable. Here I am a bit lost, to be honest 🙁
Please help.
Mateusz
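Concretely, under the usual reading of the paper, each C3 feature map has a single 5×5×k kernel spanning the k S2 maps it connects to: every output unit sums all 25·k products, and a single per-map bias is added. A minimal NumPy sketch of this (the function name, shapes, and explicit loops are illustrative):

```python
import numpy as np

def c3_feature_map(s2_subset, kernel, bias):
    """Compute one C3 map from the k S2 maps it connects to.

    s2_subset : (k, 14, 14) array -- the selected S2 maps, e.g. s2[[0, 1, 2]]
    kernel    : (k, 5, 5) array   -- one 5x5 slice per selected S2 map
    bias      : float             -- a single bias shared by the whole map
    """
    k, h, w = s2_subset.shape
    out = np.empty((h - 4, w - 4))              # 14 - 5 + 1 = 10, so 10x10
    for i in range(h - 4):
        for j in range(w - 4):
            # Sum over all k maps AND the 5x5 window at once, then add the
            # single bias -- that is the whole "combining" step.
            out[i, j] = np.sum(s2_subset[:, i:i + 5, j:j + 5] * kernel) + bias
    return out
```

Applying this once per row of the connection table, with k = 3, 4, or 6, yields the 10x10x16 output of C3 and the per-filter weight counts of 25·k shown in the answer below.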
See this table, each row of which has 16 numbers (one per C3 filter):
FILTER                  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
WEIGHTS                75  75  75  75  75  75 100 100 100 100 100 100 100 100 100 150
BIAS                    1   1   1   1   1   1   1   1   1   1   1   1   1   1   1   1
TRAINABLE PARAMETERS   76  76  76  76  76  76 101 101 101 101 101 101 101 101 101 151
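As a quick arithmetic check of this table, assuming the 5×5 kernels from the paper: a filter that spans k S2 maps has 25·k weights plus one bias, and every parameter participates at each of the 10x10 output positions:

```python
# Per-filter fan-in from Table I: six filters read 3 maps, nine read 4,
# and the last one reads all 6.
fan_in = [3] * 6 + [4] * 9 + [6]
params = [25 * k + 1 for k in fan_in]   # weights + one bias per filter

print(params)                 # [76]*6 + [101]*9 + [151]
print(sum(params))            # 1516 trainable parameters
print(sum(params) * 10 * 10)  # 151600 connections
```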
Answered by xiaokaoy on November 24, 2020