Data Science Asked on January 4, 2021
I have been looking at the layer.weights output of Keras layers. The shape of a Dense layer's weight (kernel) matrix is listed as (number_of_input_features, dense_layer_neurons), as in the first example in the docs.
However, in all the theoretical courses I have seen, as well as in PyTorch, the weight matrix is shaped the opposite way: (dense_layer_neurons, input_features), or equivalently (layer_2_neurons, layer_1_neurons).
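For concreteness, a small sketch of the two shapes (assuming TensorFlow 2.x Keras and a recent PyTorch; the layer sizes are arbitrary):

```python
import tensorflow as tf
import torch

# Keras: a Dense layer stores its kernel as (input_features, units)
dense = tf.keras.layers.Dense(3)
dense.build(input_shape=(None, 5))
print(dense.kernel.shape)    # (5, 3)

# PyTorch: nn.Linear stores its weight as (out_features, in_features)
linear = torch.nn.Linear(5, 3)
print(linear.weight.shape)   # torch.Size([3, 5])
```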
Why are these two conventions opposite to each other? Am I missing something? Can someone please clarify?
Thanks.
This explains it: the weight matrix shape depends on how you shape the input data. Keras treats a batch as row vectors and computes outputs = inputs @ kernel + bias, so the kernel is stored as (input_features, units), while PyTorch's nn.Linear computes y = x @ W.T + b and therefore stores W as (out_features, in_features). The two matrices are simply transposes of each other.
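As a rough numerical sketch of that point (the array sizes here are made up for illustration):

```python
import numpy as np

x = np.random.randn(4, 5)      # batch of 4 samples, 5 input features each

# Keras-style: outputs = inputs @ kernel, kernel shape (in_features, units)
W_keras = np.random.randn(5, 3)
y_keras = x @ W_keras          # shape (4, 3)

# PyTorch-style: outputs = inputs @ weight.T, weight shape (out_features, in_features)
W_torch = W_keras.T            # shape (3, 5)
y_torch = x @ W_torch.T        # shape (4, 3)

# Both conventions produce identical outputs for the same underlying parameters
np.testing.assert_allclose(y_keras, y_torch)
```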
Correct answer by tjt on January 4, 2021