
Keras layer weights shape is different compared to other conventions

Data Science Asked on January 4, 2021

I have been looking at the layers.weights output of Keras layers. The shape of the layer weight matrix is listed as (number_of_input_features, dense_layer_neurons).

The first example in the Keras docs shows this.

However, in all the theoretical courses I have seen, as well as in PyTorch, the weight matrix has the opposite shape: (dense_layer_neurons, number_of_input_features), or equivalently (layer_2_neurons, layer_1_neurons). For example:

https://www.coursera.org/lecture/neural-networks-deep-learning/getting-your-matrix-dimensions-right-Rz47X
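A minimal sketch of what I am seeing, assuming a layer with 5 input features and 3 neurons:

```python
import torch
from tensorflow import keras

# Keras: Dense layer with 3 units built on 5 input features
dense = keras.layers.Dense(units=3)
dense.build(input_shape=(None, 5))
kernel, bias = dense.get_weights()
print(kernel.shape)  # (5, 3)  -> (input_features, units)
print(bias.shape)    # (3,)

# PyTorch: the equivalent layer stores its weight the other way around
linear = torch.nn.Linear(in_features=5, out_features=3)
print(linear.weight.shape)  # torch.Size([3, 5])  -> (out_features, in_features)
print(linear.bias.shape)    # torch.Size([3])
```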

Why are these two conventions opposite to each other?

Am I missing something? Can someone please clarify?

Thanks.

One Answer

This explains it: the weight matrix shape depends on how you shape the input data. Keras treats each row of a batch as a sample and computes outputs = inputs @ kernel + bias, so the kernel has shape (input_features, units). The Coursera convention puts samples in columns and computes Z = W A + b, so W has shape (layer_2_neurons, layer_1_neurons). PyTorch stores its weight as (out_features, in_features) and computes y = x Wᵀ + b. All three describe the same linear map; the weight matrices are just transposes of each other.

https://medium.com/from-the-scratch/deep-learning-deep-guide-for-all-your-matrix-dimensions-and-calculations-415012de1568
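A quick numerical sketch of that point, using hypothetical sizes (a batch of 4 samples, 5 input features, 3 neurons):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5))   # rows are samples: (batch, input_features)
W = rng.normal(size=(3, 5))   # PyTorch / lecture-style weight: (units, input_features)
b = rng.normal(size=(3,))

kernel = W.T                  # Keras-style kernel: (input_features, units)

y_keras    = x @ kernel + b              # outputs = inputs @ kernel + bias
y_pytorch  = x @ W.T + b                 # y = x W^T + b
y_coursera = (W @ x.T + b[:, None]).T    # Z = W A + b, with samples in columns

print(np.allclose(y_keras, y_pytorch))   # True
print(np.allclose(y_keras, y_coursera))  # True
```

So the convention is just a bookkeeping choice; the arithmetic is identical either way.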

Correct answer by tjt on January 4, 2021
