TransWikia.com

Calculating error in each layer of neural network

Data Science Asked on July 10, 2021

I am following Andrew Ng's course to implement a neural network: https://www.youtube.com/watch?v=x_Eamf8MHwU&t.
In this course the bias is folded into a single matrix together with the weights.
I get an error that I have not been able to figure out for days.
The network I'm trying to simulate has 2 nodes in the input layer, 3 in the hidden layer, and 2 in the output layer.
Here is the code:

def feed_forward(a):
    for w in theta:
        a = np.concatenate((one, a))   # prepend the bias unit
        a = sigmoid(np.dot(w, a))
        activation.append(a)
    delta[network_length - 2] = error(a, y)

#delta is the error in each layer

def back_propagate():
    for l in range(network_length - 3, -1, -1):
        delta[l] = np.dot(theta[l + 1].T, delta[l + 1]) * sigmoid_prime(activation[l])

The last line raises:

  delta[l]=np.dot(theta[l+1].T,delta[l+1])*sigmoid_prime(activation[l])
ValueError: operands could not be broadcast together with shapes (4,1) (3,1)


I don't know why the shapes of activation[l] and np.dot(theta[l+1].T, delta[l+1]) don't match.
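For reference, here is a minimal shape walk-through of the 2-3-2 setup with the bias folded into each weight matrix (the variable names are hypothetical and the zeros only stand in for learned weights; I am tracking shapes, not values). It reproduces the mismatch:

```python
import numpy as np

# 2-3-2 network with the bias column folded into each theta
theta = [np.zeros((3, 3)),   # hidden layer: 3 units x (2 inputs + 1 bias)
         np.zeros((2, 4))]   # output layer: 2 units x (3 hidden + 1 bias)

delta_out = np.zeros((2, 1))          # output-layer error
back = np.dot(theta[1].T, delta_out)  # (4, 2) @ (2, 1) -> (4, 1)
hidden = np.zeros((3, 1))             # hidden activations, stored without a bias row

print(back.shape, hidden.shape)  # (4, 1) vs (3, 1): the reported mismatch
print(back[1:].shape)            # (3, 1): dropping the bias row aligns the shapes
```

The extra row in `back` corresponds to the bias unit, which has no stored activation, so its backpropagated error would have to be discarded before the elementwise multiply with `sigmoid_prime(activation[l])`.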

