TransWikia.com

Neural Network not learning when more than 1 training data is given

Data Science Asked by Broseph_Stally on April 20, 2021

I am very new to neural networks and data science in general, and I wanted to try my hand at making a simple neural network in Python.

I tried to make a neural network from scratch, hoping to start off with one that can learn to draw a regression line.

Here is my code; it's pretty messy.

import numpy as np
import matplotlib.pyplot as plt
import random

class Layer: #format is input multiplied weights
    def __init__(self,n_inputs,n_neurons):
        self.n_inputs = n_inputs
        self.n_neurons = n_neurons
        self.weights = np.random.randn(n_inputs,n_neurons)
        self.biases = np.zeros((1,n_neurons))


    def forward(self, inputs):
        self.prevX = np.array(inputs)
        self.output = np.dot(inputs,self.weights)+self.biases
        self.ReLU = np.maximum(0,self.output)

    def Correction(self,target,Lrate,ReLUTrue = True):
        if ReLUTrue:
            output = self.ReLU.copy()
        else:
            output = self.output.copy()
        self.biases += 2*(target-output)*Lrate
        dWeights = np.dot(self.prevX.T,2*(target-output))
        dX = np.zeros((1,self.n_inputs))
        for j in range(self.n_neurons):
            dX += 2*(target[0][j]-output[0][j])*self.weights.T[j]
        self.targetprevX = self.prevX + dX*Lrate/self.n_neurons
        self.weights += dWeights*Lrate

class Network:
    def __init__(self):
        self.layers=[]

    def appendLayer(self,layerObject):
        self.layers.append(layerObject)

    def forward(self,inputs):
        for i in range(0,len(self.layers)-1):
            self.layers[i].forward(inputs)
            inputs = self.layers[i].ReLU.copy()
        self.layers[-1].forward(inputs)

    def backProp(self,target,Lrate):
        for i in range(-1,-len(self.layers)-1,-1):
            self.layers[i].Correction(target,Lrate)
            target = self.layers[i].targetprevX.copy()

    def UpdateLayers(self):
        for layer in self.layers:
            layer.Update()

    def Output(self):
        return self.layers[-1].output

neunetwork = Network()
neunetwork.appendLayer(Layer(1,8))
neunetwork.appendLayer(Layer(8,8))
neunetwork.appendLayer(Layer(8,1))

for iter in range(0,100):
    X = random.uniform(-5,5)
    Y = X**2
    neunetwork.forward([[X]])
    print("error:",neunetwork.Output()-Y)
    neunetwork.backProp([[Y]],0.001)

plotXs=[]
plotYs=[]
for points in range(0,100):
    X=random.uniform(-5,5)
    neunetwork.forward([[X]])
    Y=neunetwork.Output()[0][0]
    plotXs.append(X)
    plotYs.append(Y)
plt.plot(plotXs,plotYs,'ok')
plt.show()

The program converges to the correct value when a single constant value is passed, but it never converges when multiple different values are passed. My understanding of neural networks is very limited; I am just doing this as a little project for my own learning, so pardon me for any incorrect use of terms.

Thank you for your time.

One Answer

The input has the following shape:

X = random.uniform(-5,5)
neunetwork.forward([[X]])

print(np.array([[X]]).shape) # == (1,1)

You could use numpy to generate X and Y in batches:

X = np.random.uniform(-5, 5, size=(10, 1))
Y = X**2
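As a quick sanity check of the shapes this produces (a sketch using NumPy's `np.random.uniform`, which, unlike `random.uniform` from the standard library, accepts a `size` argument):

```python
import numpy as np

# Draw a batch of 10 inputs in [-5, 5), one feature per row
X = np.random.uniform(-5, 5, size=(10, 1))
Y = X**2  # element-wise square, same shape as X

print(X.shape, Y.shape)  # (10, 1) (10, 1)
```

With this layout, each row is one training sample, which matches the `(1, n_inputs)` convention the `Layer` class already uses for a single sample.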

It would help to get a copy of the error message to make sure this is the problem.

The forward/backprop looks right in terms of accumulating gradients. I'm not sure about the following piece of code though:

dX = np.zeros((1,self.n_inputs))
for j in range(self.n_neurons):
    dX += 2*(target[0][j]-output[0][j])*self.weights.T[j]
self.targetprevX = self.prevX + dX*Lrate/self.n_neurons

I think you can utilize a dot product here.
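For instance, the loop is a weighted sum of the rows of `weights.T`, which is exactly a matrix product. A minimal sketch, assuming the shapes from the question's `Layer` class (`weights` is `(n_inputs, n_neurons)`, `target` and `output` are `(1, n_neurons)`):

```python
import numpy as np

n_inputs, n_neurons = 3, 4
rng = np.random.default_rng(0)
weights = rng.standard_normal((n_inputs, n_neurons))
target = rng.standard_normal((1, n_neurons))
output = rng.standard_normal((1, n_neurons))

# Loop version from the question
dX_loop = np.zeros((1, n_inputs))
for j in range(n_neurons):
    dX_loop += 2 * (target[0][j] - output[0][j]) * weights.T[j]

# Equivalent dot product: (1, n_neurons) @ (n_neurons, n_inputs) -> (1, n_inputs)
dX_dot = np.dot(2 * (target - output), weights.T)

print(np.allclose(dX_loop, dX_dot))  # True
```

Besides being shorter, the vectorized form also generalizes to batches: if `target` and `output` have shape `(batch, n_neurons)`, the same `np.dot` yields a `(batch, n_inputs)` result with no code change.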

Answered by FreedomToWin on April 20, 2021
