Data Science Asked by Vasco Ferreira on May 4, 2021
I’m writing an article about business management of wine companies where I use a Multi-Layer Perceptron Network.
My teacher then asked me to write an equation that lets me calculate the output of the network. My answer was that, due to the nature of multi-layer perceptron networks, there is no single equation per se. What I have is a table of weights and biases. I can then use this formula:
$$f(x) = \left(\sum_{i=1}^{m} w_i \, x_i\right) + b$$
Where:

- $m$ is the number of neurons in the previous layer,
- $w_i$ is the weight of input $i$,
- $x_i$ is the input value,
- $b$ is the bias.

I do this for each neuron in the hidden layers and in the output layer.
She showed me an example from another work of hers (image at the bottom), telling me that it should be something like that. Looking at the chart, I suppose it is a logistic regression.
So, my questions are the following: is my layer-by-layer answer correct, and is there a way to express the network's output as a single equation like in her example?
Edit 1: I didn't write it in the formula, but I do also have activation functions (ReLU).
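For concreteness, here is roughly what I am computing, as a small NumPy sketch with made-up placeholder weights (my real network uses the values from the weight table, and ReLU as noted in Edit 1):

```python
import numpy as np

def relu(z):
    # ReLU activation: element-wise max(0, z)
    return np.maximum(0, z)

# Hypothetical 3-input, 2-hidden-neuron, 1-output network (placeholder values)
W1 = np.array([[0.2, -0.5, 0.1],
               [0.4,  0.3, -0.2]])   # shape (2, 3): one row per hidden neuron
b1 = np.array([0.1, -0.3])           # one bias per hidden neuron
W2 = np.array([[0.7, -0.6]])         # shape (1, 2): one row for the output neuron
b2 = np.array([0.05])

def mlp_output(x):
    # Hidden layer: weighted sum + bias for each neuron, then ReLU
    h = relu(W1 @ x + b1)
    # Output layer: weighted sum + bias (linear output here)
    return W2 @ h + b2

print(mlp_output(np.array([1.0, 2.0, 3.0])))
```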
You are forgetting one element of the MLP, which is the activation function. If your activation function is linear, then you can simply flatten all the neurons into a single linear equation. The advantage of an MLP, however, is its non-linearities, so I suspect your network does have some activation (sigmoid? tanh? ReLU? etc.).
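As a sketch of why this matters, take a single hidden layer with weight matrices $W^{(1)}, W^{(2)}$ and biases $b^{(1)}, b^{(2)}$ (my notation, not yours). With a linear activation the two layers collapse into one linear map:

$$
\hat{y} = W^{(2)}\bigl(W^{(1)}x + b^{(1)}\bigr) + b^{(2)} = \bigl(W^{(2)}W^{(1)}\bigr)x + \bigl(W^{(2)}b^{(1)} + b^{(2)}\bigr),
$$

whereas with a non-linear activation $\sigma$ (e.g. ReLU) no such collapse is possible:

$$
\hat{y} = W^{(2)}\,\sigma\!\bigl(W^{(1)}x + b^{(1)}\bigr) + b^{(2)}.
$$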
As for your graph: you could simply output predictions from your MLP and plot exactly the scatter plot you have above. The only difference is that you wouldn't have a simple way of expressing the network in algebraic notation (as is done on the existing x-axis).
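A rough sketch of such a plot (the data here is a random placeholder; substitute your observed targets and MLP predictions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: replace with your observed targets and your MLP's predictions
y_true = np.random.rand(100)
y_pred = y_true + 0.1 * np.random.randn(100)

# Scatter of predictions against observed values
plt.scatter(y_pred, y_true, alpha=0.6)
plt.xlabel("MLP prediction")
plt.ylabel("Observed value")
plt.show()
```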
To describe networks effectively in text, you should look into matrix notation for the weights and inputs of each layer. Take a look at something like this to get started: https://www.jeremyjordan.me/intro-to-neural-networks/
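For example, one common layer-wise convention (not specific to that article) writes the activations of an $L$-layer network as

$$
a^{(0)} = x, \qquad a^{(l)} = \sigma^{(l)}\!\bigl(W^{(l)} a^{(l-1)} + b^{(l)}\bigr) \ \text{for } l = 1,\dots,L, \qquad \hat{y} = a^{(L)},
$$

where $W^{(l)}$ and $b^{(l)}$ are the weight matrix and bias vector of layer $l$, and $\sigma^{(l)}$ is its activation function. Your table of weights and biases is exactly the collection of $W^{(l)}$ and $b^{(l)}$.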
Correct answer by Oliver Foster on May 4, 2021