Data Science Asked on May 10, 2021
Machine Learning books generally explain that the error computed for a given sample $i$ is:
$e_i = y_i - \hat{y}_i$
Where $y$ is the target output and $\hat{y}$ is the actual output given by the network. From these errors, a loss function $L$ is calculated:
$L = \frac{1}{2N}\sum^{N}_{i=1} e_i^2$
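As a minimal sketch of the scalar case (the targets and outputs below are made-up values, not from the question):

```python
import numpy as np

# Hypothetical targets y_i and network outputs \hat{y}_i for N = 4 samples.
y = np.array([1.0, 0.0, 1.0, 0.0])       # target outputs
y_hat = np.array([0.9, 0.2, 0.7, 0.1])   # actual network outputs

e = y - y_hat                  # per-sample error e_i = y_i - \hat{y}_i
N = len(y)
L = np.sum(e ** 2) / (2 * N)   # L = 1/(2N) * sum of squared errors
print(L)                       # -> 0.01875
```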
The above scenario is explained for a binary classification or regression problem. Now, let's assume an MLP network with $m$ neurons in the output layer for a multiclass classification problem (generally one neuron per class).
What changes in the equations above? Since we now have multiple outputs, should $e_i$ and $y_i$ become vectors?
You are mixing various concepts:
Answered by Mikedev on May 10, 2021