Artificial Intelligence: Asked by Ali KHalili on September 27, 2020
Why is L2 loss more commonly used in neural networks than other loss functions?
What is the reason for L2 being the default choice in neural networks?
I'll cover both L2-regularized loss and Mean-Squared Error (MSE):
MSE:
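For reference (the standard definition, not quoted from the original answer), the mean-squared error over N predictions \hat{y}_i against targets y_i is

    \mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2

Minimizing MSE corresponds to maximum likelihood estimation under an additive Gaussian noise model, which mirrors the MAP argument for L2 regularization below.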
L2 Regularization:
Using L2 regularization is equivalent to placing a Gaussian prior on your model's parameters (see https://stats.stackexchange.com/questions/163388/why-is-the-l2-regularization-equivalent-to-gaussian-prior). If you frame your problem as Maximum A Posteriori (MAP) inference and your likelihood model p(y|x) is Gaussian, then the posterior distribution over parameters p(x|y) will also be Gaussian. From Wikipedia: "If the likelihood function is Gaussian, choosing a Gaussian prior over the mean will ensure that the posterior distribution is also Gaussian" (source: https://en.wikipedia.org/wiki/Conjugate_prior).
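A minimal NumPy sketch of that equivalence (illustrative only; the linear-Gaussian model and the names X, y, lam, sigma2, tau2 are my assumptions, not from the original answer): minimizing the L2-regularized squared loss gives the same parameters as the MAP estimate under a Gaussian likelihood and a zero-mean Gaussian prior with variance sigma2 / lam.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # design matrix
    w_true = np.array([1.5, -2.0, 0.5])
    sigma2 = 0.25                            # noise variance in the Gaussian likelihood p(y|x)
    y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=100)

    lam = 1.0                                # L2 regularization strength

    # (1) Minimizer of ||y - X w||^2 + lam * ||w||^2 (the L2-regularized squared loss)
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

    # (2) MAP estimate with likelihood y ~ N(X w, sigma2 I) and prior w ~ N(0, tau2 I),
    #     where tau2 = sigma2 / lam; the posterior is Gaussian and its mode is:
    tau2 = sigma2 / lam
    w_map = np.linalg.solve(X.T @ X + (sigma2 / tau2) * np.eye(3), X.T @ y)

    print(np.allclose(w_ridge, w_map))       # True: the two estimates coincide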
As in the case above, the L2 loss is continuously differentiable over its entire domain, unlike the L1 loss, which is not differentiable at zero.
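A small sketch of that smoothness point (again an illustrative example, not part of the original answer): the gradient of the squared loss passes smoothly through zero, whereas the (sub)gradient of the absolute loss jumps between -1 and +1.

    import numpy as np

    r = np.linspace(-1.0, 1.0, 5)   # residuals y - y_hat, including 0

    grad_l2 = 2.0 * r               # d/dr r^2: continuous everywhere, shrinks to 0 at r = 0
    grad_l1 = np.sign(r)            # subgradient of |r|: jumps from -1 to +1
                                    # (np.sign returns 0 at r = 0, where |r| is not differentiable)

    print(grad_l2)                  # [-2. -1.  0.  1.  2.]
    print(grad_l1)                  # [-1. -1.  0.  1.  1.]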
Correct answer by Ryan Sander on September 27, 2020