Data Science Asked by AUser240 on February 6, 2021
In a linear model, regularization shrinks the slope. Is the underlying assumption that fitting a linear model on a finite training sample overfits by almost always producing a slope larger in magnitude than the one we would get with infinitely many observations? What is the intuition?
Regularization helps smooth high-dimensional models. Take this example:
y = x_1 + eps*(x_2 + ... + x_100)
Let's say eps is very small. It doesn't seem very useful to store those 99 coefficients, does it? How do we fit the model in such a way that it drops negligible coefficients? That is exactly what L1 regularization does: its penalty can shrink small coefficients all the way to zero.
Every other type of regularization has its own geometric intuition; L2, for example, shrinks all coefficients smoothly towards zero but never exactly to zero, as the sketch below illustrates.
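To make this concrete, here is a minimal sketch using scikit-learn on data generated from the example above. The penalty strengths (alpha values) are illustrative choices, not tuned values:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p, eps = 500, 100, 0.01

# Build data following the example: y = x_1 + eps * (x_2 + ... + x_100) + noise.
X = rng.normal(size=(n, p))
y = X[:, 0] + eps * X[:, 1:].sum(axis=1) + 0.1 * rng.normal(size=n)

# Penalty strengths are illustrative, not tuned.
lasso = Lasso(alpha=0.05).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

print("Lasso non-zero coefficients:", int(np.sum(lasso.coef_ != 0)))
print("Ridge max |coef| on x_2..x_100:", np.abs(ridge.coef_[1:]).max())
```

On this data the Lasso typically keeps only the coefficient on x_1 and sets the 99 negligible ones exactly to zero, while the Ridge coefficients on x_2 through x_100 stay small but non-zero. That is the qualitative difference between the two penalties.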
Answered by Benoit Descamps on February 6, 2021