Data Science Asked by BSP on November 19, 2020
Most regularization techniques (L1, L2) penalize only the weight terms; the bias is left out. From my understanding, a large bias does not make a neuron sensitive to its inputs in the same way that large weights do, and vice versa, which suggests the bias is just as important as the weights.
Can anyone help me understand why the bias is not regularized?
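Concretely, the objective I have in mind is the usual L2-regularized loss

    L(w, b) = L_data(w, b) + λ Σᵢ wᵢ²

where only the weights wᵢ enter the penalty term; the biases b appear in the data loss but not in the regularizer.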
The point of regularization is to avoid overfitting, and overfitting happens when too many predictor variables (i.e. neurons) contribute to the outcome. So, by regularizing, you are essentially excluding some neurons by shrinking their incoming weights toward zero.
However, the bias is an intrinsic property of every neuron. It is not a connection transporting the result of a neuron from the previous layer. You can imagine all the biases in the network as connections to a single virtual neuron with a constant output of one, but since that output is a constant, it is not the result of a computation and does not depend on the input.
Answered by Igor F. on November 19, 2020
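To make the "virtual neuron" picture concrete, here is a minimal NumPy sketch (the numbers are arbitrary placeholders) showing that a bias is algebraically identical to a weight on an extra input fixed at 1:

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])   # inputs to a single neuron
w = np.array([0.8, 0.1, -0.4])   # weights
b = 2.0                          # bias

# Standard formulation: weighted sum of inputs plus bias.
z1 = w @ x + b

# Equivalent formulation: append a constant input of 1
# and treat the bias as just another weight.
x_aug = np.append(x, 1.0)
w_aug = np.append(w, b)
z2 = w_aug @ x_aug

assert np.isclose(z1, z2)  # both give the same pre-activation
```

The constant-1 input never varies with the data, which is exactly why penalizing its "weight" (the bias) does nothing to reduce the network's sensitivity to the inputs.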
Imagine you have a bunch of points that lie roughly on the line y = x. Although you could find a polynomial that passes through every single point, you could argue that the line y = x is a better approximation because it doesn't fit the noise in each point.
That is the point of regularization. When a network has smaller weights, a few small differences in the input don't make much of an impact, so it is harder for the network to learn the noise in your data. Biases are different: they don't affect a neuron in the same way a weight does. The bias acts more like a threshold; after collecting the weighted sum of the previous layer's activations, it determines whether the result is large enough to push the neuron's output positive. For this reason, regularization focuses only on the weights. Hope this helps!
Answered by BOSSrobot on November 19, 2020
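In practice this convention shows up directly in training code. Below is a minimal PyTorch sketch, with a placeholder model and placeholder hyperparameters, of applying L2 regularization (weight decay) to the weights while leaving the biases unpenalized:

```python
import torch
import torch.nn as nn

# Placeholder model; any network with weights and biases works the same way.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# Split parameters: decay the weight matrices, leave the biases alone.
decay, no_decay = [], []
for name, param in model.named_parameters():
    (no_decay if name.endswith("bias") else decay).append(param)

optimizer = torch.optim.SGD(
    [
        {"params": decay, "weight_decay": 1e-4},    # L2 penalty on weights
        {"params": no_decay, "weight_decay": 0.0},  # no penalty on biases
    ],
    lr=0.01,
)
```

Parameter groups are the standard PyTorch mechanism for giving weights and biases different weight-decay settings.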