TransWikia.com

Loss function in GradientBoostingRegressor

Data Science Asked by user3902660 on December 24, 2020

Scikit-Learn GradientBoostingRegressor

I was looking at the Scikit-Learn documentation for GradientBoostingRegressor.

Here it says that we can use ‘ls’ as a loss function, which stands for least squares regression. But I am confused, since least squares regression is a method that minimizes the SSE loss function.

So shouldn’t they mention SSE here?

2 Answers

It would seem that you are over-interpreting what are essentially just convenient shorthand names for the model arguments, not formal terminology; here, "‘ls’ refers to least squares regression" should be read as "'ls' is the loss function used in least-squares regression".

Formally, you do have a point, of course: sse would be a more appropriate name here. Discussions about such naming conventions are not uncommon in the community; see for example the thread loss function name consistency in gradient boosting (which, by the way, was resolved here). And you would be most welcome to open a relevant issue about the convention used here.
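To see why the names are used interchangeably in practice: for the squared-error ('ls') loss, the negative gradient with respect to the predictions is exactly the residual, so gradient steps on 'ls' and minimizing SSE coincide. A minimal numpy sketch (the factor 1/2 is a common convention that cancels nothing essential; it just simplifies the gradient):

```python
import numpy as np

y = np.array([3.0, -0.5, 2.0, 7.0])   # targets
F = np.array([2.5, 0.0, 2.0, 8.0])    # current model predictions

# Squared-error ("ls") loss: L(F) = 1/2 * sum((y - F)^2)
loss = 0.5 * np.sum((y - F) ** 2)

# Analytic gradient: dL/dF_i = -(y_i - F_i), so the negative gradient
# is the residual y - F.
neg_grad = y - F

# Numerical check via finite differences
eps = 1e-6
num_grad = np.array([
    (0.5 * np.sum((y - (F + eps * np.eye(len(F))[i])) ** 2) - loss) / eps
    for i in range(len(F))
])

print(np.allclose(-num_grad, neg_grad, atol=1e-4))  # negative gradient == residual
```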

Correct answer by desertnaut on December 24, 2020

Note that the algorithm is called Gradient Boosting Regressor.

The idea is that you boost decision trees by following the gradient of a loss function, and that loss function can take several forms.

The algorithm fits each new decision tree to the errors (the negative gradients) of the previously fitted trees, and aggregates their predictions. That is where the loss function comes in.

The loss parameter specifies which loss function defines those gradients.
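The fit-to-the-residuals loop above can be sketched from scratch for the squared-error loss. This is an illustrative toy (a hand-rolled decision stump as the weak learner; scikit-learn's implementation is far more general), not the library's actual code:

```python
import numpy as np

def fit_stump(x, residual):
    """Find the single split on x that best fits the residual in squared error."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred_l, pred_r = left.mean(), right.mean()
        sse = ((left - pred_l) ** 2).sum() + ((right - pred_r) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, pred_l, pred_r)
    _, t, pl, pr = best
    return lambda z: np.where(z <= t, pl, pr)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 50)

F = np.full_like(y, y.mean())        # stage 0: constant prediction
lr = 0.5                             # learning rate (shrinkage)
for _ in range(50):
    residual = y - F                 # negative gradient of the squared-error loss
    stump = fit_stump(x, residual)   # fit the next weak learner to the residuals
    F += lr * stump(x)               # boosting update

print(np.mean((y - F) ** 2) < np.var(y))  # training MSE well below the initial variance
```

Swapping in a different loss only changes the residual line: each stage always fits the weak learner to the current negative gradient.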

Answered by Carlos Mougan on December 24, 2020
