Data Science Asked by Nyut Nyuka on May 22, 2021
I was wondering whether I can use polynomial regression as a weak learner in gradient boosting, but I read that decision trees are used for that, and I cannot find anything showing that other weak learners could be used.
The issue with using any linear model (polynomial regression is a particular case of a linear model, fit on polynomial features) is that an ensemble of linear models is still a linear model. So the family of models you can reach by boosting polynomial regressions is exactly the same as the family reachable with a single polynomial regression.

This doesn't happen with trees: a sum of trees cannot, in general, be expressed as a single tree of the same size, which is why trees are the usual weak learner in gradient boosting machines.
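A minimal sketch of this equivalence, assuming scikit-learn's `LinearRegression` and `PolynomialFeatures` and a plain squared-loss boosting loop (the synthetic data, degree, learning rate, and number of stages below are illustrative choices, not anything prescribed above):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# One shared polynomial feature map: every stage is linear in these features.
Phi = PolynomialFeatures(degree=3, include_bias=True).fit_transform(X)

# Squared-loss gradient boosting: each stage fits the current residuals.
n_stages, lr = 50, 0.5
pred = np.zeros_like(y)
coef_sum = np.zeros(Phi.shape[1])
for _ in range(n_stages):
    stage = LinearRegression(fit_intercept=False).fit(Phi, y - pred)
    pred += lr * stage.predict(Phi)
    coef_sum += lr * stage.coef_

# A single polynomial regression on the same features.
single = LinearRegression(fit_intercept=False).fit(Phi, y)

# The ensemble's summed coefficients match the single model's coefficients.
print(np.allclose(coef_sum, single.coef_))  # True
```

Summing each stage's coefficients gives one coefficient vector, i.e. the boosted ensemble is itself a single polynomial regression, and with enough stages it converges to the ordinary least-squares fit on the same features.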
Answered by David Masip on May 22, 2021
Any machine learning algorithm can be used as a weak learner for gradient boosting; some will work better than others.
Linear regression tries to minimize the sum of squared errors (SSE). The first linear regression model might not leave any residual structure for subsequent linear regression models to fit, so there would be no advantage to gradient boosting in that case.
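As a rough illustration of that point, here is a generic squared-loss boosting loop with a pluggable base estimator (the helper `boost`, the synthetic data, and the chosen depth and stage count are assumptions for the sketch, not part of the answer):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def boost(base_estimator, X, y, n_stages=5, lr=1.0):
    """Squared-loss gradient boosting; returns the training SSE after each stage."""
    pred = np.zeros_like(y, dtype=float)
    sse = []
    for _ in range(n_stages):
        stage = clone(base_estimator).fit(X, y - pred)  # fit the current residuals
        pred += lr * stage.predict(X)
        sse.append(round(float(np.sum((y - pred) ** 2)), 3))
    return sse

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=300)

print("trees :", boost(DecisionTreeRegressor(max_depth=2), X, y))  # SSE keeps shrinking
print("linear:", boost(LinearRegression(), X, y))                  # flat after stage 1
```

With shallow trees the training SSE keeps dropping stage after stage, while with `LinearRegression` the first stage already minimizes the SSE over linear models, so later stages fit residuals with no remaining linear structure and the SSE stays flat.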
Answered by Brian Spiering on May 22, 2021