Data Science: Asked on April 5, 2021
For neural networks we have the universal approximation theorem, which states that a neural network can approximate any continuous function on a compact subset of $\mathbb{R}^n$ to arbitrary accuracy.
Is there a similar result for gradient boosted trees? It seems reasonable since you can keep adding more branches, but I cannot find any formal discussion of the subject.
EDIT: My question seems very similar to "Can regression trees predict continuously?", though it may not be asking exactly the same thing. See that question for relevant discussion.
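This is not a formal result, but the intuition is easy to check empirically. Below is a minimal sketch (my own illustration, assuming scikit-learn's `GradientBoostingRegressor` and a toy $\sin$ target, neither of which comes from the question) showing the approximation error on a compact interval shrinking as trees are added:

```python
# Empirical sketch: approximation error of a gradient boosted ensemble
# on a continuous target over a compact interval, as the ensemble grows.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2 * np.pi, size=(500, 1))  # compact subset of R^1
y = np.sin(X).ravel()                           # continuous target

X_test = np.linspace(0.0, 2 * np.pi, 200).reshape(-1, 1)
y_test = np.sin(X_test).ravel()

for n_trees in (10, 100, 1000):
    model = GradientBoostingRegressor(n_estimators=n_trees, max_depth=3)
    model.fit(X, y)
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"{n_trees:5d} trees -> test MSE {mse:.6f}")
```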
Yes: create a region for each data point (i.e., memorize the training data), since a sufficiently deep tree can place every training point in its own leaf. Thus it is possible for gradient boosted trees to fit any training data exactly, but such a model would have limited generalization to new data.
Answered by Brian Spiering on April 5, 2021
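As a minimal sketch of this memorization argument (using a single unrestricted scikit-learn `DecisionTreeRegressor` as a stand-in, since one sufficiently deep tree already suffices and boosting only adds capacity):

```python
# Memorization sketch: an unrestricted regression tree gives every
# distinct training point its own leaf, so training error hits zero
# even on pure-noise labels; between points, predictions stay
# piecewise-constant, which is where generalization suffers.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = rng.normal(size=50)                 # arbitrary labels (pure noise)

tree = DecisionTreeRegressor()          # max_depth=None: grow until pure
tree.fit(X, y)

print("leaves:", tree.get_n_leaves())                         # ~50
print("training MSE:", np.mean((tree.predict(X) - y) ** 2))   # 0.0
```

The tree reaches zero training error even on pure-noise labels, which is exactly why memorization says nothing about generalization.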