Theoretical Computer Science Asked by user27182 on February 27, 2021
The fundamental theorem of statistical learning gives an equivalence between uniform convergence of the empirical risk and learnability in the PAC framework.
I have only seen this stated in the case of binary classification with the 0-1 loss.
Does a result of this form hold in more general settings? For example: margin-based classification rules, regression, multi-class classification, …?
Another statement of this question could be: under what circumstances does uniform convergence of the empirical risk imply PAC learning? (I am most interested in this direction of implication.)
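For concreteness, here is the standard argument I have in mind for that direction, written as a small LaTeX sketch. The notation (L_S for the empirical risk, L_D for the true risk, m_H^{UC} for the uniform-convergence sample complexity) and the assumption that the loss is bounded in [0,1] are my own choices, not taken from a particular reference.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Uniform convergence property (assumed): for every distribution $\mathcal D$,
% a sample $S \sim \mathcal D^m$ of size $m \ge m_{\mathcal H}^{UC}(\epsilon,\delta)$
% satisfies, with probability at least $1-\delta$,
\[
\sup_{h \in \mathcal H} \bigl| L_S(h) - L_{\mathcal D}(h) \bigr| \le \epsilon .
\]
% On that event any empirical risk minimizer
% $h_S \in \arg\min_{h \in \mathcal H} L_S(h)$ is $2\epsilon$-optimal, since
% (with $h^\star \in \arg\min_{h \in \mathcal H} L_{\mathcal D}(h)$)
\[
L_{\mathcal D}(h_S) \le L_S(h_S) + \epsilon \le L_S(h^\star) + \epsilon \le L_{\mathcal D}(h^\star) + 2\epsilon .
\]
% Hence $m_{\mathcal H}^{PAC}(\epsilon,\delta) \le m_{\mathcal H}^{UC}(\epsilon/2,\delta)$:
% uniform convergence implies (agnostic) PAC learnability via ERM.
\end{document}

Since this chain of inequalities uses nothing about the loss beyond boundedness, the same argument would seem to cover multi-class classification, margin-based losses, and regression with a bounded loss; my question is really about how general this can be made and where it is written down.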
Please provide references if you have them.
It turns out the answer is yes; a result of this form can be found in Part 3 (e.g., Chapter 19) of Neural Network Learning: Theoretical Foundations by Anthony and Bartlett.
Answered by user27182 on February 27, 2021