Data Science Asked by Mossmyr on September 5, 2021
In classification tasks, we can interpret the output vector as how "confident" the model is that the input has a certain label. For example,
y = [0.01 0.20 0.99 0.10]
would mean the model is 99% certain the input has the label at index 2 and 1% certain it has the label at index 0.
My question is: Is there an equivalent for regression tasks?
If my output looks like
y = 0.33
is there a way to measure how "certain" the model is that the output really is 0.33?
There are many approaches to evaluating the uncertainty of a regression model.
There are model-dependent approaches, such as the Bayesian approach, which gives you a predictive distribution instead of a single point estimate, so you can read off a confidence (credible) interval for each prediction. This applies to linear models as well as to deep neural networks.
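As a minimal sketch of the Bayesian-linear-model case (the data here is synthetic, made up for the example), scikit-learn's BayesianRidge can return a predictive standard deviation alongside the mean prediction, which you can turn into an interval:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Toy 1-D regression data (synthetic, for illustration only)
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X.ravel() + rng.normal(scale=0.3, size=200)

model = BayesianRidge()
model.fit(X, y)

# return_std=True gives the standard deviation of the predictive distribution
X_new = np.array([[0.66]])
y_mean, y_std = model.predict(X_new, return_std=True)

# Approximate 95% predictive interval (mean +/- 1.96 standard deviations)
print(f"prediction: {y_mean[0]:.3f} +/- {1.96 * y_std[0]:.3f}")
```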
There are also model-free approaches, such as bootstrapping or quantile regression, which let you estimate uncertainty while treating the prediction algorithm as a black-box model.
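A rough illustration of the bootstrap idea (model choice and data are arbitrary here): refit the same black-box model on resampled copies of the training data and look at the spread of its predictions at the point of interest.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic data, for illustration only
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

X_new = np.array([[0.66]])
n_boot = 100
preds = []

for _ in range(n_boot):
    # Resample the training set with replacement and refit the black-box model
    idx = rng.randint(0, len(X), size=len(X))
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[idx], y[idx])
    preds.append(model.predict(X_new)[0])

# Empirical 95% interval from the bootstrap distribution of predictions
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"point estimate: {np.mean(preds):.3f}, 95% interval: [{lo:.3f}, {hi:.3f}]")
```

Quantile regression plays a similar black-box role: for instance, GradientBoostingRegressor(loss="quantile", alpha=0.95) fits the 95th conditional percentile directly, and fitting two such models gives an interval.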
Correct answer by mirimo on September 5, 2021
For regression, it is more common to look at confidence intervals. You choose a confidence level (e.g. 95%), and the lower and upper bounds of the band are then computed from the fitted model at that level.
Reference: https://stackoverflow.com/questions/27116479/calculate-confidence-band-of-least-square-fit
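A small sketch of the idea (assuming statsmodels is available; the data is synthetic): fit an ordinary least squares model and ask for the confidence interval of the fitted mean at new points.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data, for illustration only
rng = np.random.RandomState(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)

# Ordinary least squares fit with an intercept term
X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

# 95% confidence band for the fitted line at new x values
x_new = np.linspace(0, 10, 5)
pred = results.get_prediction(sm.add_constant(x_new))
print(pred.conf_int(alpha=0.05))  # lower and upper bound for each new point
```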
Answered by Donald S on September 5, 2021