Cross Validated — asked by thecity2 on January 26, 2021
I'm building a Bayesian Neural Network and am trying to understand how to calibrate its uncertainty estimates. In a paper by Seedat and Kanan (https://chriskanan.com/wp-content/uploads/seedat2019.pdf), Expected Calibration Error (ECE) is defined as follows:
$$
\mathrm{ECE} = \mathbb{E}\left[\,\left|\mathbb{P}(\hat{Y}=Y \mid \hat{P}=p) - p\right|\,\right]
= \sum_{m=1}^{M} \frac{|B_m|}{n}\left|\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\right|
$$
where $M$ is the number of bins, $|B_m|$ is the number of samples falling in bin $B_m$, $n$ is the total number of samples, $\mathrm{acc}(B_m)$ is the average accuracy within bin $B_m$, and $\mathrm{conf}(B_m)$ is the confidence within bin $B_m$.
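For concreteness, here is a minimal NumPy sketch of how I am currently computing the binned quantities. The binning scheme and names are my own, and I am assuming that $\mathrm{conf}(B_m)$ is simply the mean of each sample's top predicted probability within the bin, which is exactly the part I am unsure about:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE as I understand it: bin samples by their top predicted
    probability, then average |acc(B_m) - conf(B_m)| over bins,
    weighted by the fraction of samples in each bin."""
    confidences = probs.max(axis=1)        # top predicted probability per sample
    predictions = probs.argmax(axis=1)     # predicted class per sample
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(labels)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue
        acc_bin = accuracies[in_bin].mean()    # acc(B_m): average accuracy in the bin
        conf_bin = confidences[in_bin].mean()  # conf(B_m): my guess -- mean top probability
        ece += (in_bin.sum() / n) * abs(acc_bin - conf_bin)
    return ece
```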
I understand everything here except what the measure $\mathrm{conf}$ actually is. Does it involve the confidence intervals for the predictions?