arcxtriy (OP) wrote on December 29, 2022 at 6:21 PM, replying to HateRedditCantQuitit:
If p(dog) = p(cat) = 0.5 then it's fine, because it tells me the classifier is uncertain. Isn't it?
arcxtriy (OP) wrote on December 29, 2022 at 3:48 PM, replying to Zealousideal_Low1287:
But then it is not guaranteed that the probabilities for a sample sum up to 1. That seems strange, right?
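(Editor's note: a minimal sketch of the point above, reusing the hypothetical scores quoted in the next comment. If each class is calibrated independently, the per-class scores need not form a distribution; a common pragmatic fix is to renormalize.)

```python
import numpy as np

# Hypothetical per-class calibrated scores for one sample, e.g. the
# output of K independent binary (one-vs-rest) calibrators.
calibrated = np.array([0.01, 0.03, 0.2, 0.8, 0.04])

print(calibrated.sum())  # 1.08 -- does not sum to 1

# Pragmatic fix: renormalize so the scores form a proper distribution.
normalized = calibrated / calibrated.sum()
print(normalized, normalized.sum())  # sums to 1.0
```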
arcxtriy (OP) wrote on December 29, 2022 at 3:05 PM, replying to Zealousideal_Low1287:
But how would that work? Assume I have predictions for one sample [0.01, 0.03, 0.2, 0.8, 0.04] and for another one [0.3, 0.2, 0.1, 0.1, 0.05]. Do you suggest learning the Platt scaling across samples or across classes?
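(Editor's note: a minimal sketch of what per-class Platt scaling could look like, assuming a one-vs-rest setup: one sigmoid calibrator is fit per class, across samples, on a held-out calibration set. The data, shapes, and variable names here are hypothetical; fitting `sklearn.linear_model.LogisticRegression` on a single score column is a standard way to fit the Platt sigmoid.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical calibration set: uncalibrated scores of shape
# (n_samples, n_classes), e.g. logits, plus integer class labels.
rng = np.random.default_rng(0)
n_samples, n_classes = 1000, 5
scores = rng.normal(size=(n_samples, n_classes))
labels = rng.integers(0, n_classes, size=n_samples)

# Per-class ("one-vs-rest") Platt scaling: fit one sigmoid a*s + b
# per class, across samples.
calibrators = []
for k in range(n_classes):
    y_k = (labels == k).astype(int)      # binary one-vs-rest target
    clf = LogisticRegression()           # logistic fit = Platt sigmoid
    clf.fit(scores[:, [k]], y_k)
    calibrators.append(clf)

# Calibrated per-class probabilities. Note they need not sum to 1,
# which is exactly the concern raised in the comment above.
probs = np.column_stack(
    [c.predict_proba(scores[:, [k]])[:, 1] for k, c in enumerate(calibrators)]
)
print(probs[0], probs[0].sum())
```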
[D] SOTA Multiclass Model Calibration — submitted by arcxtriy on December 29, 2022 at 1:26 PM in MachineLearning (11 comments, 14 points)