Asked by AstroAllie on March 25, 2021
Let’s say I have trained a classifier that classifies images of animals into 10 different classes. And let’s say I have 20 different images of a particular animal, and because I know the photographer, I know with certainty that all 20 images are of the same animal. So I use my classifier to make a prediction on what animal it is and get 20 predictions, one for each image. The model predicts all the images to be a dog, with varying probabilities.
image 1: 80% dog
image 2: 90% dog
image 3: 75% dog
and so on.
What is the probability that the animal in question is a dog?
Let’s say the model also predicts cat, with smaller probabilities: 5%, 2%, 4%, and so on. What is the probability that it is a cat?
I’ve tried a few different approaches, applying Bayes’ theorem, but I keep getting numbers that add up to more than one. Could it really be just the average?
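To make the arithmetic concrete, here is a minimal NumPy sketch of the two combining schemes the question alludes to: simple averaging, and a Bayes-style pooling that assumes the per-image predictions are conditionally independent given the true class, with a uniform prior. The 3x3 array and the class order are hypothetical (seeded with the dog and cat numbers quoted above); the renormalization step at the end is what keeps the pooled result summing to one.

```python
import numpy as np

# Simplified toy: 3 images, 3 classes [cat, dog, other].
# Dog probabilities 0.80/0.90/0.75 and cat probabilities
# 0.05/0.02/0.04 come from the question; the "other" column is made up.
probs = np.array([
    [0.05, 0.80, 0.15],
    [0.02, 0.90, 0.08],
    [0.04, 0.75, 0.21],
])

# Option 1: average the per-image probability vectors.
avg = probs.mean(axis=0)  # sums to 1 by construction

# Option 2: Bayes-style pooling under a conditional-independence
# assumption and a uniform prior: multiply the per-class probabilities
# across images (sum logs for numerical stability), then renormalize.
log_pooled = np.log(probs).sum(axis=0)
pooled = np.exp(log_pooled - log_pooled.max())
pooled /= pooled.sum()  # without this step the numbers exceed 1

print("average:", avg.round(3))     # dog ~= 0.817
print("pooled: ", pooled.round(3))  # dog ~= 0.995, dominates even more
```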
If you feed the model 20 images at test time, the output should be an array of shape [20 x 10], where each row holds the probabilities of all 10 classes. In the example below, index 1 is dog with probability 0.9, so the model classified that image correctly. If the model classifies 19 images correctly as dog and 1 incorrectly as cat, its test accuracy is correct_preds / (correct_preds + wrong_preds) = 19/20 = 0.95.
[[0.0, 0.9, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0],
 [0.0, 0.8, 0.0, 0.0, 0.0, 0.1, 0.0, 0.1, 0.0, 0.0],
 ...,
 [0.0, 0.75, 0.0, 0.0, 0.0, 0.1, 0.0, 0.1, 0.05, 0.0]]
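As a sketch of that accuracy computation, assuming the [20 x 10] output above is a NumPy array, that class index 1 is dog, and that all 20 true labels are dog (these class positions are an assumption for illustration):

```python
import numpy as np

# Toy stand-in for the [20 x 10] output above: class index 1 = dog
# (an assumption), with one row deliberately wrong to reproduce the
# 19-correct / 1-wrong scenario described in the answer.
probs = np.tile([0.0, 0.8, 0.0, 0.0, 0.0, 0.1, 0.0, 0.1, 0.0, 0.0], (20, 1))
probs[0] = [0.0, 0.1, 0.0, 0.0, 0.0, 0.8, 0.0, 0.1, 0.0, 0.0]  # misclassified
true_labels = np.full(20, 1)  # all 20 images are the same dog

# Predicted class per image is the argmax of each row.
preds = probs.argmax(axis=1)

# test accuracy = correct_preds / (correct_preds + wrong_preds)
correct = int((preds == true_labels).sum())
wrong = len(true_labels) - correct
accuracy = correct / (correct + wrong)
print(accuracy)  # 19 / 20 = 0.95
```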
Answered by yakhyo_ on March 25, 2021