The images are identical except for the presence of the stripe on the side.
I am trying to classify the images into 2 classes: greenStripe and noGreenStripe.
I tried to use TensorFlow's retrain script with a small dataset (~40 pictures in each class and a batch size of 8), but the results were really bad. I am reluctant to commit to training with more data, as collecting it is time consuming.
What do you suggest? Is there a better approach or does the problem lie in the small training dataset?
I propose the following pipeline: see the handouts for class 14 and class 6.
Answered by Pedro Henrique Monforte on December 3, 2020
1) Could you upload sample images maybe? It would be easier to decide.
2) Your dataset is very small; training anything significant from scratch will almost certainly overfit. Take an existing pretrained model that already knows what a bag is (e.g. Mask R-CNN) and fine-tune it to your problem by adapting the loss function and parts of the architecture.
3) The actual framework should not matter: work with whichever you find convenient.
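The fine-tuning idea above can be sketched in Keras. This is a minimal sketch, not the answerer's exact method: it assumes a MobileNetV2 backbone (the answer mentions Mask R-CNN, but for plain two-class classification a lightweight classification backbone is the simpler fit), a frozen feature extractor, and a new binary head for greenStripe vs. noGreenStripe.

```python
# Hedged sketch: fine-tune a pretrained backbone instead of training from
# scratch. MobileNetV2 and all hyperparameters here are assumptions chosen
# for illustration, not taken from the original answer.
import tensorflow as tf

def build_model(weights="imagenet"):
    """Binary classifier: frozen pretrained backbone + small trainable head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights=weights)
    base.trainable = False  # freeze pretrained features; train only the head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),  # regularization matters with ~80 images
        tf.keras.layers.Dense(1, activation="sigmoid"),  # greenStripe vs. not
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

With only the small head being trained, ~40 images per class plus standard augmentation (flips, small rotations) is often enough to get a usable signal, which is the point of reusing pretrained features.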
Answered by Alex on December 3, 2020
The scientific answer would be, it depends.
If you are using any kind of deep net, then 40 images per class is far too few. It might be helpful to describe your problem setting in a bit more depth: are the bags always in the same place, or do they need to be localized first? These kinds of details would help other users with their recommendations.
As a first approach, before you try a deep net or any kind of ML, I would try a simple baseline. Do you know the approximate pixel value of your green stripe? You could then simply check whether that colour is present at all. This is rather coarse, but I would see how far it gets you, and it is useful to know whether your ML methods can beat this simple baseline. Subsequently, you could also try localizing the bag tags (in whatever way you like), cropping them, and then checking for the presence of the green stripe.
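The colour-presence baseline described above can be sketched in a few lines of NumPy. The thresholds below are assumptions for illustration; they would need tuning against the actual stripe colour in the real images.

```python
# Hedged baseline sketch: flag an image as "greenStripe" if enough pixels
# look green. The channel thresholds and the 1% fraction are assumptions,
# not values from the original answer.
import numpy as np

def has_green_stripe(rgb, frac_threshold=0.01):
    """Return True if at least `frac_threshold` of pixels are 'green'."""
    rgb = np.asarray(rgb, dtype=np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # crude rule: green channel is bright and dominates both other channels
    green = (g > 100) & (g > r + 40) & (g > b + 40)
    return bool(green.mean() >= frac_threshold)

# tiny synthetic check: a grey image with a green vertical stripe
img = np.full((64, 64, 3), 120, dtype=np.uint8)
img[:, 30:34] = (20, 200, 20)  # paint the stripe
```

A baseline like this is also a sanity check on the dataset itself: if a fixed colour rule already separates the two classes, the problem may not need a learned model at all.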
Answered by Felix van Doorn on December 3, 2020