I am training a model to detect buildings in satellite images of rural Africa. For labels, I use OpenStreetMap geometries. I use the TensorFlow Object Detection API with SSD Inception V2 as the model. I trained separate models on two different datasets (in different geographical regions). In one area, the model learns and converges quite quickly:
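For context, the labels come from OSM building footprints that I convert to axis-aligned bounding boxes, roughly like the sketch below (simplified; reprojection and tiling are omitted, and the file name is a placeholder):

```python
# Simplified sketch of turning OSM building footprints into bounding-box labels.
import json
from shapely.geometry import shape

def footprints_to_boxes(geojson_path):
    """Return [xmin, ymin, xmax, ymax] boxes for each building polygon."""
    with open(geojson_path) as f:
        features = json.load(f)["features"]
    boxes = []
    for feat in features:
        geom = shape(feat["geometry"])        # OSM polygon for one building
        xmin, ymin, xmax, ymax = geom.bounds  # axis-aligned bounding box
        boxes.append([xmin, ymin, xmax, ymax])
    return boxes
```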
When training the model in the other area, however, this happens:
Note that I use the exact same model, configuration, and batch size, the training area is the same size, etc. In the second case, the model's predictions change extremely rapidly, and I cannot see why. For example, here is a comparison of the predictions the model makes at 107k and 108k global steps:
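To make that comparison, I export the two checkpoints and run them on the same tile, roughly as in the sketch below (this assumes TF2-style SavedModel exports; the paths and score threshold are illustrative):

```python
# Rough sketch of comparing detections from two exported checkpoints on one tile.
import numpy as np
import tensorflow as tf

def detect(saved_model_dir, image):
    """Run one exported detector on a single HxWx3 uint8 tile."""
    model = tf.saved_model.load(saved_model_dir)
    inputs = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)
    outputs = model(inputs)
    return outputs["detection_boxes"][0].numpy(), outputs["detection_scores"][0].numpy()

image = np.zeros((300, 300, 3), dtype=np.uint8)  # placeholder; use a real tile here
boxes_a, scores_a = detect("export_107k/saved_model", image)
boxes_b, scores_b = detect("export_108k/saved_model", image)
# Count confident detections from each checkpoint on the same tile
print((scores_a > 0.5).sum(), (scores_b > 0.5).sum())
```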
I am quite new to deep learning and cannot understand why this might happen. I would be very grateful for any tips on where to look; it might be something simple that I am overlooking.
Let me know if I should provide more information.