Data Science: Asked by Paul Jurczak on June 2, 2021
I have 3D meshes and textures for a dozen objects, which have to be detected in synthetic images. I also have 2D textures of the backgrounds these objects will appear in front of. Object detection will be performed on 2D images taken inside a 3D simulation (all data is synthetic).
To generate training data for a DNN, I would superimpose 2D images of my objects, rendered from the 3D meshes with varying viewpoint, scale, and lighting, onto all available background textures at different locations. I'm sure this obvious method has been used by others countless times before, and presumably there are tools to automate it. So far I have only found a few cases of abandonware. Can someone point me in the right direction and post a link to an easy-to-use tool that can do what I described above?
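For reference, here is a minimal sketch of the compositing step described above, using Python and Pillow. It assumes each object has already been rendered to an RGBA cut-out (e.g. from the 3D mesh in a renderer such as Blender) at the desired viewpoints; the directory names, file-naming scheme, and the YOLO-style label format are illustrative assumptions, not part of the original question.

```python
import random
from pathlib import Path
from PIL import Image, ImageEnhance

# Hypothetical layout: pre-rendered RGBA cut-outs of each object and the
# 2D background textures. Adjust to your own data.
OBJECT_DIR = Path("renders")          # e.g. object_<id>_<view>.png with alpha
BACKGROUND_DIR = Path("backgrounds")
OUTPUT_DIR = Path("synthetic")
OUTPUT_DIR.mkdir(exist_ok=True)


def composite(background_path, object_path, class_id, index):
    """Paste one object cut-out onto one background and write image + label."""
    bg = Image.open(background_path).convert("RGB")
    obj = Image.open(object_path).convert("RGBA")

    # Random scale to mimic varying distance / object size.
    scale = random.uniform(0.2, 0.6)
    w, h = int(obj.width * scale), int(obj.height * scale)
    obj = obj.resize((w, h))

    # Random placement fully inside the background.
    x = random.randint(0, bg.width - w)
    y = random.randint(0, bg.height - h)
    bg.paste(obj, (x, y), obj)  # the cut-out's alpha channel is the paste mask

    # Global brightness jitter as a crude stand-in for varying lighting.
    bg = ImageEnhance.Brightness(bg).enhance(random.uniform(0.7, 1.3))

    # Save the image and a YOLO-style label: class cx cy w h, normalised.
    out = OUTPUT_DIR / f"img_{index:05d}"
    bg.save(out.with_suffix(".jpg"))
    cx, cy = (x + w / 2) / bg.width, (y + h / 2) / bg.height
    out.with_suffix(".txt").write_text(
        f"{class_id} {cx:.6f} {cy:.6f} {w / bg.width:.6f} {h / bg.height:.6f}\n"
    )


# Example usage (hypothetical file names):
# composite("backgrounds/bg_01.png", "renders/object_03_view_12.png",
#           class_id=3, index=0)
```

This only covers the 2D cut-and-paste compositing; viewpoint and lighting variation of the objects themselves would come from the rendering step that produces the RGBA cut-outs.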