Data Science: Asked by Uberfatty on May 11, 2021
The context
I have a 3D array representing a grayscale 3D image and want to turn it into another 3D array of the same size. In the output array, the value of each voxel measures how likely the corresponding point in the input array is to be a sort of edge, so it is essentially an edge detector (like the Sobel filter), only very specific to this situation. Several hand-crafted formulas have been designed for this specific situation, but I would like to do it with a neural net instead. To train this net, I obtained some example 3D images together with the corresponding 3D outputs produced by those complicated filter methods.
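To make the task concrete, here is a minimal sketch of a same-size "edge strength" map for a 3D volume, using a plain gradient-magnitude measure (this is an illustrative stand-in, not the asker's specific filter; the function name is hypothetical):

```python
import numpy as np

def edge_strength_3d(volume):
    """Simple edge-strength map: gradient magnitude at every voxel.
    Output has the same shape as the input, like the filters described."""
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    return np.sqrt(gz**2 + gy**2 + gx**2)

# A toy volume with a flat boundary along the last axis
vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 1.0
edges = edge_strength_3d(vol)   # peaks near the step, zero elsewhere
```

The neural net would learn a mapping of the same shape-preserving form, just with the hand-crafted formula replaced by learned filters.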
My approach
I want to use transfer learning (since my dataset is not very big), and most pre-trained models are for 2D image classification. I therefore plan to split the 3D images into 2D slices and solve essentially the same problem in 2D. I know this throws away some information, but it might still be good enough. To discard less information, I could also take 3 adjacent slices of the input image and put them into the 3 RGB channels. As a model, I would take one of those pre-trained 2D image classification models, keep only some of the convolutional layers at the beginning of the network, and then add an extra layer that outputs an image of the correct size. I'm using Keras for this.
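The slices-to-RGB-channels preprocessing step can be sketched as follows; this is a hypothetical helper (the function name and shapes are assumptions), with each 3-channel stack intended as input to a truncated 2D backbone such as a Keras application model:

```python
import numpy as np

def volume_to_rgb_slices(volume):
    """Turn a (D, H, W) volume into (D-2, H, W, 3) stacks of three
    adjacent slices, so each stack can feed a pre-trained 2D RGB model.
    The target for each stack would be the filter output of the
    middle slice. The two outermost slices get no stack of their own."""
    d = volume.shape[0]
    return np.stack(
        [np.stack([volume[i - 1], volume[i], volume[i + 1]], axis=-1)
         for i in range(1, d - 1)]
    )

# Example: a (8, 64, 64) volume yields 6 RGB-like inputs of (64, 64, 3)
vol = np.random.rand(8, 64, 64)
stacks = volume_to_rgb_slices(vol)
```

One design consequence worth noting: the middle channel carries the slice being predicted, while the neighbors supply the through-plane context that a pure 2D approach would otherwise lose.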
My problems
I currently have two practical problems with this approach: