Data Science Asked on January 4, 2022
I am trying to build a CNN-based image recognition system for the TensorFlow malaria dataset. I loaded the dataset (~27k RGB images) using the standard tensorflow_datasets API.
After some data exploration, I found that not all the images are the same size. A print statement for a few instances is shown in the snippet below:
import tensorflow_datasets as tfds

ds_train, ds_info = tfds.load('malaria', split='train', as_supervised=True, with_info=True)
ds = ds_train.take(5)  # selecting 5 images
for image, label in tfds.as_numpy(ds):
    print(type(image), image.shape, type(label), label)
OUTPUT
<class 'numpy.ndarray'> (103, 103, 3) <class 'numpy.int64'> 1
<class 'numpy.ndarray'> (106, 121, 3) <class 'numpy.int64'> 1
<class 'numpy.ndarray'> (139, 142, 3) <class 'numpy.int64'> 0
<class 'numpy.ndarray'> (130, 118, 3) <class 'numpy.int64'> 1
The varying image sizes across the dataset break the initial CNN layer, since flattening each image tensor yields a different-sized array.
I understand that all the images need to be converted to a common shape before the modelling step, and that this can be achieved with padding or other preprocessing techniques from keras.preprocessing.image, but I am not sure how to implement it efficiently.
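For reference, padding to a common shape without distorting the cells can be sketched with tf.image.resize_with_pad (the 150x150 target size here is an arbitrary assumption, not a dataset requirement):

```python
import tensorflow as tf

# Hypothetical target size; any fixed value works, padding fills the rest.
TARGET_H, TARGET_W = 150, 150

image = tf.random.uniform([103, 103, 3])  # stand-in for one dataset image
padded = tf.image.resize_with_pad(image, TARGET_H, TARGET_W)
print(padded.shape)  # (150, 150, 3)
```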
I would be grateful if someone could suggest an elegant way to handle this.
Thank you in advance!
# Here `image` is a single [height, width, channels] image from the dataset.
# The malaria images already have an RGB channels axis, so no extra axes are
# needed: tf.image.resize accepts 3-D (or batched 4-D) tensors directly.
resized = tf.image.resize(image, [height, width])  # -> [height, width, channels]
resized.numpy()
Answered by SrJ on January 4, 2022