Data Science Asked by neel g on February 23, 2021
I am building an autoencoder with help from this site and trying to adapt it to my own custom data. My images are stored in a folder IMG and have names like 0.jpg, 1.jpg, 2.jpg, and so on.
I tried to write an iterator that loops over all my images, but when I collect all 125 of them into a single training_data array the model complains that it expected a single array yet was given a list of 125 arrays. Can anyone tell me how I should write the iterator? I also tried the Keras flow_from_directory function from the “Machine Learning Mastery” website, but it reports 0 images from 0 classes.
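For reference, this is roughly what I tried with flow_from_directory (the path is mine, and the sub-folder layout is my guess at what it wants, since I assume the “0 classes” message means it expects one sub-folder per class inside the directory I point it at):
from keras.preprocessing.image import ImageDataGenerator
# Assumed layout: .../Images/IMG/all/0.jpg, 1.jpg, ... (a single dummy "class" sub-folder)
datagen = ImageDataGenerator(rescale=1.0 / 255)
train_it = datagen.flow_from_directory(
    "/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG",
    target_size=(600, 400),  # (height, width)
    batch_size=16,
    class_mode='input')      # yields (x, x) batches, which is what an autoencoder trains on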
Here is my code:
import tensorflow as tf
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K
from keras.callbacks import TensorBoard
import numpy as np
from PIL import Image
i = int(0)
images_dir = "/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/{}.jpg".format(i)
training_data = []
while i < 125:
print("working on ", i, 'file')
image = Image.open(images_dir)
pic_array = np.asarray(images_dir)
training_data.append([pic_array])
i += 1
input_img = Input(shape=(600, 400, 3)) # adapt this if using `channels_first` image data format
x = Conv2D(48, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(24, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(24, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)
# at this point the representation is (75, 50, 24) for a (600, 400, 3) input
x = Conv2D(24, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(24, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(48, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
autoencoder.fit(training_data,
                epochs=50,
                batch_size=128,
                shuffle=True,
                callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
Also, I want the images to retain their colour, so I am using an input shape of (600, 400, 3) because RGB has 3 channels. Is that correct?
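A quick check I can do is to print the shape of one loaded image and compare it with the Input shape (with channels_last, Keras expects (height, width, channels)):
from PIL import Image
import numpy as np
# One of my images; the printed tuple should match the shape passed to Input()
sample = np.asarray(Image.open("/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/0.jpg"))
print(sample.shape)  # (height, width, channels); the last value should be 3 for RGB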
I would simply have used my iterator, but my understanding is that I need a separate function that feeds images to the model one by one, whereas I am just loading them all into a single variable. So can anyone help me with this?
Here is the full traceback:
Traceback (most recent call last):
File "autoencoder.py", line 46, in <module>
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
File "/home/awesome_ruler/.local/lib/python3.7/site-packages/keras/engine/training.py", line 1154, in fit
batch_size=batch_size)
File "/home/awesome_ruler/.local/lib/python3.7/site-packages/keras/engine/training.py", line 579, in _standardize_user_data
exception_prefix='input')
File "/home/awesome_ruler/.local/lib/python3.7/site-packages/keras/engine/training_utils.py", line 109, in standardize_input_data
str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 125 arrays: [array([['/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/0.jpg']],
dtype='<U76'), array([['/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/0.jpg']]...
Your images_dir actually seems to be the path to a single image... but nevertheless, I would simply create a single NumPy array with shape (num_images, height, width, channels) by doing the following:
import os
import numpy as np
from PIL import Image

# Root directory holding all images (I recommend removing the space in "Atom projects")
images_dir = "/home/awesome_ruler/Documents/Atom projects/Compression_enc/Images/IMG/"

# Number of images you want to load
N = 125

# Get all paths and take the first N
n_image_paths = sorted([f.path for f in os.scandir(images_dir)])[:N]

# Load the images using one of the variants of loading images
images = np.array([np.asarray(Image.open(f)) for f in n_image_paths])

# Be careful with variants: the order of channels is different for different methods!
# images = np.array([plt.imread(f) for f in n_image_paths])  # Matplotlib
# images = np.array([cv2.imread(f) for f in n_image_paths])  # OpenCV
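If you go with the OpenCV variant, note that cv2.imread returns BGR; a small sketch (reusing n_image_paths from above) to convert back to RGB:
import cv2
import numpy as np
# Convert each BGR image from cv2.imread to RGB so the channel order matches PIL/Matplotlib
images = np.array([cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2RGB) for f in n_image_paths])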
Now images can be passed directly to your model:
autoencoder.fit(x=images,
                epochs=50, ...)
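One more thing to check (this depends on how you load the images, so treat it as an assumption): PIL and OpenCV give uint8 pixels in 0-255, while your sigmoid output and binary_crossentropy loss expect values in [0, 1], so you would normalize first and pass the images as both input and target:
# Scale pixels to [0, 1]; an autoencoder uses its input as the training target
images = images.astype("float32") / 255.0
autoencoder.fit(x=images, y=images,
                epochs=50, batch_size=128, shuffle=True)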
Correct answer by n1k31t4 on February 23, 2021