Data Science Asked by Santiago Pardal on December 20, 2020
I’m trying to train a model that, compared with my experience on other datasets, is taking too long: about 9 s per step. I suspect the dataset is not being kept in RAM, but I’m not sure.
The code is the following:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def load_data():
    # Augment the training images; rescale pixel values to [0, 1]
    train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
                                       zoom_range=0.2, horizontal_flip=True)
    train_generator = train_datagen.flow_from_directory(
        path1, target_size=(200, 200), batch_size=32, class_mode="binary")

    # Only rescale the test images -- no augmentation
    test_datagen = ImageDataGenerator(rescale=1./255)
    test_generator = test_datagen.flow_from_directory(
        path2, target_size=(200, 200), batch_size=32, class_mode="binary")

    return train_generator, test_generator
Model:
Fit:
model.fit_generator(x, steps_per_epoch=37, epochs=50, validation_data=y, validation_steps=3, callbacks=[tensorboard])
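As a quick sanity check on those numbers (assuming steps_per_epoch was chosen to cover the whole training set once per epoch, which the question does not state explicitly): 37 steps at batch size 32 implies roughly 1,150–1,184 training images, and at ~9 s per step each epoch takes about 5.5 minutes.

```python
import math

batch_size = 32
steps_per_epoch = 37   # value passed to the fit call above
sec_per_step = 9       # observed step time from the question

# steps_per_epoch is usually ceil(num_samples / batch_size),
# so these bounds follow for the training-set size:
max_samples = steps_per_epoch * batch_size            # 1184
min_samples = (steps_per_epoch - 1) * batch_size + 1  # 1153
assert math.ceil(max_samples / batch_size) == steps_per_epoch

print(min_samples, max_samples)                 # 1153 1184
print(steps_per_epoch * sec_per_step / 60)      # minutes per epoch: 5.55
```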
If you could help me I’d appreciate it. Thank you very much!
Your images are probably too large for just three convolutions, so the number of parameters is huge at the flattening step (and the net cannot learn global features of the images, but that is a separate problem).
Try to:

- use more convolution and pooling operations,
- see if strided convolutions help,
- check the number of parameters using Keras's summary method.
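To see why the flatten step dominates, here is a back-of-the-envelope parameter count. The question does not show the model, so the layer shapes are assumptions: "same"-padded convolutions, 2x2 max pooling after each conv block, 64 filters in the last block, and a 64-unit Dense layer after flattening.

```python
def dense_params_after_flatten(input_size, n_blocks, last_filters, dense_units=64):
    """Parameter count of the first Dense layer after Flatten, assuming
    'same'-padded convs and one 2x2 max-pool per block (all assumed shapes)."""
    side = input_size
    for _ in range(n_blocks):
        side //= 2                         # each 2x2 pooling halves the side
    flat = side * side * last_filters      # size of the flattened feature vector
    return flat * dense_units + dense_units  # weights + biases

# 200x200 input, 64 filters in the last conv block:
print(dense_params_after_flatten(200, 3, 64))  # 3 blocks: 25*25*64 = 40000 features -> 2560064 params
print(dense_params_after_flatten(200, 5, 64))  # 5 blocks: 6*6*64 = 2304 features  -> 147520 params
```

Two extra conv/pool blocks cut the first Dense layer from ~2.6M parameters to ~150k, which is the effect the answer is pointing at; `model.summary()` will show the real numbers for your architecture.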
Answered by Michael M on December 20, 2020