Data Science: asked by keyan.r on October 5, 2021
I am using Keras to make a "set" identifier for the card game Set.
Here is my script (some code may be unnecessary but was used during experimentation):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout
from keras.callbacks import EarlyStopping
from keras.optimizers import SGD
from keras.utils import to_categorical
from keras.layers.advanced_activations import LeakyReLU
from keras.regularizers import l2
import os
import glob
import shutil
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.imagenet_utils import preprocess_input
earlystop = EarlyStopping(monitor='val_acc', min_delta=0.001, patience=5,
                          verbose=1, mode='auto')
callbacks_list = [earlystop]
# opt = SGD(lr=0.01)
setIdentifier = Sequential()
setIdentifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
setIdentifier.add(MaxPooling2D(pool_size = (2, 2)))
setIdentifier.add(Conv2D(64, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
# setIdentifier.add(Conv2D(64, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
setIdentifier.add(MaxPooling2D(pool_size = (2, 2)))
setIdentifier.add(Flatten())
setIdentifier.add(Dense(units=256, activation = 'relu'))
setIdentifier.add(Dropout(rate=0.5))
setIdentifier.add(Dense(units = 2, activation = "softmax"))
setIdentifier.compile(optimizer = "adam", loss='categorical_crossentropy', metrics = ['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory('Data_Training',
                                                 target_size=(64, 64),
                                                 batch_size=32,
                                                 class_mode='categorical')
test_set = test_datagen.flow_from_directory('Data_Testing',
                                            target_size=(64, 64),
                                            batch_size=32,
                                            class_mode='categorical')
setIdentifier.fit_generator(training_set,
                            steps_per_epoch=486,
                            epochs=50,
                            validation_data=test_set,
                            validation_steps=60,
                            callbacks=callbacks_list)
setIdentifier.save("setIdentifier.h5")
The directory Data_Training has a folder notSet and a folder Set, and the directory Data_Testing is laid out the same way. There are twice as many notSet images as Set images in each split.
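For context, fit_generator also accepts a class_weight argument that could offset this 2:1 imbalance. A minimal sketch of passing it (the 2.0/1.0 weights are simply the inverse of the ratio, and the index lookup below is illustrative, not part of my current script):

# Sketch: weight the minority class (Set) more heavily to offset the 2:1 imbalance.
# Look up the index mapping from the generator rather than hard-coding it.
set_idx = training_set.class_indices['Set']
not_set_idx = training_set.class_indices['notSet']
class_weights = {set_idx: 2.0, not_set_idx: 1.0}

setIdentifier.fit_generator(training_set,
                            steps_per_epoch=486,
                            epochs=50,
                            validation_data=test_set,
                            validation_steps=60,
                            class_weight=class_weights,
                            callbacks=callbacks_list)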
No matter what I do, in both the training and testing phases, the output is always notSet. I am not sure what the cause of this is.
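To make that concrete, here is a rough sketch of the kind of check that shows it, counting argmax predictions over the test generator (the exact snippet is illustrative, not taken from my script):

import numpy as np

# Predict on the whole test generator and count how often each class index is chosen.
preds = setIdentifier.predict_generator(test_set, steps=60)
pred_classes = np.argmax(preds, axis=1)
print(test_set.class_indices)                  # which index corresponds to Set / notSet
print(np.bincount(pred_classes, minlength=2))  # prediction counts per class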
I have tried a binary classifier with a sigmoid activation function and a final Dense layer with 1 node. I have also tried changing activation functions and using LeakyReLUs.
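Roughly, that binary variant swapped the output layer, loss, and class_mode along these lines (a sketch of the change, not my exact code):

# Sketch of the binary variant: single sigmoid output with binary cross-entropy.
setIdentifier.add(Dense(units=1, activation='sigmoid'))
setIdentifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# With one output unit, the generators use class_mode='binary' instead of 'categorical'.
training_set = train_datagen.flow_from_directory('Data_Training',
                                                 target_size=(64, 64),
                                                 batch_size=32,
                                                 class_mode='binary')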
I have double-checked my training and validation steps. I have $5184\ \text{Set} + 2 \cdot 5184\ \text{notSet} = 15552$ training images, which divided by a batch size of $32$ gives $486$ steps per epoch. For validation, $(648\ \text{Set} + 2 \cdot 648\ \text{notSet}) / 32 = 60.75$, which I round down to $60$ validation steps.
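The same numbers can also be read straight off the generators, which in Keras 2 expose samples and batch_size attributes:

# 15552 / 32 = 486 and 1944 / 32 = 60.75, floored to 60, matching the values above.
steps_per_epoch = training_set.samples // training_set.batch_size
validation_steps = test_set.samples // test_set.batch_size
print(steps_per_epoch, validation_steps)  # expected: 486 60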
I am confused by what is going on with the neural network: why isn't anything improving, and why is it labelling everything as one class?
For more info, here is the GitHub repo, which includes the training images.
Any help is appreciated!