How to handle overfitting in the following classification case

Data Science Asked by Kumar Shivendu on April 3, 2021


The confusion matrix is as follows:

[[  0   0   5   1   0   0]
 [  0   0  19  14   0   0]
 [  0   0 217 151   0   0]
 [  0   0  84 282   0   0]
 [  0   0   6 111   0   0]
 [  0   0   0  10   0   0]]

import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout

np.random.seed(0)

# Feed-forward classifier: 11 input features, 6 output classes
classifier = Sequential()
classifier.add(Dense(300, input_dim=11))
classifier.add(Dropout(0.5))
classifier.add(keras.layers.LeakyReLU(alpha=0.86))
classifier.add(Dense(250, activation='tanh', kernel_regularizer=keras.regularizers.l2(0.12)))
classifier.add(Dropout(0.5))
classifier.add(Dense(100, activation='relu'))
classifier.add(Dropout(0.4))
classifier.add(Dense(6, activation='softmax'))

classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Thanks for the help

One Answer

I recommend including some more information in your post - you'll be able to get more help that way. What is the size of your dataset? How many epochs are you training for? What is the accuracy and loss value after training?

Nonetheless, I recommend taking a look at Keras's EarlyStopping callback.

It will stop training when your model stops improving according to a given metric; I've found this useful with validation loss as the metric. You should also monitor your model's loss and accuracy during each epoch and stop training when they stop improving. TensorBoard, available through another callback, can help with this. A minimal sketch of early stopping is shown below.
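As a rough sketch, assuming the classifier from the question and placeholder data variables (X_train, y_train, X_val, y_val) and placeholder epoch/patience values:

import keras

# Stop once validation loss has not improved for 10 consecutive epochs,
# then roll back to the weights from the best epoch seen so far
early_stop = keras.callbacks.EarlyStopping(
    monitor='val_loss',
    patience=10,
    restore_best_weights=True)

history = classifier.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=200,
    batch_size=32,
    callbacks=[early_stop])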

You might also want to try reducing the number of nodes per layer: a model with more parameters has a greater capacity for memorization, which reduces its ability to generalize. 300 nodes is a lot for the first layer of a model whose input has just 11 features, though the right size ultimately depends on the data.
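For instance, a slimmer variant of the model from the question might look like the sketch below (the layer widths of 64 and 32 are illustrative starting points, not tuned values):

from keras.models import Sequential
from keras.layers import Dense, Dropout

# Same 11-feature input and 6-class softmax output, but far fewer parameters
classifier = Sequential()
classifier.add(Dense(64, input_dim=11, activation='relu'))
classifier.add(Dropout(0.3))
classifier.add(Dense(32, activation='relu'))
classifier.add(Dropout(0.3))
classifier.add(Dense(6, activation='softmax'))
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])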

Answered by ggaugler on April 3, 2021
