Data Science Asked by SmallChess on March 24, 2021
I’m training a standard CNN; my training curve is attached. My model:
Model: "functional_35"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_18 (InputLayer) [(None, 120, 120, 3)] 0
_________________________________________________________________
conv2d_28 (Conv2D) (None, 118, 118, 32) 896
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 59, 59, 32) 0
_________________________________________________________________
dropout_23 (Dropout) (None, 59, 59, 32) 0
_________________________________________________________________
flatten_17 (Flatten) (None, 111392) 0
_________________________________________________________________
dense_17 (Dense) (None, 256) 28516608
_________________________________________________________________
visualized_layer (Dense) (None, 1) 257
=================================================================
Total params: 28,517,761
Trainable params: 28,517,761
Non-trainable params: 0
_________________________________________________________________
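In code, the model is built roughly like this; this is a reconstruction from the summary, and the dropout rate, hidden activations, and sigmoid output are assumptions (only the layer types and shapes come from the summary):

import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(120, 120, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)  # -> (118, 118, 32), 896 params
x = layers.MaxPooling2D()(x)                         # -> (59, 59, 32)
x = layers.Dropout(0.5)(x)                           # rate 0.5 is an assumption
x = layers.Flatten()(x)                              # -> 111392 features
x = layers.Dense(256, activation="relu")(x)          # 28,516,608 params
outputs = layers.Dense(1, activation="sigmoid", name="visualized_layer")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()  # reproduces the parameter counts shown above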
My data dimensions: 1425 samples of shape (120, 120, 3).
Is my model overfitting? If so, what’s my best strategy now?
Your train/validation loss curves are a classic example of overfitting.
It looks like you have 1425 data samples to train a model with > 28 million parameters.
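Almost all of those parameters sit in the single Flatten → Dense step: 111,392 flattened features × 256 units + 256 biases = 28,516,608 weights, which works out to roughly 20,000 parameters per training sample.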
I would suggest trying any or all of the following:

- Reduce the model’s capacity. The Flatten → Dense block accounts for nearly all of the 28M parameters; replacing Flatten with global average pooling, or adding more conv/pool stages before it, shrinks the model dramatically (see the sketch after this list).
- Regularize more aggressively, e.g. a higher dropout rate or L2 weight penalties on the dense layers.
- Use early stopping on the validation loss instead of training for a fixed number of epochs.
- Get more data, or augment the data you have.
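Here is a minimal sketch of such a slimmer model, assuming the same 120 × 120 × 3 inputs and binary labels; the filter counts, L2 strength, and dropout rate are illustrative choices, not values from the question:

import tensorflow as tf
from tensorflow.keras import layers, regularizers

inputs = tf.keras.Input(shape=(120, 120, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4))(x)
x = layers.GlobalAveragePooling2D()(x)  # (None, 64) instead of (None, 111392)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

small_model = tf.keras.Model(inputs, outputs)
small_model.compile(optimizer="adam", loss="binary_crossentropy",
                    metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving
# and restores the weights from the best epoch.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

Swapping Flatten for GlobalAveragePooling2D alone removes the 28M-parameter dense block, leaving a model with roughly 20,000 weights.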
Since you are working with image data (your input shape is 120 × 120 × 3), take a look at the Keras ImageDataGenerator, which can randomly flip, rotate, and shift your training images to effectively enlarge the dataset.
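For example, a minimal augmentation setup; the transform ranges are arbitrary, and x_train/y_train are placeholders standing in for your real arrays:

import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder data matching the question's shapes: 1425 RGB images, binary labels.
x_train = np.random.rand(1425, 120, 120, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(1425, 1))

datagen = ImageDataGenerator(
    rotation_range=20,       # random rotations up to 20 degrees
    horizontal_flip=True,    # random left-right flips
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
)

# flow() yields an endless stream of randomly augmented batches;
# pass it to model.fit in place of the raw arrays, e.g.
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=50)
batch_x, batch_y = next(datagen.flow(x_train, y_train, batch_size=32))
print(batch_x.shape, batch_y.shape)  # (32, 120, 120, 3) (32, 1)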
Correct answer by n1k31t4 on March 24, 2021