Data Science: Asked by Makintosz on February 17, 2021
I am using a feedforward neural network for regression, and what I get as a prediction is a constant value, visible on the graph below:
The data I use are typical standardised tabular numbers. The architecture is as follows:
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import optimizers
from keras.callbacks import ReduceLROnPlateau, TensorBoard

model = Sequential()
model.add(Dense(units=512, activation='relu', input_shape=(x_train.shape[1],)))  # input_shape must be a tuple
model.add(Dropout(0.2))
model.add(Dense(units=512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(units=128, activation='relu'))
model.add(Dense(units=128, activation='relu'))
model.add(Dense(units=1))
adam = optimizers.Adam(lr=0.1)  # lr is called learning_rate in newer Keras versions
model.compile(loss='mean_squared_error', optimizer=adam)
reduce_lr = ReduceLROnPlateau(
    monitor='val_loss',
    factor=0.9,
    patience=10,
    min_lr=0.0001,
    verbose=1)
tensorboard = TensorBoard(log_dir="logs{}".format(NAME))  # NAME is a run label defined elsewhere in the script
history = model.fit(
    x_train,
    y_train,
    epochs=500,
    verbose=2,  # one line per epoch (valid values: 0, 1, 2)
    batch_size=128,
    callbacks=[reduce_lr, tensorboard],
    validation_split=0.1)
It seems to me that all the weights have been zeroed out and only a constant bias remains, since I get the same value for different samples from the test set.
I understand that the algorithm has found the constant with the smallest MSE (for MSE that constant is simply the mean of the training targets), but is there a way to avoid this situation? A straight line is not really a good solution for my project.
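One quick way to check this suspicion is to compare the constant prediction with the mean of y_train and to count ReLU units that are inactive on every sample. A minimal diagnostic sketch, assuming the model from above and a held-out x_test (this code is not from the original post):

import numpy as np
from keras.models import Model

# If the network has collapsed, predictions are (nearly) constant and
# close to the constant that minimises MSE: the mean of y_train.
preds = model.predict(x_test)
print("std of predictions:", preds.std())      # ~0 means constant output
print("mean of y_train:  ", np.mean(y_train))  # compare with the constant

# A ReLU unit that outputs 0 for every sample is "dead" and receives no
# gradient; count such units in the first hidden layer.
first_hidden = Model(inputs=model.input, outputs=model.layers[0].output)
acts = first_hidden.predict(x_test)
print("fraction of dead units:", (acts == 0).all(axis=0).mean())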
[EDIT]
Adding learning curves for the training and validation sets.
Lowering the initial learning rate from 0.1 to 0.005 fixed the problem. A plausible explanation is that with a learning rate as high as 0.1, the first updates are large enough to push the ReLU units into the region where they output zero for every input; from then on only the output bias keeps learning, so the best the network can do is a constant.
adam = optimizers.Adam(lr=0.005)
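If a single lower value had not worked, a small sweep over candidate learning rates is a cheap way to find a stable setting. A minimal sketch with illustrative candidate values and a hypothetical build_model() helper that rebuilds the architecture from the question (none of this is from the original post):

from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

def build_model():
    # Rebuild the network from scratch for each trial (layers abbreviated;
    # use the full stack from the question here).
    m = Sequential()
    m.add(Dense(units=512, activation='relu', input_shape=(x_train.shape[1],)))
    m.add(Dense(units=1))
    return m

for lr in [0.1, 0.01, 0.005, 0.001]:  # illustrative candidates
    m = build_model()
    m.compile(loss='mean_squared_error', optimizer=optimizers.Adam(lr=lr))
    h = m.fit(x_train, y_train, epochs=50, batch_size=128,
              validation_split=0.1, verbose=0)
    print(lr, "final val_loss:", h.history['val_loss'][-1])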
Answered by Makintosz on February 17, 2021