Data Science Asked on March 5, 2021
I have trained a deep learning model for regression, but its performance is poor. I am quite new to deep learning. How can I improve it? The target variable Y is obtained by multiplying the features X1 and X2.
DataSet (5800 rows)

X1       | X2        | Y
---------|-----------|----------
1.000000 | 70.000000 | 70.000000
0.714286 | 29.045455 | 20.746753
0.000000 | 35.000000 | 0.000000
0.538462 | 22.071429 | 11.884615
0.000000 | 54.000000 | 0.000000
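For reference, a dataset with this structure can be generated synthetically; the column ranges below are assumptions read off the sample rows, not part of the original data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5800

# Assumed feature ranges, inferred from the sample rows above
X1 = rng.uniform(0.0, 1.0, n)
X2 = rng.uniform(0.0, 70.0, n)

X = np.column_stack([X1, X2])
y = X1 * X2  # the target is the product of the two features
```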
Model

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import KFold, cross_val_score

# Define a larger model
def larger_model():
    # Create model
    model = Sequential()
    model.add(Dense(2, input_dim=2, kernel_initializer='normal', activation='relu'))
    model.add(Dense(6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

# Evaluate model with 10-fold cross-validation
estimator = KerasRegressor(build_fn=larger_model, epochs=10, batch_size=5)
kfold = KFold(n_splits=10)
results = cross_val_score(estimator, X, y, cv=kfold)
print("Results: %.5f (%.5f) MSE" % (results.mean(), results.std()))
Output
Results: -83.81452 (170.38108) MSE
First, have a look at your data scale. Deep learning models are sensitive to feature scaling, so it is better to preprocess your data to keep it within acceptable ranges, e.g. standardising each feature to zero mean and unit variance.
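A minimal sketch of such preprocessing, using scikit-learn's StandardScaler on the first few sample rows (any scaler would do; this just illustrates the idea):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# A few rows from the sample data above
X = np.array([[1.000000, 70.000000],
              [0.714286, 29.045455],
              [0.000000, 35.000000]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # each column now has zero mean, unit variance
```

Note that the scaler should be fitted on the training folds only and then applied to the validation fold, e.g. by putting it in a scikit-learn Pipeline ahead of the KerasRegressor.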
Second, deep learning models tend to overfit unless regularised. One thing you can try here is to progressively increase the size of your model (i.e. more hidden layers or hidden units) while also adding regularisation.
Your options are:

- Weight decay: Keras Dense layers accept a kernel_regularizer argument (e.g. an L2 penalty). Have a look at the documentation.
- Layer regularisation (batch normalisation, dropout, layer normalisation): these are a bit more advanced and work on a case-by-case basis. Have a look at this paper as a start.
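Both options above can be sketched in a single model. This is an illustrative example, assuming the tensorflow.keras API; the layer sizes, L2 strength, and dropout rate are arbitrary starting points, not tuned values:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2

def regularized_model():
    model = Sequential()
    # Wider hidden layers than the original, with L2 weight decay on the kernels
    model.add(Dense(16, input_dim=2, activation='relu',
                    kernel_regularizer=l2(1e-4)))
    model.add(Dropout(0.2))  # randomly zero 20% of activations during training
    model.add(Dense(8, activation='relu', kernel_regularizer=l2(1e-4)))
    model.add(Dense(1))  # linear output for regression
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
```

This drop-in replacement for larger_model() can be passed to KerasRegressor unchanged, so the same cross-validation loop applies.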
Answered by RonsenbergVI on March 5, 2021