Data Science Asked on December 2, 2020
I have a custom loss function. To experiment with how the loss is calculated during validation, I updated the loss function as follows:
def custom_loss(y_true, y_pred):
    return 0 * tf.reduce_mean(input_tensor=-tf.reduce_sum(input_tensor=y_true * tf.math.log(y_pred), axis=-1)) + 1
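In other words, the categorical cross-entropy term is multiplied by zero, so the function should return the constant 1 for any inputs. A plain-Python sketch of the same arithmetic (the `y_true`/`y_pred` values are hypothetical, just to show the constant result):

```python
import math

def custom_loss(y_true, y_pred):
    # Mean over samples of -(sum of y_true * log(y_pred)), then scaled by 0.
    ce = sum(-sum(t * math.log(p) for t, p in zip(ts, ps))
             for ts, ps in zip(y_true, y_pred)) / len(y_true)
    return 0 * ce + 1

print(custom_loss([[0, 1]], [[0.3, 0.7]]))  # prints 1.0
```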
Now when I run model.evaluate on my data, TensorFlow reports a loss value of 1.0567.
To experiment further, I changed the constant returned by the loss function from 1 to other values. Below you can see the table:
I was expecting the final calculated loss to be the same as the constant value I passed, but that’s not the case. What is the reason for the constant difference of 0.0567?
I have verified this across three versions of TensorFlow: 1.14.0, 1.15.0, and 2.1.0.
On further investigation, I was able to narrow it down.
The reported loss is not just the value returned by the loss function; it also includes the regularization terms added in the intermediate layers.
And one of the FC layers in my model did indeed have an L2 regularization term:
model.add(layers.Dense(1024, activation='relu', kernel_regularizer=regularizers.l2(0.1)))
If I remove the L2 regularization, the loss indeed goes to zero when the loss function returns zero.
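The arithmetic behind this can be sketched without TensorFlow: the reported loss is the value returned by the custom loss function plus every regularization penalty registered on the model's layers (Keras-style L2 is factor × sum of squared weights). The weight values below are hypothetical; only the arithmetic matters.

```python
def l2_penalty(weights, factor=0.1):
    # Keras-style L2 penalty: factor * sum of squared weights.
    return factor * sum(w * w for w in weights)

custom_loss_value = 1.0          # constant returned by custom_loss
weights = [0.5, 0.4, 0.35, 0.2]  # hypothetical kernel weights

reported = custom_loss_value + l2_penalty(weights)  # reported ≈ 1.05725
```

Changing the constant shifts `reported` by the same amount, while the penalty term stays fixed, which is exactly the constant offset (0.0567 in the question) observed across runs.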
Answered by Vishal on December 2, 2020