
How could a considerable increase in loss lead to an improvement in accuracy?

Asked on Data Science, August 16, 2021

I'm experimenting with NLP and, at the moment, I'm trying to build a translation model that converts English sentences into their French counterparts. I'm using this dataset (not that it's particularly relevant):

https://github.com/udacity/deep-learning/raw/master/language-translation/data

which is composed of more than 137K sentences. My model is an encoder-decoder LSTM with attention, implemented in Keras. Here are my plotted validation loss and accuracy charts:

[Chart: validation loss during training]

[Chart: validation accuracy during training]

The two accuracy metrics are custom ones that I developed myself, but they are based on Keras's categorical_accuracy.
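For reference, here is a simplified sketch of the kind of per-token metric I mean (not my exact code; it assumes sparse integer targets of shape (batch, sequence_length) and softmax outputs from the decoder):

import tensorflow as tf

def token_accuracy(y_true, y_pred):
    # Take the most likely token at each position and compare it with the target ids
    y_pred_ids = tf.argmax(y_pred, axis=-1)
    y_true = tf.cast(y_true, y_pred_ids.dtype)
    return tf.reduce_mean(tf.cast(y_pred_ids == y_true, tf.float32))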

Now, my question is: why am I getting an improvement in accuracy while the loss value is getting worse?

Also, is such a model trustworthy?

One Answer

Check whether padded values are being counted when computing accuracy; they shouldn't be. You should create a mask for the padded positions and apply it when computing accuracy. For instance, if y_true is 0 in a padded region and y_pred is 0 there as well, then y_true == y_pred counts as a correct prediction and inflates the overall accuracy. This is wrong; what you should do instead is define a custom accuracy metric along these lines:

import tensorflow as tf

# y_true and y_pred are assumed to be integer token id tensors of the same shape
correct = tf.cast(y_pred == y_true, tf.float32)  # 1.0 where the prediction matches the target
mask = tf.cast(y_true != 0, tf.float32)          # 0.0 at padded positions (padding id assumed to be 0)
accuracy = tf.reduce_sum(correct * mask) / tf.reduce_sum(mask)

Hopefully this gives you a correct measure.
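For completeness, here is one way the above could be wrapped as a metric and passed to model.compile (a sketch assuming sparse integer targets of shape (batch, sequence_length), softmax outputs, and 0 as the padding id; the name masked_accuracy is just illustrative):

import tensorflow as tf

def masked_accuracy(y_true, y_pred):
    # Convert softmax probabilities to predicted token ids
    y_pred_ids = tf.argmax(y_pred, axis=-1)
    y_true = tf.cast(y_true, y_pred_ids.dtype)
    correct = tf.cast(y_pred_ids == y_true, tf.float32)
    mask = tf.cast(y_true != 0, tf.float32)  # 0.0 at padded positions
    return tf.reduce_sum(correct * mask) / tf.reduce_sum(mask)

# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=[masked_accuracy])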

Answered by VM_AI on August 16, 2021
