Data Science Asked on September 5, 2021
According to the API doc, this metric
“Computes how often targets are in the top K predictions.”
But how come the following code produces the result 1?
Since 0.95 > 0.9 > 0.8 > 0.1 > 0.05, both 0.95 and 0.8 put the true class among the top predictions, so shouldn't the result be 2?
import tensorflow as tf

m = tf.keras.metrics.TopKCategoricalAccuracy()
m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
print('Final result: ', m.result().numpy())  # Final result: 1.0
tf.keras.metrics.TopKCategoricalAccuracy() returns the fraction of samples whose true class is among the top k predictions, so its value is always between 0 and 1: it is an average over samples, not a count of correct ones.
The default value of the argument k is 5. With only 3 classes here, every class is necessarily within the top 5 predictions, so both samples count as hits and the result is 2/2 = 1.0. To see the metric distinguish the samples, pass a smaller k explicitly, e.g. TopKCategoricalAccuracy(k=1).
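To make the averaging explicit, below is a minimal NumPy sketch of what the metric computes: check top-k membership per sample, then take the mean. The function name top_k_accuracy is my own for illustration, not a Keras API.

import numpy as np

def top_k_accuracy(y_true, y_pred, k):
    # column indices of the k largest scores per sample
    # (slicing with -k: simply keeps all classes when k exceeds the class count)
    top_k = np.argsort(y_pred, axis=1)[:, -k:]
    # index of the true class per sample (one-hot labels)
    true_idx = np.argmax(y_true, axis=1)
    # 1 if the true class is among the top k, else 0; then average
    hits = [t in row for t, row in zip(true_idx, top_k)]
    return float(np.mean(hits))

y_true = np.array([[0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]])

print(top_k_accuracy(y_true, y_pred, k=5))  # 1.0 -- both hits, like Keras' default
print(top_k_accuracy(y_true, y_pred, k=2))  # 1.0 -- 0.8 is still in the top 2
print(top_k_accuracy(y_true, y_pred, k=1))  # 0.5 -- 0.8 is only second-highest in sample 1

With k=1 only the second sample is a hit (0.95 is the maximum there), so the mean drops to 0.5, which is exactly the count-then-average behaviour the questioner expected a "2" from.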
Answered by Gaurav63 on September 5, 2021