How is BCELoss computed in PyTorch? [different result compared to the mathematical formula]

Data Science: asked by CapJS on May 18, 2021

I am trying to understand how Binary Cross Entropy (BCE) is computed in PyTorch.
I’ve tried the same code as in the PyTorch documentation here, but I get a different result compared to a manual implementation of the mathematical formula.

Code (with small changes compared to the documentation example):

import torch
import torch.nn as nn

m = nn.Sigmoid()
loss = nn.BCELoss()

input = torch.randn(1, requires_grad=True)   # random logit
print(input)
print(m(input))                              # probability after the sigmoid
target = torch.empty(1).random_(2)           # random 0/1 target
print(target)

output = loss(m(input), target)
print(output)
output.backward()
print(output)                                # unchanged by backward()

Output for the code above:

[Screenshot of the printed values: m(input) ≈ 0.4770, target = 1, loss ≈ 0.7403]

The mathematical formula for BCELoss is:

output = -[target*log(our_value) + (1-target)*log(1-our_value)]

From the mathematical formula above I should get output = 0.3215 for our_value = 0.4770 and target = 1, but PyTorch reports output = 0.7403.
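
For comparison, here is a minimal sketch that evaluates the formula by hand next to nn.BCELoss for the values quoted above, assuming log denotes the natural logarithm (which is what torch.log computes):

import torch
import torch.nn as nn

our_value = torch.tensor([0.4770])  # the sigmoid output from the run above
target = torch.tensor([1.0])        # the target from the run above

# The BCE formula evaluated by hand; torch.log is the natural logarithm
manual = -(target * torch.log(our_value)
           + (1 - target) * torch.log(1 - our_value))
builtin = nn.BCELoss()(our_value, target)

print(manual, builtin)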

I’ve also found the C code of this function here, but the formula is essentially the same (only a very small epsilon is added, which makes no difference in the output), so it’s still unclear to me.
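
As a rough, self-contained sketch of that epsilon variant (the eps value here is hypothetical, not taken from the C source):

import torch

our_value = torch.tensor([0.4770])
target = torch.tensor([1.0])
eps = 1e-12  # hypothetical small constant for numerical stability

# Epsilon-stabilized BCE: eps only matters when our_value is exactly 0 or 1
manual_eps = -(target * torch.log(our_value + eps)
               + (1 - target) * torch.log(1 - our_value + eps))
print(manual_eps)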

How does this result come about? Any hints are welcome.
