Data Science Asked on September 29, 2021
In the paper "A NOVEL FOCAL TVERSKY LOSS FUNCTION WITH IMPROVED ATTENTION U-NET FOR LESION SEGMENTATION", the authors use deep supervision by outputting multiple output masks at different scales.
I do not understand how this can work with regard to the loss function: y_pred and y_true don't share the same dimensions except for the final output.
model = Model(inputs=[img_input], outputs=[out6, out7, out8, out9])
The ground-truth input seems to exist only at the full resolution.
I checked the code (https://github.com/nabsabraham/focal-tversky-unet/blob/master/newmodels.py) and I haven't seen anything special that would make it work. The loss function doesn't explicitly handle it either.
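For context on the dimension mismatch: with deep supervision, each auxiliary output is usually paired with a ground-truth mask resized to that output's resolution before training. The sketch below is a minimal, hypothetical illustration of that idea using nearest-neighbour strided subsampling; the repository's actual training script may resize its masks differently.

```python
import numpy as np

def downsample_mask(mask, factor):
    """Nearest-neighbour downsampling of a 2D binary mask by an integer factor."""
    return mask[::factor, ::factor]

# A full-resolution binary mask with a square "lesion" in the middle
full = np.zeros((256, 256), dtype=np.uint8)
full[64:192, 64:192] = 1

# Ground-truth targets matched to three hypothetical output scales
targets = [downsample_mask(full, f) for f in (1, 2, 4)]
print([t.shape for t in targets])  # [(256, 256), (128, 128), (64, 64)]
```

Each downsampled target then has the same spatial dimensions as the corresponding intermediate output, so the per-output loss can be computed normally.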
As you can see in lines 286-296 of newmodels.py, the model can use two different loss functions for the four different outputs:
loss = {'pred1': lossfxn,
        'pred2': lossfxn,
        'pred3': lossfxn,
        'final': losses.tversky_loss}
loss_weights = {'pred1': 1,
                'pred2': 1,
                'pred3': 1,
                'final': 1}
model.compile(optimizer=opt, loss=loss, loss_weights=loss_weights,
              metrics=[losses.dsc])
The first three outputs of the model (out6, out7, and out8) use the loss function lossfxn, which is given as the third argument to the attn_reg function. In isic_train.py, this happens to be the Focal Tversky Loss. For the final output of the U-Net (out9), the Tversky Loss is always used. The total loss for the model is then the weighted sum of the individual losses for the four model outputs, which is simply equal to their sum given that all weights are set to 1 in loss_weights.
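The weighted-sum behaviour described above can be sketched in plain numpy. Everything here is illustrative: soft_dice_loss is a simple stand-in for the repository's per-output losses, and the shapes and names are assumptions, not taken from the code.

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, eps=1e-7):
    """A simple stand-in per-output loss (1 - soft Dice coefficient)."""
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

rng = np.random.default_rng(0)
# Four outputs at increasing (hypothetical) resolutions, as in deep supervision
scales = (32, 64, 128, 256)
y_true = [rng.integers(0, 2, (s, s)).astype(float) for s in scales]
y_pred = [rng.random((s, s)) for s in scales]

loss_weights = {'pred1': 1, 'pred2': 1, 'pred3': 1, 'final': 1}
per_output = {name: soft_dice_loss(t, p)
              for name, t, p in zip(loss_weights, y_true, y_pred)}

# Keras combines multi-output losses as a weighted sum; with all weights
# equal to 1 this is simply the sum of the four individual losses.
total = sum(loss_weights[name] * per_output[name] for name in loss_weights)
```

This is exactly what model.compile does with the loss and loss_weights dicts: each named output gets its own loss, and the optimizer minimizes their weighted sum.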
Correct answer by Oxbowerce on September 29, 2021