Data Science Asked on February 1, 2021
I would like to adapt my dice loss function for multi-class averaging. y_true and y_pred are one-hot 3D images (label masks). For example, if the flattened tensors contain the X classes sequentially, I would like to split them into X sub-tensors of equal length, compute the loss on each, and average the results. Note that it needs to work on batches.
Here is my code:
from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1e-10):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    # Here I would like to split y_true_f and y_pred_f into X sub-tensors
    # of equal length. The code below would then be applied to each
    # sub-tensor and averaged to produce the mean class-wise loss.
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
How could I implement that?
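One possible sketch: instead of flattening everything into a single vector and slicing it into X pieces, reshape so the class axis is kept separate, then reduce over the batch and spatial axes per class and average the per-class dice scores. This assumes a channels-last one-hot layout of shape (batch, ..., n_classes); the names `dice_coef_multiclass` and `dice_coef_multiclass_loss` are mine, not from the original code.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_coef_multiclass(y_true, y_pred, smooth=1e-10):
    # Assumption: y_true and y_pred are one-hot with the class axis last,
    # i.e. shape (batch, ..., n_classes).
    n_classes = K.int_shape(y_pred)[-1]
    # Collapse batch and spatial dims; each column is now one class.
    y_true_f = K.reshape(y_true, (-1, n_classes))
    y_pred_f = K.reshape(y_pred, (-1, n_classes))
    # Per-class intersection and sums (axis=0 reduces over batch + voxels).
    intersection = K.sum(y_true_f * y_pred_f, axis=0)
    denom = K.sum(y_true_f, axis=0) + K.sum(y_pred_f, axis=0)
    per_class_dice = (2. * intersection + smooth) / (denom + smooth)
    # Average the X per-class dice coefficients.
    return K.mean(per_class_dice)

def dice_coef_multiclass_loss(y_true, y_pred):
    return 1 - dice_coef_multiclass(y_true, y_pred)
```

Reshaping this way avoids assuming the classes land in contiguous blocks after `K.flatten`, which is only true when the class axis comes first; the per-class reduction via `axis=0` is otherwise equivalent to splitting and looping.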