Asked by pairon on January 21, 2021
I know it is better to avoid loops in a Keras custom loss function, but I think I have to use one.
The problem is the following: I'm trying to implement a loss function that computes a loss value for multiple bunches of data and then aggregates these values into a single value.
For example, I have 6 data entries, so in my Keras loss I'll have 6 y_true and 6 y_pred values.
I want to compute 2 loss values: one for the first 3 elements and one for the last 3 elements.
Example of hypothetical code:
def custom_loss(y_true, y_pred):
    start_range = 0
    losses = []
    for index in range(0, 2):
        end_range = start_range + 3
        y_true_bunch = y_true[start_range:end_range]
        y_pred_bunch = y_pred[start_range:end_range]
        loss_value = ...some processing on bunches...
        losses.append(loss_value)
        start_range = end_range
    final_loss = ...aggregate loss values...
    return final_loss
Is it possible to achieve something like this? I need to process the whole dataset, calculate a loss for each bunch, and then aggregate the bunch values into a single value.
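For a static batch, yes: a Python loop over a fixed range is unrolled when TensorFlow traces the function, so it compiles down to plain tensor slicing. A minimal sketch, assuming a fixed batch of 6 split into two bunches of 3, with mean squared error standing in for the elided per-bunch processing:

import tensorflow as tf

# Minimal sketch, assuming a static batch of 6 and a fixed bunch size of 3.
# The Python loop is unrolled at graph-build time, so no graph-level loop
# op is created. MSE stands in for "...some processing on bunches...".
def sliced_custom_loss(y_true, y_pred):
    bunch_size = 3
    losses = []
    for start in range(0, 6, bunch_size):
        y_true_bunch = y_true[start:start + bunch_size]
        y_pred_bunch = y_pred[start:start + bunch_size]
        losses.append(tf.reduce_mean(tf.square(y_true_bunch - y_pred_bunch)))
    return tf.reduce_mean(tf.stack(losses))  # aggregate the bunch losses

This breaks down as soon as the batch size or bunch boundaries are only known at run time, which is where the partition-based approach below comes in.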
Assuming 2 partitions (since for index in range(0, 2)) and a fixed bunch size of 3 (since start_range + 3), the shape of your y_true should be (nb_samples, 2), where the first column of the 2nd dimension is the ground_truth and the other is the partition_index.
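For illustration, such a two-column target could be assembled as follows (a minimal sketch; the names ground_truth and partition_index are hypothetical):

import numpy as np

# Hypothetical construction of the (nb_samples, 2) target array:
# column 0 holds the ground truth, column 1 the partition index.
ground_truth = np.array([1.5, 1.2, 1.3, 1.6, 3.0, 2.25], dtype=np.float32)
partition_index = np.array([0, 0, 0, 1, 1, 1], dtype=np.float32)
y_true = np.column_stack([ground_truth, partition_index])  # shape (6, 2)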
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.losses import mean_squared_error

# Targets carry two columns: the ground truth and the partition index (0 or 1).
y_true = tf.Variable(np.array([[1.5, 0], [1.2, 0], [1.3, 0], [1.6, 1], [3.0, 1], [2.25, 1]]), dtype=tf.float32)
y_pred = tf.Variable(np.array([[1.35], [1.24], [1.69], [1.55], [1.24], [1.69]]), dtype=tf.float32)

def get_two_splits(y_true):
    # Split an (n, 2) tensor into two (n, 1) column tensors.
    child_y = y_true[:, 1]
    child_y = tf.expand_dims(child_y, 1)
    y_true = y_true[:, 0]
    y_true = tf.expand_dims(y_true, 1)
    return y_true, child_y

def some_processing_on_bunches(y_all):
    # y_all holds the [ground_truth, prediction] columns for one partition.
    y_true, y_pred = get_two_splits(y_all)
    return mean_squared_error(y_true, y_pred)

def custom_loss(y_true, y_pred):
    n = K.shape(y_true)[0]
    # Separate the ground truth from the partition indices.
    y_true, partitions = get_two_splits(y_true)
    partitions = tf.reshape(partitions, [n, ])
    partitions = tf.cast(partitions, tf.int32)
    # Group the (ground truth, prediction) pairs by partition index.
    y_elements = tf.dynamic_partition(tf.concat([y_true, y_pred], 1), partitions, 2)
    # Apply the per-bunch processing to each partition in a vectorized way.
    losses = tf.vectorized_map(some_processing_on_bunches, tf.stack(y_elements))
    return tf.reduce_mean(losses, axis=1)

K.eval(custom_loss(y_true, y_pred))
>>> array([0.05873336, 1.1379 ], dtype=float32)
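This returns one loss per partition. If you want the single aggregate value the question asks for, one option is to reduce the per-partition losses to a scalar before returning (a minimal sketch; model is a hypothetical Keras model):

# Hypothetical wrapper: collapse the per-partition losses into one scalar.
def aggregated_loss(y_true, y_pred):
    return tf.reduce_mean(custom_loss(y_true, y_pred))

model.compile(optimizer='adam', loss=aggregated_loss)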
Answered by Milind Dalvi on January 21, 2021