How to set class weights for imbalanced classes in Keras?

Data Science Asked on December 1, 2020

I know that Keras supports this via the class_weight dictionary parameter of fit, but I couldn't find any example. Would somebody be so kind as to provide one?

By the way, is the appropriate practice in this case simply to weight up the minority class in proportion to its underrepresentation?

8 Answers

If you are talking about the regular case, where your network produces only one output, then your assumption is correct. In order to force your algorithm to treat every instance of class 1 as 50 instances of class 0 you have to:

  1. Define a dictionary with your labels and their associated weights

    class_weight = {0: 1.,
                    1: 50.,
                    2: 2.}
    
  2. Feed the dictionary as a parameter:

    model.fit(X_train, Y_train, epochs=5, batch_size=32, class_weight=class_weight)
    

EDIT: "treat every instance of class 1 as 50 instances of class 0" means that in your loss function you assign higher value to these instances. Hence, the loss becomes a weighted average, where the weight of each sample is specified by class_weight and its corresponding class.

From Keras docs:

class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only).
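
To make this concrete, here is a rough sketch of what the weighting does to the loss (an illustration only; the class_weight dictionary is the one from step 1, the per-sample losses are made-up numbers, and the exact normalization depends on the Keras version and loss reduction):

import numpy as np

# Each sample's cross-entropy term is scaled by the weight of its true class,
# so class-1 samples count 50 times as much as class-0 samples here.
class_weight = {0: 1., 1: 50., 2: 2.}
y_true = np.array([0, 1, 2, 1])                    # toy batch of integer labels
per_sample_loss = np.array([0.3, 0.7, 0.2, 1.1])   # hypothetical unweighted losses

sample_weights = np.array([class_weight[c] for c in y_true])
weighted_loss = np.sum(sample_weights * per_sample_loss) / np.sum(sample_weights)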

Correct answer by layser on December 1, 2020

I use this kind of rule for class_weight :

import numpy as np
import math

# labels_dict : {ind_label: count_label}
# mu : parameter to tune 

def create_class_weight(labels_dict, mu=0.15):
    total = np.sum(list(labels_dict.values()))
    class_weight = dict()

    for key in labels_dict.keys():
        # log-damped inverse frequency, clipped at 1.0 so majority classes
        # are never down-weighted below 1
        score = math.log(mu * total / float(labels_dict[key]))
        class_weight[key] = score if score > 1.0 else 1.0

    return class_weight

# random labels_dict
labels_dict = {0: 2813, 1: 78, 2: 2814, 3: 78, 4: 7914, 5: 248, 6: 7914, 7: 248}

create_class_weight(labels_dict)

math.log smooths the weights for very imbalanced classes. This returns:

{0: 1.0,
 1: 3.749820767859636,
 2: 1.0,
 3: 3.749820767859636,
 4: 1.0,
 5: 2.5931008483842453,
 6: 1.0,
 7: 2.5931008483842453}
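
As a check on one entry (an illustrative calculation): class 1 has 78 samples out of 22107 in total, so its weight is log(0.15 * 22107 / 78) ≈ 3.75, while the majority classes score below 1 and are clamped to 1.0.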

Answered by J.Guillaumin on December 1, 2020

You could simply implement the class_weight from sklearn:

  1. Let's import the modules first

    from sklearn.utils import class_weight
    import numpy as np
    
  2. In order to calculate the class weights do the following (fit expects a dictionary mapping class indices to weights)

    class_weights = class_weight.compute_class_weight(class_weight='balanced',
                                                      classes=np.unique(y_train),
                                                      y=y_train)
    class_weights = dict(zip(np.unique(y_train), class_weights))
    
  3. Finally, add it to the model fit call

    model.fit(X_train, y_train, class_weight=class_weights)
    

Attention: I edited this post and changed the variable name from class_weight to class_weights so as not to overwrite the imported module. Adjust accordingly when copying code from the comments.
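
For reference, the 'balanced' mode weights each class by n_samples / (n_classes * n_samples_in_class); for example, with 900 samples of class 0 and 100 of class 1, the weights come out to 1000 / (2 * 900) ≈ 0.56 and 1000 / (2 * 100) = 5.0.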

Answered by PSc on December 1, 2020

I found the following example of coding up class weights in the loss function using the MNIST dataset. See link here.

from itertools import product
from keras import backend as K

def w_categorical_crossentropy(y_true, y_pred, weights):
    # weights is an (nb_cl x nb_cl) cost matrix indexed as weights[true_class, predicted_class]
    nb_cl = len(weights)
    final_mask = K.zeros_like(y_pred[:, 0])
    # build a one-hot mask of the predicted class for every sample
    y_pred_max = K.max(y_pred, axis=1)
    y_pred_max = K.reshape(y_pred_max, (K.shape(y_pred)[0], 1))
    y_pred_max_mat = K.cast(K.equal(y_pred, y_pred_max), K.floatx())
    # accumulate the cost of each (true class, predicted class) combination
    for c_p, c_t in product(range(nb_cl), range(nb_cl)):
        final_mask += (weights[c_t, c_p] * y_pred_max_mat[:, c_p] * y_true[:, c_t])
    return K.categorical_crossentropy(y_true, y_pred) * final_mask
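
A brief usage sketch (my own illustration; the 3-class cost matrix is made up, and model is assumed to be an already-defined Keras model): wrap the function with functools.partial so it has the standard loss(y_true, y_pred) signature.

from functools import partial
import numpy as np

# Hypothetical cost matrix: predicting class 0 when the true class is 1
# costs 5x more than any other mistake.
w_array = np.ones((3, 3))
w_array[1, 0] = 5.0

ncce = partial(w_categorical_crossentropy, weights=w_array)
ncce.__name__ = 'w_categorical_crossentropy'  # some Keras versions look up the loss name

model.compile(optimizer='adam', loss=ncce, metrics=['accuracy'])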

Answered by CathyQian on December 1, 2020

class_weight is fine, but as @Aalok said this won't work if you are one-hot encoding multi-label classes. In that case, use sample_weight:

sample_weight: optional array of the same length as x, containing weights to apply to the model's loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().

sample_weights is used to provide a weight for each training sample. That means that you should pass a 1D array with the same number of elements as your training samples (indicating the weight for each of those samples).

class_weights is used to provide a weight or bias for each output class. This means you should pass a weight for each class that you are trying to classify.

sample_weight must be given a numpy array, since its shape will be evaluated.

See also this answer.
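
As an illustration (a sketch of my own, assuming one-hot encoded y_train and the class_weight dictionary from the first answer), a per-sample weight array can be built from each sample's class and passed to fit:

import numpy as np

class_weight = {0: 1., 1: 50., 2: 2.}

# one weight per training sample, looked up from that sample's true class
sample_weight = np.array([class_weight[c] for c in y_train.argmax(axis=1)])

model.fit(X_train, y_train, epochs=5, batch_size=32, sample_weight=sample_weight)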

Answered by Charly Empereur-mot on December 1, 2020

Adding to the solution at https://github.com/keras-team/keras/issues/2115: if you need more than class weighting, i.e. different costs for false positives and false negatives, you can override the respective loss function in recent Keras versions as given below. Note that weights is a square matrix.

from itertools import product

import numpy as np
from tensorflow import keras

class WeightedCategoricalCrossentropy(keras.losses.CategoricalCrossentropy):

    def __init__(
        self,
        weights,
        from_logits=False,
        label_smoothing=0,
        reduction=keras.losses.Reduction.SUM_OVER_BATCH_SIZE,
        name='categorical_crossentropy',
    ):
        super().__init__(
            from_logits=from_logits,
            label_smoothing=label_smoothing,
            reduction=reduction,
            name=f"weighted_{name}",
        )
        self.weights = weights

    def call(self, y_true, y_pred):
        weights = self.weights
        nb_cl = len(weights)
        final_mask = keras.backend.zeros_like(y_pred[:, 0])
        y_pred_max = keras.backend.max(y_pred, axis=1)
        y_pred_max = keras.backend.reshape(
            y_pred_max, (keras.backend.shape(y_pred)[0], 1))
        y_pred_max_mat = keras.backend.cast(
            keras.backend.equal(y_pred, y_pred_max), keras.backend.floatx())
        for c_p, c_t in product(range(nb_cl), range(nb_cl)):
            final_mask += (
                weights[c_t, c_p] * y_pred_max_mat[:, c_p] * y_true[:, c_t])
        return super().call(y_true, y_pred) * final_mask
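
A short usage sketch (my own illustration; the 3-class cost matrix and model are assumed, not taken from the answer):

# predicting class 0 when the true class is 1 costs 5x more than other mistakes
weights = np.ones((3, 3))
weights[1, 0] = 5.0

model.compile(optimizer='adam',
              loss=WeightedCategoricalCrossentropy(weights),
              metrics=['accuracy'])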

Answered by Praveen Kulkarni on December 1, 2020

from collections import Counter

# trainGen.classes lists the class index of every sample, e.g. from flow_from_directory
itemCt = Counter(trainGen.classes)                # samples per class
maxCt = float(max(itemCt.values()))               # size of the largest class
cw = {clsID: maxCt / numImg for clsID, numImg in itemCt.items()}

This works with a generator or with standard arrays. Your largest class will have a weight of 1, while the others will have values greater than 1 depending on how infrequent they are relative to the largest class.

class_weight accepts a dictionary as input.
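
For instance (an illustrative sketch), counts of {0: 900, 1: 100} yield cw = {0: 1.0, 1: 9.0}, which can then be passed to fit:

# trainGen is assumed to be the same generator used to compute cw above
model.fit(trainGen, epochs=5, class_weight=cw)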

Answered by Allie on December 1, 2020

Here's a one-liner using scikit-learn

from sklearn.utils import class_weight
import numpy as np

class_weights = dict(zip(np.unique(y_train),
                         class_weight.compute_class_weight(class_weight='balanced',
                                                           classes=np.unique(y_train),
                                                           y=y_train)))

Answered by samurdhilbk on December 1, 2020
