
Custom loss for low false positive rate (higher precision)

Data Science Asked by Inlinesidekick on March 22, 2021

I am working with a scenario where I need to minimize the false positive rate for the minority class. Additionally, my dataset is imbalanced (10% minority class, 90% majority class). I am using the class_weight argument of the Keras fit function.
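For context, one common way to build the class_weight dict is "balanced" weighting, n_samples / (n_classes * class_count), the same formula scikit-learn's compute_class_weight uses. A minimal sketch with hypothetical labels (the 90/10 split from the question):

```python
import numpy as np

# Hypothetical labels: 90% majority class (0), 10% minority class (1).
y_train = np.array([0] * 900 + [1] * 100)

# "Balanced" weighting: n_samples / (n_classes * class_count).
classes, counts = np.unique(y_train, return_counts=True)
class_weight = {int(c): len(y_train) / (len(classes) * n)
                for c, n in zip(classes, counts)}
print(class_weight)  # {0: 0.555..., 1: 5.0}

# In Keras this dict would then be passed as:
# model.fit(X_train, y_train, class_weight=class_weight, ...)
```

Note that weighting the minority class up pushes the model toward predicting positive, which tends to *increase* false positives — so for this goal you may actually want to weight in the other direction.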

Additionally, I would also like to try a custom loss function to see if this makes a difference. A number of solutions online (e.g. "Keras custom loss function as True Negatives by (True Negatives plus False Positives)") discuss a specificity/precision-style loss function. However, such a loss is not differentiable, so I don't think this can work.

Any suggestions?

Thanks in advance!

One Answer

You have two options here that I can see:

1) Quick and very, very dirty: rebalance your data so that negatives are over-represented in the training set. Your model will then prefer to predict negative rather than positive, and false positives will be suppressed. To see why, consider the limit where all training data are negative. Tune the data balance until you get the desired result (I feel like a bad person for saying this...)

2) Use a custom loss function such as the $F_\beta$ score (https://en.wikipedia.org/wiki/F1_score). However, this may be numerically tricky to implement for training, as the gradient of the naive (thresholded) function generally behaves badly.
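One standard workaround for the differentiability problem is to use the predicted probabilities as "soft" counts instead of thresholding: soft TP = sum(p·y), soft FP = sum(p·(1−y)), soft FN = sum((1−p)·y). A sketch of the resulting surrogate loss in NumPy for illustration (in Keras you would replace np with TensorFlow ops so gradients flow); the beta value and example arrays are illustrative:

```python
import numpy as np

def soft_fbeta_loss(y_true, y_pred, beta=0.5, eps=1e-7):
    """Differentiable surrogate for 1 - F_beta.

    Uses predicted probabilities as soft counts rather than
    thresholded predictions. beta < 1 weights precision more
    than recall, which is what you want for a low false
    positive rate.
    """
    tp = np.sum(y_pred * y_true)            # soft true positives
    fp = np.sum(y_pred * (1.0 - y_true))    # soft false positives
    fn = np.sum((1.0 - y_pred) * y_true)    # soft false negatives
    b2 = beta ** 2
    fbeta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1.0 - fbeta

# Confident, mostly correct predictions give a small loss:
y = np.array([1.0, 0.0, 0.0, 1.0])
p = np.array([0.9, 0.1, 0.2, 0.8])
print(soft_fbeta_loss(y, p, beta=0.5))  # ~0.15
```

Because the soft counts are sums over the batch, this loss is batch-level rather than per-sample, so results can be sensitive to batch size and batch composition.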

Answered by Dave Kielpinski on March 22, 2021
