Data Science Asked on August 4, 2021
I’m working on developing a model with a highly imbalanced dataset (0.7% minority class). To remedy the imbalance, I was planning to oversample using algorithms from the imbalanced-learn library. I had a workflow in mind which I wanted to share to get an opinion on whether I’m heading in the right direction or have missed something.
Does this process sound reasonable? I’d appreciate any feedback or suggestions.
I am not sure if, in the last point, you meant the validation set rather than the testing set.
Here is my advice:

1- Understand the impact of data imbalance. Start with the difference between overall accuracy and average class accuracy: with a 0.7% minority class, a model that always predicts the majority class scores 99.3% overall accuracy but only 50% average class accuracy. If you only care about overall accuracy, the imbalance is not a problem; otherwise you need to handle it (a sketch follows this list).
2- The distribution of the training set can be changed by oversampling, undersampling, synthetic sampling, data augmentation, etc., BUT you should NOT change the distribution of the validation and testing sets (see the second sketch after this list).
3- Use the training set for training and the validation set for tuning hyperparameters, BUT do not touch the testing set.
4- Use the testing set for final evaluation only.
5- You can control the model's behavior by controlling the training data distribution; you do not need fully balanced data. Controlling the oversampling ratio lets you steer the model's trade-off between classes without adjusting a decision threshold (see the third sketch after this list).
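To make point 1 concrete, here is a minimal sketch of the gap between overall accuracy and average class accuracy (scikit-learn's balanced_accuracy_score is the per-class average of recall); the toy labels mirror the 0.7% minority rate from the question:

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# Toy labels mirroring the question's imbalance: 993 majority, 7 minority.
y_true = np.array([0] * 993 + [1] * 7)
# A degenerate model that always predicts the majority class.
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))           # 0.993 -- looks excellent
print(balanced_accuracy_score(y_true, y_pred))  # 0.5   -- no better than chance
```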
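For point 2, a sketch of the split-then-resample order, using imbalanced-learn's SMOTE as one example of an oversampler; make_classification is just a synthetic stand-in for the real dataset, and the split sizes are illustrative:

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for the real data: roughly 0.7% minority class.
X, y = make_classification(n_samples=20000, weights=[0.993], flip_y=0,
                           random_state=42)

# Split FIRST so the validation and testing sets keep the original distribution.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

# Oversample the training split only.
X_train_res, y_train_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

print(Counter(y_train), "->", Counter(y_train_res))  # training set rebalanced
print(Counter(y_val), Counter(y_test))               # val/test left imbalanced
```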
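And for point 5, a sketch of partial rebalancing: for binary labels, imbalanced-learn's sampling_strategy parameter (shown here with RandomOverSampler) takes the desired minority-to-majority ratio, so 0.1 yields roughly 1:10 rather than a fully balanced 1:1 set. The 0.1 value is an illustrative choice you would tune on the validation set:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler

# Synthetic stand-in for the real training split.
X, y = make_classification(n_samples=10000, weights=[0.993], flip_y=0,
                           random_state=0)
print(Counter(y))  # about {0: 9930, 1: 70}

# Target a 1:10 minority:majority ratio instead of full balance; varying this
# ratio shifts how strongly the model favors the minority class.
ros = RandomOverSampler(sampling_strategy=0.1, random_state=0)
X_res, y_res = ros.fit_resample(X, y)
print(Counter(y_res))  # majority unchanged; minority raised to ~10% of it
```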
Answered by Bashar Haddad on August 4, 2021