Data Science — Asked by Josselin Tobelem on March 9, 2021
This is a pre-project question.
I would like to find or build a biased dataset to demonstrate what happens when training data are biased (an unbalanced ethnicity distribution, for example).
I am trying this dataset (also this one) and Azure Custom Vision for the training.
My question is: if I remove part of the images, would that be enough to bias the dataset so that the predictions are only correct for certain subsets of the original dataset?
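For concreteness, this is roughly how I imagine building the skewed training subset (a minimal sketch; the metadata file name and the "file"/"ethnicity" columns are my own assumptions about the dataset layout):

```python
# Minimal sketch: build a deliberately skewed training subset by dropping
# most images belonging to one demographic group. The metadata file name and
# the column names ("file", "ethnicity") are assumptions about the dataset.
import pandas as pd

RNG_SEED = 0
KEEP_FRACTION = 0.05  # keep only 5% of images from the under-represented group

meta = pd.read_csv("faces_metadata.csv")  # assumed columns: file, ethnicity, ...

majority = meta[meta["ethnicity"] != "group_to_underrepresent"]
minority = meta[meta["ethnicity"] == "group_to_underrepresent"]

biased = pd.concat([
    majority,
    minority.sample(frac=KEEP_FRACTION, random_state=RNG_SEED),
]).sample(frac=1.0, random_state=RNG_SEED)  # shuffle the rows

biased.to_csv("faces_metadata_biased.csv", index=False)
print(biased["ethnicity"].value_counts(normalize=True))
```

The biased metadata file would then drive which images get uploaded for training, so the model only ever sees a handful of examples from the removed group.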
There are both subjective and objective approaches to removing bias (de-biasing techniques) from training datasets. The sources of bias generally arise from underlying data quality issues.
The essential first step for de-biasing is identifying the bias subspace within the dataset. The key task is then to neutralise and soften that bias, if not equalise it completely.
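As a rough illustration of what identifying and neutralising a bias direction can look like for fixed feature vectors, here is a minimal sketch; taking a single direction as the difference of group means is a simplification of proper subspace methods, and all data in it is synthetic:

```python
# Minimal sketch of identifying a 1-D bias direction in feature space and
# neutralising it by projection. Real subspace methods (e.g. PCA over several
# group-difference vectors) are more involved; this only illustrates the idea.
import numpy as np

rng = np.random.default_rng(0)

# Toy features: two demographic groups with 128-d embeddings (synthetic).
group_a = rng.normal(loc=0.5, size=(200, 128))
group_b = rng.normal(loc=-0.5, size=(200, 128))
features = np.vstack([group_a, group_b])

# 1. Identify the bias direction as the normalised difference of group means.
bias_dir = group_a.mean(axis=0) - group_b.mean(axis=0)
bias_dir /= np.linalg.norm(bias_dir)

# 2. Neutralise: remove each vector's component along the bias direction.
neutralised = features - np.outer(features @ bias_dir, bias_dir)

# After neutralisation the group means no longer differ along that direction.
print("before:", (group_a.mean(0) - group_b.mean(0)) @ bias_dir)
print("after :", (neutralised[:200].mean(0) - neutralised[200:].mean(0)) @ bias_dir)
```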
There are interesting approaches for de-biasing image datasets. DebFace addresses the problem in automated face recognition by constructing a de-biasing adversarial neural network that learns disentangled feature representations for both unbiased face recognition and demographics estimation. Adversarial learning is adopted to minimise the correlation among feature factors so as to reduce the influence of bias. Please refer to the DebFace paper for the details of this approach.
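For intuition, below is a minimal PyTorch sketch of adversarial de-biasing via gradient reversal; the layer sizes, class counts, and the single gradient-reversal adversary are illustrative assumptions that only approximate the spirit of DebFace, which is considerably more elaborate:

```python
# Minimal sketch of adversarial de-biasing: the encoder is trained to support
# the main task while making it hard for an adversary to predict the protected
# attribute. Gradient reversal is one common way to implement this idea.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
task_head = nn.Linear(32, 10)   # main task head, e.g. 10 identity classes (assumed)
adversary = nn.Linear(32, 4)    # protected attribute, e.g. 4 demographic groups (assumed)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()) + list(adversary.parameters()),
    lr=1e-3,
)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data (stand-in for real face features/labels).
x = torch.randn(64, 128)
y_task = torch.randint(0, 10, (64,))
y_prot = torch.randint(0, 4, (64,))

z = encoder(x)
task_loss = loss_fn(task_head(z), y_task)
# The adversary learns to predict the protected attribute, while the reversed
# gradient pushes the encoder to make that prediction harder.
adv_loss = loss_fn(adversary(grad_reverse(z)), y_prot)

opt.zero_grad()
(task_loss + adv_loss).backward()
opt.step()
```

The design choice is the same as in the paper's framing: the encoder's features should remain useful for recognition while carrying as little demographic information as possible.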
Answered by Gokul Alex on March 9, 2021