Data Science Asked on November 17, 2020
I am training a classification CNN on a labeled dataset $\langle x, y\rangle$. The network reaches an accuracy of 0.92 on the test and validation sets. After this, I pre-process the data to simulate some reasonable conditions by putting it through a process $\hat{x} = \Psi(x)$, obtaining a new dataset $\langle\hat{x}, y\rangle$ with the same labels. An example of such a process is a conversion between color domains, e.g. $x \in \text{RGB}$ and $\hat{x} \in \text{grayscale}$.
In the example I gave, the process may reduce the performance of the network if it relies on color features that are lost. In my case no information is lost, although the data may be distorted. After applying $\Psi$ and training a new network, I see a degradation in performance, with an accuracy of only 0.81. My guess is that I am experiencing concept drift, though I am not sure how to show or visualize this. I also do not know how to address concept drift in a neural network.
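To make the setup concrete, here is a minimal sketch of what $\Psi$ could look like in the grayscale example above (the luminance weights are the standard ITU-R BT.601 coefficients; my actual $\Psi$ is different and cannot be shown):

```python
import numpy as np

def psi_grayscale(x_rgb):
    """Map an (H, W, 3) RGB image to a single-channel grayscale image.

    The labels y are unchanged; only the input representation changes.
    """
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luminance weights
    return x_rgb @ weights  # result has shape (H, W)
```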
Unfortunately, I cannot reveal the exact case I am working with. Is there a way to analyze and show that the data distribution has changed in the new domain?
How do I address the case described, and what can I do to try to improve the performance (regardless of the specific task)?
There are multiple ways to detect covariate shift, for example the Kullback–Leibler divergence, but my favorite is adversarial validation. For images I would also look at a SHAP explainer: if it is activating on different, seemingly random places in the test set, you can suspect something fishy in the concept.
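A minimal sketch of adversarial validation, assuming the images are flattened into feature vectors (the names `X_orig`/`X_new` and the random-forest choice are just illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def adversarial_validation_auc(X_orig, X_new, n_estimators=100, cv=5):
    """Train a classifier to tell original samples from transformed ones.

    An ROC AUC close to 0.5 means the two distributions are hard to
    separate; an AUC close to 1.0 indicates a clear distribution shift.
    """
    X = np.vstack([X_orig, X_new])
    # Label which domain each sample came from (0 = original, 1 = transformed).
    domain = np.concatenate([np.zeros(len(X_orig)), np.ones(len(X_new))])

    clf = RandomForestClassifier(n_estimators=n_estimators, n_jobs=-1)
    scores = cross_val_score(clf, X, domain, cv=cv, scoring="roc_auc")
    return scores.mean()

# Example usage (images flattened to 2-D feature matrices):
# auc = adversarial_validation_auc(X_rgb.reshape(len(X_rgb), -1),
#                                  X_gray.reshape(len(X_gray), -1))
# print(f"Adversarial validation AUC: {auc:.3f}")
```

If the AUC is well above 0.5, the classifier can tell the two domains apart, which is direct evidence that the distribution has changed.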
To remedy concept shift in a neural network, try batch normalization if you haven't. Why? The weights of the deeper layers learned by the network become relatively less dependent on the weights learned in the shallower layers, which helps avoid the shift.
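A minimal sketch of where batch normalization layers typically go in a small CNN (an illustrative PyTorch architecture, not your actual network):

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),   # normalize activations after the first conv
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),   # normalize again before the deeper layers
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```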
Answered by Noah Weber on November 17, 2020