Asked by hamza boulahia on February 20, 2021 (Data Science Stack Exchange)
Context:
I have been experimenting with some Kaggle datasets to learn more about image classification. In this binary image classification task, I tried something that I thought would increase the model's accuracy, but it didn't. I don't understand why exactly the accuracy decreased, so I am looking for some sort of explanation.
Problem statement:
I used a sequential CNN model with 7 convolutional layers, followed by 1 dense layer and finally a single sigmoid unit for the output.
After some hyperparameter tuning, the model reached 90% accuracy on the test set.
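
For reference, here is a minimal sketch of the kind of model described. The input size, filter counts, dense-layer width, and pooling placement are illustrative assumptions, not the exact architecture I used:

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Sketch of the described architecture: 7 conv layers, 1 dense layer,
    # and a single sigmoid output unit. Shapes and filter counts are
    # placeholders, not the tuned values.
    model = models.Sequential([layers.Input(shape=(128, 128, 3))])
    for filters in (32, 32, 64, 64, 128, 128, 256):  # 7 conv layers
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation="relu"))   # the single dense layer
    model.add(layers.Dense(1, activation="sigmoid"))  # binary output

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])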
Then I thought about using an XGBoost model in place of the last output layer. I achieved that by extracting features from the Keras model. My intuition was that, at worst, the accuracy would stay the same as before; however, the accuracy on the test set dropped to 88%!
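
For concreteness, the feature-extraction step might look like the sketch below. The names x_train, y_train, x_test, y_test and the XGBoost hyperparameters are placeholders, not my exact setup:

    import xgboost as xgb
    from tensorflow import keras

    # Reuse the trained CNN up to its penultimate (dense) layer as a
    # feature extractor; layers[-2] is the 128-unit dense layer above.
    feature_extractor = keras.Model(inputs=model.input,
                                    outputs=model.layers[-2].output)

    # x_train / x_test are the same preprocessed image arrays fed to the CNN.
    train_features = feature_extractor.predict(x_train)
    test_features = feature_extractor.predict(x_test)

    # Train a gradient-boosted classifier on the extracted features,
    # replacing the CNN's sigmoid output layer.
    clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(train_features, y_train)
    print("XGBoost test accuracy:", clf.score(test_features, y_test))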
I couldn't understand why the accuracy dropped, since the last output layer in the CNN was basically a sigmoid unit, which acts like a logistic regression model on the features from the previous layer, and from my limited experience XGBoost usually gives better results than logistic regression.
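
To make that comparison explicit, one could also fit a standalone logistic regression on the same extracted features. Note this is only an approximation of the CNN's final layer, since the CNN trained its sigmoid unit jointly with the convolutional weights:

    from sklearn.linear_model import LogisticRegression

    # A standalone logistic regression on the extracted features
    # approximates the CNN's Dense(1, sigmoid) output layer, except that
    # the CNN learned its final weights jointly with the feature extractor.
    logreg = LogisticRegression(max_iter=1000)
    logreg.fit(train_features, y_train)
    print("Logistic regression test accuracy:",
          logreg.score(test_features, y_test))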
Any insight or explanation would be appreciated. Thanks!