Different results from running a single model and from the VotingClassifier?

Data Science Asked by gmaravel on January 9, 2021

I have a situation where the predictions differ between a single classifier (e.g. an SVM) used on its own and sklearn.ensemble.VotingClassifier configured with that same model as its only estimator.

# model_SVC is the standalone SVC (probability=True is needed for predict_proba)
vot = VotingClassifier(estimators=[('SVC', model_SVC)], voting='soft')
# fit both the bare classifier and the one-estimator ensemble on the same data
probas = [c.fit(X, y).predict_proba(X) for c in (model_SVC, vot)]

Then if I print the results

for i in range(2):
    print(probas[i][0], unique_types[np.argmax(probas[i][0])])

I get, for example, these arrays of probabilities for a 7-class problem:

SVM: [0.01703915 0.00290338 0.00528683 0.00578544 0.89824208 0.01244273
 0.0583004 ] Class_4
Vot: [0.01604109 0.00273581 0.00497365 0.00547579 0.90292339 0.00859484
 0.05925544] Class_4
...
...
SVM: [0.27039153 0.03815588 0.30307504 0.02600352 0.03358154 0.26581178
 0.06298071] Class_2
Vot: [0.23473323 0.03981095 0.25290641 0.02860963 0.03515882 0.34675459
 0.06202637] Class_5
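
To quantify these discrepancies over all samples rather than just the two shown, the two probability arrays can be compared element-wise; a small sketch, assuming the probas list from the snippet above:

import numpy as np

# probas[0] holds the bare SVC probabilities, probas[1] the VotingClassifier ones
diff = np.abs(probas[0] - probas[1])
print("max per-class difference:", diff.max())

# count how many samples get a different predicted class from the two estimators
changed = np.argmax(probas[0], axis=1) != np.argmax(probas[1], axis=1)
print("samples whose predicted class changes:", changed.sum())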

Why do I get slightly different values? In the first case the differences are small and do not affect the final classification (although I wouldn't expect even that much of a difference!), but in the second case the classification result changes…
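
For completeness, here is a self-contained sketch of the comparison described above. Since the question does not show how model_SVC or the data are built, the synthetic 7-class dataset and the default SVC settings below are assumptions (probability=True is required so that SVC exposes predict_proba):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC

# synthetic 7-class data standing in for the original X, y (purely illustrative)
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=7, random_state=0)
unique_types = np.unique(y)

# probability=True is needed for predict_proba; other settings are sklearn defaults
model_SVC = SVC(probability=True)
vot = VotingClassifier(estimators=[('SVC', model_SVC)], voting='soft')

# fit the bare SVC and the single-estimator soft-voting ensemble on the same data
probas = [c.fit(X, y).predict_proba(X) for c in (model_SVC, vot)]

for name, prb in zip(('SVM', 'Vot'), (probas[0][0], probas[1][0])):
    print(name + ':', prb, unique_types[np.argmax(prb)])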
