Once a predictive model is in production, how can it be evaluated?

Data Science · Asked by Mahsaem on July 7, 2021

I have a data science project: predicting a customer's next purchase day. One year of customer behavioral data was split into 9 months for training and 3 months for testing. Using RFM analysis, I trained models with several classifiers, and the best one's results are as follows:

Accuracy of XGB classifier on training set: 0.93

Accuracy of XGB classifier on test set: 0.68

This is a school project, and I was wondering how, in real-world projects, we can evaluate a model's performance after it's in production. How can I measure how successful my model was? What if the performance measures in production are much lower than my test results?
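For reference, a minimal sketch of how train/test accuracies like these are typically computed, with random stand-in data in place of the real RFM features:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for RFM-style features (recency, frequency, monetary)
# and next-purchase-window class labels.
X = np.random.rand(1000, 3)
y = np.random.randint(0, 3, size=1000)

# 75/25 split standing in for the 9-month/3-month split described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = XGBClassifier(eval_metric="mlogloss")
model.fit(X_train, y_train)

print("Accuracy on training set:",
      accuracy_score(y_train, model.predict(X_train)))
print("Accuracy on test set:",
      accuracy_score(y_test, model.predict(X_test)))
```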

2 Answers

This is in fact a very good question. The answer is simple but depends on the case. In general, after pushing a model to production, we apply an audit process. Let me explain: in practice, a machine learning model pushed to production usually replaces another process (e.g., a manual process; this is the automation case). At the beginning, everything the model predicts is audited through that other process (e.g., manually); we call this the pilot stage. By comparing the model's performance against the manual process, we establish the quality of the model.

Once we are happy, we gradually reduce the audit percentage from 100% down to 5% or so (there is some math behind choosing the right audit percentage). The audit never goes away entirely: it is always used to measure the model's performance and to establish ground-truth labels for new samples that can be added to the training set. A sketch of this loop follows.
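A minimal sketch of that audit loop, where the `get_manual_label` hook is hypothetical and stands in for whatever manual process exists:

```python
import random

def audit(predictions, get_manual_label, rate=0.05):
    """predictions: iterable of (record_id, predicted_label) pairs.
    get_manual_label: hypothetical hook into the manual process.
    Sends a random `rate` fraction of predictions for manual review
    and returns the model/human agreement rate on that sample."""
    sample = [p for p in predictions if random.random() < rate]
    if not sample:
        return None
    agreed = sum(get_manual_label(rid) == pred for rid, pred in sample)
    return agreed / len(sample)

# During the pilot stage, rate would start at 1.0 (audit everything)
# and be reduced once agreement with the manual process is
# consistently high. Audited samples also become new training data.
```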

In fact, training models in theory is one thing; using them in production is another. It is a genuinely complex process. One more point: we also like to implement protection mechanisms around the model, for example data drift detection, uncertainty detection, and so on.
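As one example of such a mechanism, here is a minimal per-feature data drift check using a two-sample Kolmogorov-Smirnov test from scipy; the alpha threshold is an illustrative choice, not a standard:

```python
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.01):
    """Flag drift when the live distribution of a numeric feature
    differs significantly from the distribution seen at training
    time, according to a two-sample Kolmogorov-Smirnov test."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha
```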

Correct answer by Bashar Haddad on July 7, 2021

After you train your model, if you want to make predictions on new data, something like model.predict(new_data) can help. If your model performed well on the training data but very badly on the test data, the main cause is overfitting (the model has memorized the training data instead of learning patterns that generalize). Hope it helps.
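A minimal runnable sketch of that call, with a hypothetical fitted XGBoost model; new_data must have the same feature columns, in the same order, as the training data:

```python
import numpy as np
from xgboost import XGBClassifier

# Hypothetical fitted model on random stand-in data.
model = XGBClassifier().fit(np.random.rand(100, 3),
                            np.random.randint(0, 2, size=100))
new_data = np.random.rand(5, 3)

predictions = model.predict(new_data)          # hard class labels
probabilities = model.predict_proba(new_data)  # per-class probabilities
```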

Answered by ihatecoding on July 7, 2021
