Data Science Asked by Sandeep Bhutani on August 19, 2020
I am sure this is a very common problem, but I would like to hear from experts how to tackle it. Note that I mostly deal with textual data (NLP problems).
When a supervised learning model is created, say a text classifier, and it works well on seen data, we deploy it to production (you can think of a chatbot as well).
But in real time, when a new type of data arrives and the prediction fails, we find that a new word or new pattern is breaking the model. So we go ahead and retrain the model on the newly encountered data. This is where the continuous learning problem starts.
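To make the pain point concrete, here is a minimal sketch of the cycle I am describing, assuming scikit-learn with made-up example texts and labels; every batch of production failures forces a full refit of the pipeline:

```python
# Minimal sketch of the manual retrain cycle: a TF-IDF + logistic-regression
# text classifier that must be fully refit whenever newly labelled production
# failures arrive. Texts and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["reset my password", "cancel my order"]
train_labels = ["account", "orders"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A new phrasing appears in production and is misclassified...
new_texts = ["revoke my subscription"]
new_labels = ["orders"]

# ...so the whole pipeline is refit on old + new data: the labour-intensive
# step this question is about.
model.fit(train_texts + new_texts, train_labels + new_labels)
```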
Can ML/NLP veterans please suggest some alternatives to this manual labour? The following approaches have been tried, and their problems are listed as well:
What you are describing is called auto-adaptive or continual learning. This is what most recommendation systems use to adapt to ever-changing data and feedback, and it overlaps with what is often packaged as autoML. This article, https://towardsdatascience.com/how-to-apply-continual-learning-to-your-machine-learning-models-4754adcd7f7f, does a good job of explaining it. Based on what your data looks like, you may have to choose an appropriate retraining strategy and do a staggered deployment.
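As a rough sketch of one such retraining strategy, incremental (online) updates rather than full refits, assuming scikit-learn and made-up intents and texts:

```python
# HashingVectorizer is stateless, so previously unseen words do not require
# refitting the vectorizer; SGDClassifier.partial_fit folds newly labelled
# examples into the existing model instead of retraining from scratch.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

classes = ["account", "orders"]  # assumed label set for illustration
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier()

# Initial training batch.
X0 = vectorizer.transform(["reset my password", "cancel my order"])
clf.partial_fit(X0, ["account", "orders"], classes=classes)

# Later, a batch of freshly labelled production failures is folded in.
X1 = vectorizer.transform(["revoke my subscription"])
clf.partial_fit(X1, ["orders"])
```

In practice you would still validate each updated model and roll it out gradually, which is the staggered deployment mentioned above.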
Answered by tehem on August 19, 2020
I totally understand the NLP world and the kind of problem you are asking about; there is no single straightforward solution. One option is a "human in the loop" setup built around a sentence encoder. You can use a hybrid approach combining cosine similarity, topic modelling, fuzzywuzzy, and BERT, and then use a voting mechanism to filter out the best resolution.
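A minimal sketch of what such a hybrid, voting-based setup could look like, assuming sentence-transformers (the model name all-MiniLM-L6-v2 is just an example) and fuzzywuzzy, with made-up intents and thresholds:

```python
# Score an incoming utterance against known intents with (a) sentence-embedding
# cosine similarity and (b) fuzzy string matching, then vote. Unmatched inputs
# fall through to a human reviewer ("human in the loop").
from sentence_transformers import SentenceTransformer, util
from fuzzywuzzy import fuzz

encoder = SentenceTransformer("all-MiniLM-L6-v2")
known = {"account": "reset my password", "orders": "cancel my order"}

def classify(text, sim_threshold=0.5, fuzz_threshold=70):
    votes = {}
    text_emb = encoder.encode(text, convert_to_tensor=True)
    for intent, example in known.items():
        score = 0
        # Vote 1: cosine similarity between sentence embeddings.
        example_emb = encoder.encode(example, convert_to_tensor=True)
        if util.cos_sim(text_emb, example_emb).item() >= sim_threshold:
            score += 1
        # Vote 2: fuzzy token overlap.
        if fuzz.token_set_ratio(text, example) >= fuzz_threshold:
            score += 1
        votes[intent] = score
    best = max(votes, key=votes.get)
    # No votes anywhere: route the utterance to a human for labelling.
    return best if votes[best] > 0 else None

print(classify("revoke my subscription"))
```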
Answered by Syenix on August 19, 2020