Data Science
Asked by breads_dead on November 27, 2020
I am looking to solve a multi-class text classification problem with long sequences, where some rows contain thousands of tokens. State-of-the-art methods such as BERT have a token limit (512 tokens for standard BERT), and I was wondering what is currently being done to handle longer text sequences in classification.
Traditional methods such as Naive Bayes, SVM, and decision trees don't have such a limit: they typically operate on bag-of-words (or TF-IDF) representations, so document length only affects the feature values, never whether the model can process the input at all. A sketch of such a pipeline follows below.
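As an illustration, here is a minimal sketch of a length-agnostic baseline with scikit-learn; the documents, label names, and choice of MultinomialNB are placeholders (a LinearSVC would drop in the same way):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for long documents; real rows can run to thousands of tokens.
docs = [
    "first long document about match results and league standings ...",
    "second long document about parliamentary debates and elections ...",
    "third long document about tournament fixtures and transfers ...",
]
labels = ["sports", "politics", "sports"]

# TF-IDF maps each document to a sparse term-weight vector, so there is
# no hard cap on input length, unlike a fixed-size transformer input.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(docs, labels)

print(clf.predict(["another long document about election polling ..."]))
```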
See also this related discussion of applying BERT to longer documents: https://stackoverflow.com/questions/58636587/how-to-use-bert-for-long-text-classification
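One workaround often suggested in threads like the one linked above is to split a long document into overlapping chunks that fit within BERT's limit, classify each chunk, and aggregate the predictions. Below is a minimal sketch of that idea, assuming the Hugging Face transformers library; the checkpoint name, window sizes, and label count are placeholder choices, and averaging logits is just one possible aggregation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # placeholder checkpoint; any BERT variant works
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=5)
model.eval()

def classify_long_text(text, window=510, stride=255):
    # Tokenize the whole document without truncation, then slide a window
    # over the token ids (510 leaves room for [CLS] and [SEP]).
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + window] for i in range(0, len(ids), stride)]

    logits = []
    with torch.no_grad():
        for chunk in chunks:
            # Re-add the special tokens so each chunk is a valid BERT input.
            input_ids = torch.tensor(
                [[tokenizer.cls_token_id] + chunk + [tokenizer.sep_token_id]]
            )
            logits.append(model(input_ids=input_ids).logits)

    # Average per-chunk logits into a single document-level prediction.
    return torch.stack(logits).mean(dim=0).argmax(dim=-1).item()
```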
Answered by Erwan on November 27, 2020