Data Science Asked by Andrew Furman on November 5, 2021
I’m trying to find an algorithm that would fit this use case:
My data: a bunch of fixed-size integer arrays, e.g.
[0,2,3,4,5]
[1,2,3,1,5]
[4,1,2,4,5]
...
Input: the first few integers of such an array; output: a prediction of the rest of the array, e.g.
[0,1] -> [2,4,2]
[3] -> [1,3,2,5,5]
[3,2,4,1] -> [4]
So the question is how to model the following problem: given an initial subsection of a sequence, predict the rest of the sequence. This is a sequence-to-sequence problem.
For this, as Derek O has alluded to, I would suggest an encoder-decoder architecture. The encoder (an RNN/LSTM) encodes your input sequence into a "hidden representation"; the decoder (another RNN/LSTM) then decodes that hidden representation into the remaining sequence of integers.
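To make this concrete, here is a minimal sketch of that idea using TensorFlow/Keras. Everything below is illustrative, not a definitive implementation: I assume the integers fall in a small fixed range (0–9 here), shift them up by 1 so that 0 can act as a padding token, and introduce a start-of-sequence token for the decoder; the layer sizes are arbitrary.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

VOCAB = 10              # assumed integer range 0..9
PAD, START = 0, 11      # 0 reserved for padding; 11 is an added start token
TOKENS = VOCAB + 2      # shifted integers 1..10, plus PAD and START
HIDDEN = 64             # arbitrary size of the hidden representation
MAX_IN, MAX_OUT = 4, 5  # assumed max prefix / suffix lengths

# Encoder: embed the known prefix and compress it into hidden states.
enc_in = layers.Input(shape=(MAX_IN,))
enc_emb = layers.Embedding(TOKENS, HIDDEN, mask_zero=True)(enc_in)
_, state_h, state_c = layers.LSTM(HIDDEN, return_state=True)(enc_emb)

# Decoder: starts from the encoder's states; teacher-forced during training.
dec_in = layers.Input(shape=(MAX_OUT,))
dec_emb = layers.Embedding(TOKENS, HIDDEN, mask_zero=True)(dec_in)
dec_seq = layers.LSTM(HIDDEN, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = layers.Dense(TOKENS)(dec_seq)  # a score for every possible token

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Toy training pair from the question: prefix [0, 1] -> suffix [2, 4, 2],
# with every integer shifted up by 1 and sequences right-padded with 0.
x_enc = np.array([[1, 2, 0, 0]])         # [0, 1] shifted, padded
x_dec = np.array([[START, 3, 5, 3, 0]])  # decoder input: START, then the suffix
y     = np.array([[3, 5, 3, 0, 0]])      # decoder target: the suffix itself
model.fit([x_enc, x_dec], y, epochs=1, verbose=0)
```

At inference time you would encode the known prefix, give the decoder only the START token, and then loop: take each predicted token and feed it back in as the next decoder input until the array is full.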
Here is an article that provides more detail around encoder-decoder models: https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346
Answered by shepan6 on November 5, 2021
It depends on what these numbers represent.
If there is sequential / time dependence (i.e. later elements depend on the earlier ones), I would suggest an LSTM-RNN.
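For instance, if you reframe the task as predicting just the next integer from the ones seen so far, a single LSTM is enough. This is again only a sketch under assumptions I'm adding myself (Keras, an integer range of 0–9, arbitrary layer sizes):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential, layers

VOCAB = 10  # assumed integer range 0..9

model = Sequential([
    layers.Embedding(VOCAB, 32),  # map each integer to a vector
    layers.LSTM(32),              # summarize the prefix seen so far
    layers.Dense(VOCAB),          # score each candidate next integer
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Toy example from the question: learn that [3, 2, 4, 1] is followed by 4.
model.fit(np.array([[3, 2, 4, 1]]), np.array([4]), epochs=1, verbose=0)
```

Applied repeatedly (predict one element, append it to the prefix, predict again), this fills in the rest of a fixed-size array one element at a time.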
Answered by Derek O on November 5, 2021