Data Science Asked on March 29, 2021
I’m looking for a machine learning model architecture that takes an arbitrary number of inputs and produces one output, much like a GRU or LSTM, except that the order of the items in the input is irrelevant. So f([x1, x2, ..., xn]) = y, where each xk is of shape [i] and y is of shape [j] (not considering the batch dimension), and f([x1, x2, ..., xn]) = f([xn, xn-1, ..., x1]) or any other ordering of the input. In other words, f treats its input as a set, whereas RNNs treat their inputs as a list.
Is there such an architecture?
You can pass each input through the same MLP layer and take an aggregation of the results (e.g. a sum or mean). Since the aggregation is symmetric, the output is independent of the order of the inputs.
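A minimal numpy sketch of this idea (the layer sizes, weights, and the choice of sum-pooling are illustrative assumptions, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each element x_k has i features, output y has j.
i, j, hidden = 4, 3, 8

# Shared one-hidden-layer MLP weights, applied to every element independently.
W1 = rng.normal(size=(i, hidden))
W2 = rng.normal(size=(hidden, j))

def f(xs):
    """xs: array of shape (n, i) -- a set of n elements."""
    h = np.tanh(xs @ W1)      # per-element MLP, shape (n, hidden)
    pooled = h.sum(axis=0)    # symmetric aggregation -> order-invariant
    return pooled @ W2        # map pooled features to output of shape (j,)
```

Because the sum is taken over the element axis before producing the output, permuting the rows of `xs` leaves `f(xs)` unchanged.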
Answered by Karan Dhingra on March 29, 2021
If you use the batch dimension, you have an input of size (B, C). If your linear layer has input_dims=C and output_dims=1, the output will have size (B,). After that, you can concatenate these outputs into a tensor of size (1, B), where B now plays the role of the number of features.
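A numpy sketch of the shapes this answer describes (the dimensions and weights are illustrative assumptions; the linear layer is stood in by a plain matrix product with the bias omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

B, C = 5, 4                    # B elements along the batch dimension, C features each

# Linear layer with input_dims=C, output_dims=1 (bias omitted for brevity).
w = rng.normal(size=(C, 1))

x = rng.normal(size=(B, C))    # input of size (B, C)
out = (x @ w).reshape(-1)      # one scalar per element, shape (B,)
features = out.reshape(1, B)   # concatenated outputs, shape (1, B)
```

Note that `features` still lists the per-element scalars in their original order; only a subsequent symmetric aggregation (as in the other answer) would make the final result order-invariant.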
Answered by Alex on March 29, 2021