An Artificial Neural Network (ANN) with an arbitrary number of inputs and outputs

Data Science Asked on March 25, 2021

I would like to use ANNs for my problem, but the issue is that the number of input and output nodes is not fixed.

I did some Google searching before asking my question and found that an RNN may help with my problem. However, all the examples I've found have a fixed number of input and output nodes.

So I'm looking for a strategy for how to make this work, or at least some examples, preferably in Keras or PyTorch.

More details about my issue:

I have two input lists, where the length of the first one is fixed and equal to two, e.g.:

in_1 = [2,2] 

but the length of the second list is flexible; it can be anything from three to infinity, e.g.:

in_2 = [1,1,2,2]

or

in_2 = [1,1,1,2,2,2,3,3,3]

Also, the input lists depend on each other. The first list gives the dimensions of the output list. So if in_1 = [2,2], the output must be reshapeable to a [2,2] form.

Currently, I'm thinking of combining the two input lists into one:

in = in_1 + in_2 = [2, 2, 1, 1, 2, 2]

Moreover, the output has the same length as the in_2 list, e.g.:

if the input lists are:

in_1 = [2, 2]
in_2 = [1, 1, 2, 2]

the output should be:

out = [1, 2, 1, 2]

Any ideas are welcome!

3 Answers

The answer may depend on the significance of the length of the input vector or how it originates.

However, the simplest solution is usually to know the largest input size and use that as the fixed input length. If a given input is shorter, you can pad it with zeros or an appropriate symbol. So instead of the vectors $[1, 2, 3]$ and $[1, 2, 2, 3]$ you can have the vectors $[1, 2, 3, 0]$ and $[1, 2, 2, 3]$.

The same applies to the output. If the expected outputs are $[1, 2, 1]$ and $[1, 3, 4, 1]$, you can treat the first output as $[1, 2, 1, 0]$.

Isn't this just a hack?

Typically, neural networks do function approximation. Ideally, they take vectors (matrices) as input and produce vectors (matrices) as output. This is why it is always desirable that the size of your input vector be fixed.
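Here is a minimal Keras sketch of this padding approach. The maximum length, the layer sizes, and the mean-squared-error loss are assumptions for illustration, not part of the answer:

```python
import numpy as np
from tensorflow import keras

MAX_LEN = 16  # assumed upper bound on len(in_2); pick it from your data

def pad(seq, max_len=MAX_LEN):
    """Right-pad a variable-length list with zeros to a fixed length."""
    return np.array(seq + [0] * (max_len - len(seq)), dtype="float32")

# One example pair from the question, padded to the fixed length
x_in1 = np.array([[2, 2]], dtype="float32")   # shape (1, 2)
x_in2 = pad([1, 1, 2, 2])[None, :]            # shape (1, MAX_LEN)
y     = pad([1, 2, 1, 2])[None, :]            # shape (1, MAX_LEN)

inp1 = keras.Input(shape=(2,))
inp2 = keras.Input(shape=(MAX_LEN,))
h = keras.layers.Concatenate()([inp1, inp2])
h = keras.layers.Dense(64, activation="relu")(h)
out = keras.layers.Dense(MAX_LEN)(h)          # output is padded the same way as the target

model = keras.Model([inp1, inp2], out)
model.compile(optimizer="adam", loss="mse")
model.fit([x_in1, x_in2], y, epochs=1, verbose=0)
```

At prediction time you would truncate the padded output back to the length implied by in_1.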

Correct answer by Dipan Mehta on March 25, 2021

Knowing that the first list is pretty much invariant (it just describes a certain geometry), you could also try creating a different, specialized NN for every distinct in_1 configuration and feed the network with in_2 only.

So in_1 could drive different networks, i.e.:

in_1 = [1,1] --> NN #1: (n1)          --> (o1)
in_1 = [2,1] --> NN #2: (n1,n2)       --> (o1,o2)
in_1 = [2,2] --> NN #3: (n1,n2,n3,n4) --> (o1,o2,o3,o4)

As a first step you determine the configuration (e.g. by creating a dict) and then train/feed the specialized networks accordingly.
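A minimal sketch of that dispatch idea in Keras follows. The layer sizes, the loss, and the assumption that the network for in_1 = [r, c] takes r*c inputs and outputs are illustrative choices, not part of the answer:

```python
import numpy as np
from tensorflow import keras

def build_net(size):
    """A small dense network whose input and output sizes both match len(in_2)."""
    inp = keras.Input(shape=(size,))
    h = keras.layers.Dense(32, activation="relu")(inp)
    out = keras.layers.Dense(size)(h)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# One specialized network per distinct in_1 configuration (the three cases above)
configs = [(1, 1), (2, 1), (2, 2)]
networks = {cfg: build_net(cfg[0] * cfg[1]) for cfg in configs}

def predict(in_1, in_2):
    """Dispatch in_2 to the network matching the in_1 configuration."""
    model = networks[tuple(in_1)]
    return model.predict(np.array([in_2], dtype="float32"), verbose=0)[0]

# e.g. predict([2, 2], [1, 1, 2, 2]) routes the input to NN #3
```

Each network is trained separately on the examples that share its in_1 configuration.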

Answered by Jürgen Schwietering on March 25, 2021

I think you might have misunderstood the fixed number of inputs for the RNN. This is the number of inputs per timestep. All your examples have a fixed number of inputs per timestep: 1! You feed them one at a time to your neural network, finishing with a special "end" token (you could always have a second input for this). Teach it to give no output until it sees the end token, and then to output the components of the result one at a time, ending with a special end output token.
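A minimal sketch of that one-value-per-timestep scheme in Keras is below. The -1 end token, the per-timestep sample weights used to ignore the "no output yet" steps, and the LSTM size are all assumptions, and the end-of-output token is omitted for brevity:

```python
import numpy as np
from tensorflow import keras

END = -1.0  # hypothetical "end of input" marker

def make_sequence(in_2, out):
    """One value per timestep: the input list, the end token,
    then zero placeholders while the network emits the output."""
    steps  = in_2 + [END] + [0.0] * len(out)
    target = [0.0] * (len(in_2) + 1) + out                  # no output until END is seen
    weight = [0.0] * (len(in_2) + 1) + [1.0] * len(out)     # ignore the loss before END
    x = np.array(steps,  dtype="float32")[None, :, None]    # shape (1, T, 1)
    y = np.array(target, dtype="float32")[None, :, None]
    w = np.array(weight, dtype="float32")[None, :]          # per-timestep sample weights
    return x, y, w

x, y, w = make_sequence([1, 1, 2, 2], [1, 2, 1, 2])

inp = keras.Input(shape=(None, 1))                 # variable-length sequence, one input per step
h = keras.layers.LSTM(32, return_sequences=True)(inp)
out = keras.layers.TimeDistributed(keras.layers.Dense(1))(h)

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, sample_weight=w, epochs=1, verbose=0)
```

Because the input dimension per timestep is fixed (here 1), the same network handles sequences of any length.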

Answered by Arthur Tacca on March 25, 2021
