# How to understand the network structure in this paper (a multiple time-series fusion model)

Data Science Asked by Mithril on December 5, 2020

I want to implement this paper: https://dl.acm.org/doi/pdf/10.1145/3269206.3271794

Its structure is shown in Figure 1 of the paper.

But I can’t understand how to generate the Multiple Resolution Tensor R.

I understand all the steps except the details of step 3:

1. We have 3 time series when α, β ∈ {day: 1, week: 7}, so dr = 2. From the paper:

   > We can generate dr·(dr + 1)/2 unique time series with different configurations of {⟨α, β⟩ | α ∈ α, β ∈ β, α ≥ β} to represent multiple periodic time series distributions, in which dr = min(|α|, |β|).
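For concreteness, the configuration set from that quote can be enumerated directly (a minimal sketch using the {day: 1, week: 7} resolutions from the example above):

```python
# Enumerate all <alpha, beta> pairs with alpha >= beta.
resolutions = {'day': 1, 'week': 7}  # alpha and beta both range over these

configs = [(a, b)
           for a in resolutions.values()
           for b in resolutions.values()
           if a >= b]

d_r = len(resolutions)  # d_r = min(|alpha|, |beta|) = 2
print(configs)                                # [(1, 1), (7, 1), (7, 7)]
print(len(configs) == d_r * (d_r + 1) // 2)   # True: 3 unique series
```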

2. We need 3 GRUs/LSTMs to generate 3 hidden states of length ds.

3. Generate the Multiple Resolution Tensor R. From the paper:

   > Inspired by these recent advances, we propose a convolutional fusion framework to summarize a multi-resolution tensor $$R ∈ \mathbb{R}^{|α| × |β| × d_s}$$ into a conclusive representation. Here we use R to represent a collection of time-evolving patterns generated from multiple time resolutions. Namely, $$R_{i,j,1:d_s}$$ denotes a learned sequence pattern representation $$h_t^{α_i, β_j}$$ at time t w.r.t. temporal resolution αi and interval resolution βj as described in Eq. 2. We further apply a mirror padding along the diagonal to complete the tensor. The generation process of R is shown in Figure 1.(b).
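One way I read this (my interpretation, not the authors' code): the dr·(dr + 1)/2 hidden states fill the lower triangle of a |α| × |β| grid, and mirror padding copies each entry across the diagonal so the full tensor is defined:

```python
import numpy as np

d_r, d_s = 2, 32  # two resolutions, hidden state length 32

# Hypothetical hidden states h[(i, j)] from the GRUs, defined only
# for alpha_i >= beta_j (lower triangle including the diagonal).
h = {(i, j): np.random.randn(d_s)
     for i in range(d_r) for j in range(i + 1)}

# Fill R in R^{|alpha| x |beta| x d_s}; mirror across the diagonal.
R = np.zeros((d_r, d_r, d_s))
for i in range(d_r):
    for j in range(d_r):
        R[i, j] = h[(i, j)] if i >= j else h[(j, i)]  # mirror padding

print(R.shape)                     # (2, 2, 32)
print(np.allclose(R[0, 1], R[1, 0]))  # True: mirrored entries match
```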

4. Apply CONV2D
5. Apply CONV2D
6. Apply FC

## The ambiguous part is step 3:

1. To me, $$R ∈ \mathbb{R}^{|α| × |β| × d_s}$$ has shape [dr*(dr + 1)/2, ds], so I have to add one dimension to get [dr*(dr + 1)/2, ds, 1] before it can be passed to Conv2D. Is this right?
2. But Figure 1 draws ds (the GRU hidden state length) as the channel dimension, and it also shows the channel count being halved in every Conv2D layer.
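If the figure is taken literally, ds is the channel axis, so R keeps shape [|α|, |β|, ds] = [2, 2, 32] and goes to Conv2D directly, with the filter count halving the channels at each layer. A sketch of that reading (the kernel sizes and padding are my guesses, not from the paper):

```python
import tensorflow as tf
from tensorflow.keras import layers

d_r, d_s = 2, 32
R = tf.random.normal((1, d_r, d_r, d_s))  # a batch of one tensor R

# ds acts as the channel axis; each Conv2D halves the channel count.
x = layers.Conv2D(filters=d_s // 2, kernel_size=2, padding='same',
                  activation='relu')(R)       # -> (1, 2, 2, 16)
x = layers.Conv2D(filters=d_s // 4, kernel_size=2, padding='same',
                  activation='relu')(x)       # -> (1, 2, 2, 8)
out = layers.Dense(1)(layers.Flatten()(x))    # conclusive representation
print(out.shape)  # (1, 1)
```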

What is the correct way to generate Multiple Resolution Tensor R ?

## PS

I wrote a dummy model; you can reuse it:

```python
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

data = pd.DataFrame(np.random.uniform(size=(1000, 3)),
                    columns=['Sales', 'SalesDiff7', 'SalesAggMean7'])

multi_inputs = []
multi_outputs = []
window_size = 30  # the lookback window length k; any value works here

# One LSTM per time series, each producing a hidden state of length 32
for i in range(data.shape[1]):
    ti = keras.Input(shape=(window_size, 1), name=f't{i}')
    tlstm = layers.LSTM(32)(ti)
    multi_inputs.append(ti)
    multi_outputs.append(tlstm)

# Stack the 3 hidden states into [batch, 3, 32], then add a trailing
# channel dimension so Conv2D accepts it: [batch, 3, 32, 1]
r = tf.stack(multi_outputs, axis=-2)
r = tf.expand_dims(r, -1)
# kernel_size was missing from my snippet; (2, 2) is just a placeholder
conv1 = layers.Conv2D(filters=16, kernel_size=(2, 2),
                      activation='relu')(r)
conv2 = layers.Conv2D(filters=8, kernel_size=(2, 2),
                      activation='relu')(conv1)
```