
How to get an interpolation weight from a mathematical definition

Signal Processing · Asked on November 5, 2021

It was recently explained to me that a “nearest neighbor” kernel for 1D interpolation can be implemented in NumPy like this:

import numpy

def nearest(delta):
    delta = abs(delta)
    if delta <= 0.5:
        return numpy.asarray([0, 1])
    else:
        return numpy.asarray([1, 0])

Whereas the mathematical definition of nearest neighbor is

$h_{nn}(\delta) =
\begin{cases}
1 & \text{if } -0.5 \le \delta < 0.5 \\
0 & \text{otherwise}
\end{cases}$

Similarly, linear interpolation can be expressed in NumPy as

def linear(delta):
    delta = abs(delta)
    return [delta, 1 - delta]

But the mathematical definition for it is

$h_{lin}(\delta) =
\begin{cases}
1-|\delta| & \text{if } 0 \le |\delta| < 1 \\
0 & \text{if } 1 \le |\delta|
\end{cases}$

My question is how to form a kernel of weights from these mathematical definitions, since the code that accompanies them does not seem to paint the same picture as the definitions do.

One Answer

I'll give a concrete example. Let's say you have a signal that was sampled every second, so the sampling frequency is $f = \frac{1}{1\,\text{s}} = 1\,\text{Hz}$.

Time: $T = \begin{bmatrix}0 & 1 & 2 & 3\end{bmatrix}$

Value: $X = \begin{bmatrix}1 & 2 & 3 & 4\end{bmatrix}$

We want to increase the sampling frequency by a factor of $2$, i.e. to $2\,\text{Hz}$ (a sample every $0.5$ seconds).

Time: $\tilde{T} = \begin{bmatrix}0 & 0.5 & 1 & 1.5 & 2 & 2.5 & 3\end{bmatrix}$

Value: $\tilde{X} = \begin{bmatrix}1 & x_1 & 2 & x_2 & 3 & x_3 & 4\end{bmatrix}$

$x_1, x_2, x_3$ are determined by your interpolation functions $h_{nn}(\delta)$ and $h_{lin}(\delta)$. Here $\delta$ is time and both functions define intervals. Each value is given by $S(\delta) = \sum_{i=0}^{n-1} X_i \cdot h_{nn}(\delta-i)$. For nearest neighbor, change the interval to $0 \leq \delta < 1$. Then $S(0) = S(0.5) = X_0 \cdot 1 + X_1 \cdot 0 + \cdots = X_0$. See also cardinal B-splines.
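
To make that sum concrete, here is a minimal NumPy sketch (the helper names h_nn, h_lin, and S are just illustrative, chosen to mirror the formulas):

import numpy as np

def h_nn(delta):
    # 1 on the shifted interval 0 <= delta < 1, else 0
    return np.where((0 <= delta) & (delta < 1), 1.0, 0.0)

def h_lin(delta):
    # triangle kernel: 1 - |delta| for |delta| < 1, else 0
    return np.where(np.abs(delta) < 1, 1 - np.abs(delta), 0.0)

def S(delta, X, h):
    # S(delta) = sum_i X[i] * h(delta - i)
    i = np.arange(len(X))
    return np.sum(X * h(delta - i))

X = np.array([1.0, 2.0, 3.0, 4.0])
print(S(0.0, X, h_nn), S(0.5, X, h_nn))  # 1.0 1.0 (both pick X_0)
print(S(0.5, X, h_lin))                  # 1.5 (average of X_0 and X_1)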

Downsampling by $M$ requires a strided convolution

$$y[n] = \sum_k x[nM - k]\,h[k]$$

while upsampling needs a fractionally strided convolution, which is also called a transposed convolution (see Stack Exchange):

$$y[j + nM] = \sum_k x[n-k]\,h[j+kM], \qquad j = 0, \dots, M-1$$
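
As a sanity check, here is a direct, unoptimized translation of both formulas into NumPy loops. The names downsample and upsample are just illustrative, and indices outside the signal are treated as zero, so boundary samples differ slightly from the padded ConvTranspose1d result below:

import numpy as np

def downsample(x, h, M):
    # y[n] = sum_k x[nM - k] * h[k]
    y = np.zeros((len(x) + M - 1) // M)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n * M - k < len(x):
                y[n] += x[n * M - k] * h[k]
    return y

def upsample(x, h, M):
    # y[j + nM] = sum_k x[n - k] * h[j + kM],  j = 0, ..., M-1
    y = np.zeros(len(x) * M)
    for n in range(len(x)):
        for j in range(M):
            for k in range(len(h)):
                if n - k >= 0 and j + k * M < len(h):
                    y[j + n * M] += x[n - k] * h[j + k * M]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
print(downsample(x, np.array([1.0]), 2))           # [1. 3.]
print(upsample(x, np.array([0.0, 1.0, 1.0]), 2))   # [0. 1. 1. 2. 2. 3. 3. 4.]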

A transposed convolution with kernel size 3, stride 2, and padding 1 is equivalent to inserting one zero between input samples, padding by 1 on each side, and convolving with stride 1 (a check of this equivalence follows the example below).

The kernel is $\begin{bmatrix}1 & 1 & 0\end{bmatrix}$ or $\begin{bmatrix}0 & 1 & 1\end{bmatrix}$ (either cross-correlation or convolution) for nearest-neighbor interpolation (to double the frequency):

from torch.nn import ConvTranspose1d
import torch
import numpy as np

def interpolate_nn(X):
    X = torch.from_numpy(X)
    with torch.no_grad():
        # Fractionally strided (transposed) convolution: kernel 3, stride 2, padding 1
        op = ConvTranspose1d(in_channels=1, out_channels=1,
                             kernel_size=3, stride=2,
                             bias=False, dilation=1, padding=1)
        # Nearest-neighbor kernel in convolution orientation
        op.weight.data = torch.tensor([0, 1, 1]).view(1, 1, -1).float()
        return op(X.view(1, 1, -1).float()).numpy().flatten()

X = np.array([1, 2, 3, 4])
print(interpolate_nn(X))

The result is [1. 1. 2. 2. 3. 3. 4.]
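
The zero-insertion equivalence mentioned above can be verified directly: stuff one zero between the input samples, pad by one, and run an ordinary stride-1 convolution. This is a minimal sketch under that assumption; note that Conv1d computes cross-correlation, so the flipped kernel [1, 1, 0] is used:

from torch.nn import Conv1d
import torch
import numpy as np

def interpolate_nn_zerostuff(X):
    X = torch.from_numpy(X).float()
    up = torch.zeros(2 * len(X) - 1)
    up[::2] = X  # zero-stuffed input: [1, 0, 2, 0, 3, 0, 4]
    with torch.no_grad():
        op = Conv1d(in_channels=1, out_channels=1, kernel_size=3,
                    stride=1, padding=1, bias=False)
        # Conv1d cross-correlates, so flip [0, 1, 1] to [1, 1, 0]
        op.weight.data = torch.tensor([1, 1, 0]).view(1, 1, -1).float()
        return op(up.view(1, 1, -1)).numpy().flatten()

X = np.array([1, 2, 3, 4])
print(interpolate_nn_zerostuff(X))  # [1. 1. 2. 2. 3. 3. 4.] -- same as above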

For linear interpolation use $\begin{bmatrix}0.5 & 1 & 0.5\end{bmatrix}$. The result is [1. 1.5 2. 2.5 3. 3.5 4.]
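
As a quick cross-check, plain linear interpolation with NumPy's np.interp onto the half-second grid gives the same values:

import numpy as np

T = np.array([0, 1, 2, 3])
X = np.array([1, 2, 3, 4])
T_new = np.arange(0, 3.5, 0.5)   # [0, 0.5, 1, 1.5, 2, 2.5, 3]
print(np.interp(T_new, T, X))    # [1. 1.5 2. 2.5 3. 3.5 4.]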

Compare it with your $h_{lin}(\delta)$:

$\begin{align*}
S(0) &= X_0 h_{lin}(0 - 0) + X_1 h_{lin}(0 - 1) + \cdots = X_0(1 - |0|) = X_0 \\
S(0.5) &= X_0 h_{lin}(0.5 - 0) + X_1 h_{lin}(0.5 - 1) + \cdots = 0.5X_0 + 0.5X_1 \\
S(1) &= X_0 h_{lin}(1 - 0) + X_1 h_{lin}(1 - 1) + \cdots = 1 \cdot X_1 \\
&\;\;\vdots
\end{align*}$

Answered by displayname on November 5, 2021
