
Computationally Feasible Wavelet transform algorithms for 1-D data with many samples

Computational Science Asked by Manuel Jenkin on May 13, 2021

I am doing multiresolution analysis on a 1-D dataset with a large number of samples (a few million). Currently I am experimenting with PyWavelets in Python, but the computation becomes incredibly intensive beyond about a quarter million samples and blows up as I increase the sample size (I am not sure of the complexity order, but I believe it is O(n^2)); it is near impossible with more than a couple of million samples. I also run out of memory beyond a certain point (easily exceeding 10 GB just past a million samples). Is it possible to do the task iteratively, without holding the entire computation in memory?
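As a sanity check on the memory question: a full CWT materializes a len(scales) x len(signal) coefficient array, which is usually what blows past 10 GB. Below is a minimal pure-NumPy sketch (not PyWavelets' API) of processing one scale at a time and reducing each row before the next; the `ricker` helper, kernel length, and the energy reduction are illustrative choices.

```python
import numpy as np

def ricker(points, a):
    # Mexican hat (Ricker) wavelet sampled at `points` points;
    # a standard closed form, left unnormalized for simplicity.
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_scale_by_scale(x, widths):
    # Yield one coefficient row at a time instead of materializing
    # the full len(widths) x len(x) array. Peak memory is a single
    # row; the caller reduces or writes out each row before the next.
    for w in widths:
        kernel = ricker(min(10 * int(w) + 1, len(x)), w)
        yield w, np.convolve(x, kernel, mode='same')

x = np.random.randn(100_000)
energies = {w: np.sum(row ** 2)          # reduce, then discard the row
            for w, row in cwt_scale_by_scale(x, [1, 2, 4, 8, 16])}
```

The trade-off is repeated passes over the signal instead of one big allocation, which suits streaming each row to disk for later visualization.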

I am looking for alternatives that are more computationally feasible. I am currently using the continuous wavelet transform (CWT), but the discrete wavelet transform (DWT) would also be fine for the most part. As I understand it, the DWT skips points as the wavelet stretches (moves to lower frequency), so there are fewer coefficients at the lower-frequency decomposition levels, whereas the CWT produces the same number of coefficients for every daughter (and mother) wavelet. Please correct me if I'm wrong (my reference is another Stack Exchange post: https://dsp.stackexchange.com/questions/8009/using-continuous-verses-discrete-wavelet-transform-in-digital-applications). For the DWT, I came across an algorithm called the fast wavelet transform (analogous to what the fast Fourier transform is to the discrete Fourier transform, and related to filter-bank theory), and found one open-source implementation: https://ltfat.github.io/doc/wavelets/fwt_code.html. However, I cannot tell which types of wavelets it supports, or whether the fast wavelet transform is a category in itself.
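The halving of coefficient counts described above can be seen directly in a Mallat-style fast wavelet transform. This is a sketch with the Haar filter pair (the simplest case), not a general implementation; each level costs O(current length), so the whole cascade is O(n) * (1 + 1/2 + 1/4 + ...) = O(n).

```python
import numpy as np

def haar_dwt(x, levels):
    # Mallat cascade with the orthonormal Haar pair: at each level,
    # split the approximation into a new (half-length) approximation
    # and a detail band, then recurse on the approximation only.
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: 2 * (len(a) // 2)]              # drop an odd tail sample
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(detail)
        a = approx
    coeffs.append(a)
    return coeffs[::-1]   # [cA_n, cD_n, ..., cD_1], like pywt.wavedec

c = haar_dwt(np.arange(16.0), 3)
print([len(v) for v in c])   # [2, 2, 4, 8]: counts halve per level
```

This is why the DWT stays tractable at millions of samples while a dense CWT does not: the total number of coefficients is about n, not n times the number of scales.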

I would like to know more options that I could try (open to any language or toolkit as long as it is opensource). Also open to GPU compute (openCL, Vulkan, any open source or even CUDA as I have a Nvidia Card). Same for multithreading the task (I feel this would be a feasible approach, for even CWT considering I could run each convolution in a separate thread, except for the memory issues). If there are any in progress projects I could run as beta test, and possibly also contribute to the code, I’ll be happy with that as well.
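The one-thread-per-convolution idea is straightforward to prototype, because the scales of a CWT are independent of each other. A hedged pure-NumPy sketch with a standard-library pool (the `ricker` helper and pool size are illustrative); a `ProcessPoolExecutor` is a drop-in alternative if the GIL becomes the bottleneck, at the cost of copying the signal to each worker:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def ricker(points, a):
    # Mexican hat (Ricker) wavelet, unnormalized
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_parallel(x, widths, workers=4):
    # Each scale's convolution reads the same input and writes its
    # own output row, so the scales can be farmed out to a pool
    # with no synchronization beyond collecting the results.
    def one_scale(w):
        return np.convolve(x, ricker(10 * int(w) + 1, w), mode='same')
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one_scale, widths))

rows = cwt_parallel(np.random.randn(50_000), [1, 2, 4, 8])
```

Note this parallelizes time, not memory: all rows still end up resident, so for very long signals it should be combined with a per-row reduction or writes to disk.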

The signal is quite random and its underlying properties are not well known, so I will be experimenting with different wavelets for analysis, which means many iterations; even the visualization (interpolation and intensity linearity) will need rigorous analysis. So far I have tried Gaussian-derivative and Mexican-hat wavelets in Python. On a side note, I would also be interested in resources on the mathematical/signal resolution or optimal signal properties of different wavelets. I can match a few intuitively (Mexican-hat wavelets for impulsive signals) but would like to explore this more mathematically; it would help me choose a more suitable wavelet quickly and save computation iterations.
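One cheap way to narrow down candidate wavelets before committing to full-size runs is to score how concentrated each one's coefficients are on a short excerpt of the signal: sparser coefficients suggest a better match (e.g., the Mexican hat on impulsive data). The L2/L1 sparsity score and the wavelet formulas below are illustrative choices, not a standard recipe.

```python
import numpy as np

def mexican_hat(points, a):
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def gauss_deriv(points, a):
    # first derivative of a Gaussian
    t = np.arange(points) - (points - 1) / 2.0
    return -t / a ** 2 * np.exp(-(t ** 2) / (2 * a ** 2))

def sparsity(coefs):
    # L2/L1 ratio: near 1 means energy concentrated in few
    # coefficients; near 1/sqrt(n) means spread out evenly.
    return np.linalg.norm(coefs) / (np.abs(coefs).sum() + 1e-12)

excerpt = np.zeros(4096)
excerpt[2048] = 1.0                      # a toy impulsive signal
scores = {name: sparsity(np.convolve(excerpt, wav(65, 8.0), mode='same'))
          for name, wav in [('mexh', mexican_hat), ('gaus1', gauss_deriv)]}
```

Running the screen on a few thousand samples per candidate keeps the cost trivial compared with a full multi-million-sample transform.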

Side note: I am also curious which algorithm PyWavelets is based on.

Edit: I came across a useful resource: https://github.com/PyWavelets/pywt/issues/371. It mentions two libraries: PDWT, which runs on CUDA, and libdwt, which supports SSE (SIMD instructions on the CPU).
