Data Science Asked on November 30, 2020
Imagine you have 2 people at 2 different microphones but in the same room. Each microphone is going to pick up some sound from the other person. Is there a good neural network based approach to isolating the signals so that the sound from each microphone only captures 1 person?
I remember hearing about a solution to this a few years back, but I'm not sure I remember it correctly.
I ask because a similar problem was mentioned to me today. During EEG brain-wave data collection, each electrode can pick up signal from multiple sources in the brain. In that field, the goal is to isolate the sources and reduce the "noise" contributed by other brain areas, and it's common to use ICA for this task. The problem with ICA is that the post-processing stage is very time-consuming, so I'm wondering if there's a better ANN/DNN approach that could solve the problem more efficiently, or perhaps with better accuracy.
There is a class of neural networks designed specifically to "clean" observations of noise: Denoising Autoencoders. These are Autoencoders that learn to map a noisy signal to its clean counterpart. They are typically used to clean image or time-series data, but they can in principle be applied to any task.
I don't know much about your data or your problem specifically, so I can't say how hard it would be to gather enough training data, but it could be worth a try.
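As a rough illustration of the idea, here is a minimal denoising-autoencoder sketch in PyTorch. All details (window length, layer sizes, the synthetic sine-wave data, the number of training steps) are illustrative assumptions, not a tuned model for EEG or audio:

```python
import torch
import torch.nn as nn

WINDOW = 128  # length of one signal window (illustrative choice)

class DenoisingAE(nn.Module):
    """Maps a noisy 1-D signal window to a cleaned reconstruction."""
    def __init__(self, window=WINDOW, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(),
            nn.Linear(64, latent), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 64), nn.ReLU(),
            nn.Linear(64, window),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Synthetic training pairs: noisy window -> clean window.
clean = torch.sin(torch.linspace(0, 8 * torch.pi, WINDOW)).repeat(32, 1)
noisy = clean + 0.3 * torch.randn_like(clean)

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(50):  # a few steps, for illustration only
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)  # reconstruct the clean target
    loss.backward()
    opt.step()

denoised = model(noisy)
```

The key point is the training objective: the input is the corrupted signal, but the reconstruction loss is computed against the clean version, so the network learns to strip the corruption rather than simply copy its input.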
Answered by Leevo on November 30, 2020
Take a look at this.
It is not a DNN but a purely mathematical method, applicable when recordings from multiple channels are available.
DNNs have been used for single-channel input, but they have to be trained on the kinds of signals you want to separate.
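When multiple channels are available, the classic math-only route is blind source separation via ICA, exactly the technique the question mentions. Below is a minimal sketch with scikit-learn's `FastICA` on a synthetic two-microphone "cocktail party"; the mixing matrix and source waveforms are made-up illustrations:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two "speakers" recorded by two microphones; each mic hears a mix of both.
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * 1.0 * t)           # speaker 1: sinusoid
s2 = np.sign(np.sin(2 * np.pi * 0.3 * t))  # speaker 2: square wave
S = np.c_[s1, s2]                          # true sources, shape (2000, 2)

A = np.array([[1.0, 0.5],   # mic 1 hears mostly speaker 1
              [0.4, 1.0]])  # mic 2 hears mostly speaker 2
X = S @ A.T                 # the two microphone recordings

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered sources (up to scale, sign, order)
```

Note that ICA can only recover the sources up to permutation, sign, and scale, and it needs at least as many microphones as sources; the single-microphone case is where the trained-DNN approaches come in.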
Answered by Simone Genta on November 30, 2020