
Is using tensorflow for Spiking neural networks a "good" idea?

Psychology & Neuroscience Asked on October 22, 2021

I recently started working with spiking neural networks (SNNs) and was hoping for some input from others. I saw there are many libraries/platforms made specifically for working with SNNs (Brian, PyNN, NEURON, etc.), but I was wondering whether there are any trade-offs in using plain old, popular TensorFlow for SNN implementation. Is there a reason this is under-researched? Has anyone tried it, or is anyone working on it?

Thanks for any feedback anyone can offer.

One Answer

One of the key issues with spiking neural networks is that, while SNNs are computationally more powerful than the so-called second-generation neural networks (artificial neural networks), they are also much more computationally expensive. You need a lot of computational power to use them effectively, which limits their use by the wider data science community and therefore the development of the method, which remains mostly confined to research groups. Many of those groups are working on the efficiency problem, while IBM, amongst others, is working on the hardware side.
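To make the cost concrete, here is a minimal sketch (not from the answer above) of a layer of leaky integrate-and-fire neurons written in plain TensorFlow. The sizes, time constant, and threshold are arbitrary illustrative values; the point is only that an SNN has to be stepped through time, so one "forward pass" involves many matrix multiplies where a comparable ANN layer needs a single one.

```python
import tensorflow as tf

# Illustrative sizes and LIF parameters (arbitrary, not from the answer)
n_in, n_out, n_steps = 100, 10, 200
tau = 0.9        # membrane leak factor per timestep
v_thresh = 1.0   # firing threshold

w = tf.random.normal([n_in, n_out], stddev=0.1)                               # input weights
spikes_in = tf.cast(tf.random.uniform([n_steps, n_in]) < 0.05, tf.float32)    # random input spike trains

v = tf.zeros([n_out])             # membrane potentials
spike_counts = tf.zeros([n_out])  # output spike counts

for t in range(n_steps):
    i_t = tf.matmul(spikes_in[t:t + 1], w)[0]     # input current this timestep
    v = tau * v + i_t                             # leaky integration
    spikes = tf.cast(v >= v_thresh, tf.float32)   # threshold crossing -> spike
    v = v * (1.0 - spikes)                        # reset spiking neurons (reset-to-zero assumed)
    spike_counts += spikes

print("output spike counts:", spike_counts.numpy())
```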

Another reason for the poor adoption and development of SNNs is that, at the time of writing, they are not able to conduct supervised learning effectively. So while SNNs are good at specific tasks for which they have been hard-coded, they are not generally applicable to a wider range of problems without considerable coding. By comparison, ANNs are highly versatile and easily applicable to a wide range of data science problems. It has been suggested elsewhere that for SNNs to become general purpose, we would need to mimic learning in the mammalian brain more accurately in order to conduct supervised learning tasks. There are recent steps in the right direction that take a simplified approach by adding a reward process into the network, which in my opinion (with a background in 'reward systems') is a good start. But this is all very recent compared to ANNs, and the 'further research is required' slogan strongly applies before SNNs are more widely adopted within both data science and computational psychology/neuroscience.

That said, as you can see here, the TensorFlow community is starting to work on utilising SNNs. Google (TensorFlow) itself is well on board with studying SNNs and is one of the strongest leaders in the field of machine learning, so you should expect to see a lot more work from Google here, which will spread to the wider data science community. With that in mind, I cannot see a reason not to use TensorFlow. It is, however, worth looking into IBM's ML offerings too, given their interest in developing hardware for SNNs, and how Snap ML outperforms TensorFlow.
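One common workaround for the supervised-learning problem, in TensorFlow or any autodiff framework, is a "surrogate gradient": the spike nonlinearity is a step function whose true gradient is zero almost everywhere, so the backward pass substitutes a smooth pseudo-derivative. The sketch below is a hedged illustration of that general technique using tf.custom_gradient, not code from any particular SNN library; the fast-sigmoid surrogate and the factor of 10 are just typical example choices.

```python
import tensorflow as tf

@tf.custom_gradient
def spike(v_minus_thresh):
    # Forward pass: hard threshold (1 if the membrane potential exceeds threshold)
    out = tf.cast(v_minus_thresh > 0.0, tf.float32)

    def grad(dy):
        # Backward pass: derivative of a fast sigmoid as a surrogate for the step function
        surrogate = 1.0 / (1.0 + 10.0 * tf.abs(v_minus_thresh)) ** 2
        return dy * surrogate

    return out, grad

# Usage: gradients now flow through the spiking nonlinearity, so such a layer
# can sit inside an ordinary tf.GradientTape training loop.
v = tf.Variable([[-0.2, 0.5, 1.3]])
with tf.GradientTape() as tape:
    s = spike(v - 1.0)          # 1.0 acting as the firing threshold
    loss = tf.reduce_sum(s)
print(tape.gradient(loss, v))
```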

In summary, SNNs are computationally powerful in theory but computationally expensive compared to ANNs with current technology. SNNs also have a supervised learning problem that needs to be solved before they can be used more generally and adopted more widely; wider adoption would in turn drive more development. SNNs are likely part of the future of machine learning as research and technology develop, but they are currently mostly limited to academic investigation and understanding.

Answered by Comte on October 22, 2021
