Computational Science Asked by Venkataram Sivaram on August 12, 2020
I asked this question on the Computer Science Stack Exchange (https://cs.stackexchange.com/questions/128710/faster-computation-of-ke-x-h2), but it appears to be more appropriate for Computational Science Stack Exchange.
Essentially, I want to compute $$f(x) = \sum_{i=0}^{n} k_i e^{-(x - h_i)^2},$$ where $n \geq 0$ and the $k_i$ and $h_i$ are real numbers, for various $x.$ On average, I would expect $x$ to lie between the minimum and maximum $h_i,$ $x \in (\epsilon + \min h_i, \epsilon + \max h_i).$
I want to evaluate this function without having to repeatedly call $\exp(x).$ Is there a way to compress this series?
If it boils down to approximating $\exp(x),$ then I would like to note that polynomial approximations will not work.
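For reference, the straightforward evaluation I am doing now looks something like the sketch below (a minimal NumPy version, with the $k_i,$ $h_i,$ and evaluation points assumed to be stored in arrays; the names and sizes are only illustrative). It spends one exponential per term per evaluation point, which is exactly the work I would like to avoid.

```python
import numpy as np

def f_direct(x_points, k, h):
    """Direct evaluation of f(x) = sum_i k_i * exp(-(x - h_i)^2).

    Computes one exponential per (term, evaluation point) pair,
    i.e. O(n * m) work for n + 1 terms and m evaluation points.
    """
    k = np.asarray(k, dtype=float)
    h = np.asarray(h, dtype=float)
    out = np.empty(len(x_points))
    for j, x in enumerate(x_points):
        out[j] = np.sum(k * np.exp(-(x - h) ** 2))
    return out

# Illustrative sizes only: 1000 terms, 500 evaluation points in [min h, max h].
rng = np.random.default_rng(0)
h = rng.uniform(0.0, 10.0, size=1000)
k = rng.normal(size=1000)
x_points = np.linspace(h.min(), h.max(), 500)
values = f_direct(x_points, k, h)
```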
Depending on how large $n$ can get and how many evaluation points $x$ you wish to use, this summation problem is well-suited to fast multipole methods (FMMs); see, for instance, the black-box FMM, which only requires you to specify the kernel function you want to use. In your case, it is a simple Gaussian kernel.
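To make the connection to an FMM concrete: your sum is the dense matrix-vector product $f = Kk$ with kernel matrix $K_{ji} = e^{-(x_j - h_i)^2},$ and that dense product is what an FMM replaces with a hierarchical low-rank approximation, bringing the cost down from $O(nm)$ toward roughly $O(n + m)$ at a prescribed accuracy. The sketch below (NumPy assumed; the function and array names are mine) writes out the dense product explicitly; it is only useful as an accuracy reference on small problems, with an FMM library taking its place for large ones.

```python
import numpy as np

def gaussian_kernel_matvec(x_points, k, h):
    """Dense reference for f = K @ k with K[j, i] = exp(-(x_j - h_i)^2).

    This is the O(n * m) Gaussian-kernel summation that an FMM approximates
    hierarchically; use it only to validate the fast method for small n, m.
    """
    x = np.asarray(x_points, dtype=float)[:, None]  # shape (m, 1)
    hh = np.asarray(h, dtype=float)[None, :]        # shape (1, n)
    K = np.exp(-(x - hh) ** 2)                      # full (m, n) kernel matrix
    return K @ np.asarray(k, dtype=float)
```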
Answered by smh on August 12, 2020