Data Science: Asked by Mikhail Genkin on June 17, 2021
In my research, I want to use a meaningful and computationally tractable distance between two time-dependent probability distributions. For stationary distributions, several distance measures are available, such as the KL divergence, the JS divergence, and the Wasserstein distance.

Is anyone aware of a similar distance for two time-dependent distributions $p_1(x,t)$ and $p_2(x,t)$?
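One common way to lift a static distance to the time-dependent case (a sketch of one approach, not something the question prescribes) is to apply it slice by slice in $t$ and then take either a supremum or an integral over $t$. A minimal sketch, assuming `p1` and `p2` are NumPy arrays of shape `(n_t, n_x)` holding the two densities on a shared $x$-grid:

```python
# Minimal sketch: lift a static distance (here the Jensen-Shannon distance
# from SciPy) to time-dependent densities by applying it per time slice.
from scipy.spatial.distance import jensenshannon

def sup_js(p1, p2):
    """sup over t of JS(p1(., t), p2(., t)); jensenshannon normalises each slice."""
    return max(jensenshannon(a, b) for a, b in zip(p1, p2))

def integrated_js(p1, p2, dt):
    """Riemann-sum approximation of the integral over t of JS(p1(., t), p2(., t))."""
    return dt * sum(jensenshannon(a, b) for a, b in zip(p1, p2))
```

Any of the static distances mentioned above could be substituted for the per-slice distance.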
In my problem, both distributions are solutions of non-stationary Fokker-Planck equations:

$$\frac{\partial p_1}{\partial t} = \left(-\frac{\partial}{\partial x} F_1(x) + \frac{\partial^2}{\partial x^2} D_1(x)\right) p_1(x,t), \qquad p_1(x,0) = p_{10}(x),$$

$$\frac{\partial p_2}{\partial t} = \left(-\frac{\partial}{\partial x} F_2(x) + \frac{\partial^2}{\partial x^2} D_2(x)\right) p_2(x,t), \qquad p_2(x,0) = p_{20}(x),$$
where the drift forces $F_1(x)$, $F_2(x)$, the diffusion coefficients $D_1(x)$, $D_2(x)$, and the initial probability distributions $p_{10}(x)$ and $p_{20}(x)$ are computed from data. So, ideally, I want some kind of norm that can be expressed through the parameters of the corresponding Langevin equations.
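Since the $F_i$, $D_i$, and $p_{i0}$ are computed from data, one concrete (if brute-force) route is to propagate both densities numerically and accumulate a per-slice distance; the resulting norm then depends on the Langevin parameters only through the solutions. A rough sketch, assuming a 1-D grid and explicit Euler time stepping (the function names `fp_step` and `time_integrated_w1`, and the callables `F1`, `F2`, `D1`, `D2`, are mine, standing in for whatever was estimated from the data):

```python
# Rough sketch: propagate both 1-D Fokker-Planck equations with an explicit
# finite-difference scheme and accumulate a time-integrated Wasserstein-1
# distance between the two solutions.
import numpy as np
from scipy.stats import wasserstein_distance

def fp_step(p, F, D, x, dt):
    """One explicit Euler step of dp/dt = -d/dx [ F p - d/dx (D p) ]."""
    flux = F * p - np.gradient(D * p, x)      # probability flux J(x, t)
    return p + dt * (-np.gradient(flux, x))   # continuity equation

def time_integrated_w1(p10, p20, F1, F2, D1, D2, x, dt, n_steps):
    """Riemann-sum approximation of the integral over t of W1(p1(., t), p2(., t))."""
    dx = x[1] - x[0]      # stability of explicit Euler needs roughly dt < dx**2 / (2 * max D)
    p1, p2 = p10.copy(), p20.copy()
    total = 0.0
    for _ in range(n_steps):
        p1 = fp_step(p1, F1(x), D1(x), x, dt)
        p2 = fp_step(p2, F2(x), D2(x), x, dt)
        # clip tiny negative values and renormalise against discretisation drift
        p1 = np.clip(p1, 0.0, None); p1 /= p1.sum() * dx
        p2 = np.clip(p2, 0.0, None); p2 /= p2.sum() * dx
        total += dt * wasserstein_distance(x, x, p1, p2)
    return total
```

Here `wasserstein_distance(x, x, p1, p2)` treats the grid points as support values weighted by the two densities; any other per-slice distance could be dropped in instead.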