Numerical minimization of the action in python

Computational Science Asked by murt_ on July 21, 2021

I want to find the trajectory $x(t)$ which minimizes the action $S = \int_{t_i}^{t_f} L(x(t), \dot{x}(t)) \, \mathrm{d}t$ numerically.

I am trying to do it by discretizing the action, so that it becomes a function taking a vector rather than a functional. More precisely, given an initial guess for the trajectory $\{x'_i\}$, I am trying to find the trajectory $\{x_i\}$ that minimizes the sum $\sum_{i=0}^{N-1} \left[ \frac{m}{2 \Delta t} (x_{i+1} - x_i)^2 - \frac{k}{2} x_i^2 \, \Delta t \right]$, which is the discretized version of the action for the SHO. I am then using scipy's downhill simplex (Nelder-Mead) minimize function to do it; however, it is not working correctly.

I am following a reference which says that the initial guess for the trajectory can be the initial position $x_0$, with the rest of the positions set to the initial position plus a variation on the order of the length scale of the problem, $x_0 + \delta_j$, $j = 1, \dots, N-1$ (pg. 39 here). However, it still doesn't work, even when I make the variations smaller. It seems that to make it work I need to feed it a decent approximation of the actual trajectory, but that kind of defeats the purpose of the exercise. Here is the code:

import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt

%matplotlib inline

t_i = 0
t_f = 7
N = 100
delta_t = (t_f - t_i)/N
m = 1
k = 1

def action(x, N, m, delta_t, k):
    """
    This function will take in an array of positions x_i w/ x_i = x(t_i). Then it computes the action for this trajectory.
    We want to, given an initial position and a guess for the rest of the components, find the set of x_i which minimizes
    the action, and return it. Scipy 'optimize' does it. See the first example in the documentation for a similar problem.
    """
    tot = np.array([])
    for i in range (0,N-1):
            np.append(tot,((m/(2*delta_t))*((x[i+1]-x[i])**2)-((k*delta_t)/2)*((x[i])**2)))
    return np.sum(tot)

x_0 = np.ones(100)
var = np.random.rand(100)
x0 = x_0 + var

res = minimize(action, x0, args=(N, m, delta_t, k), method='Nelder-Mead',options={'disp': True})

time = np.linspace(0,7,100)

fig = plt.figure()
axes = fig.add_axes([0,0,1.25,1])
axes.plot(time,res.x)

2 Answers

I will point out that in order to solve any functional minimization problem, you need to specify boundary conditions. In this case the minimization corresponds to a second-order system, so you must specify the initial and final values of the motion, or the initial value and the first derivative (or any other pair of conditions you like, really).

Without this, it's not guaranteed that you can find an extremum, which brings me to the next point: in general, the equations of motion associated with the variation of an action only determine an extremum, not necessarily a minimum. So this point in configuration space may be a minimum, a maximum, or even a saddle.

To add to the first point, let me point out this PSE post, which computes the value of the action for the SHO at an extremum (they fix the initial and final positions). Note that for certain values of the end time $t$, the action can become arbitrarily negative as a function of the start/end positions. Hence, for some values of $t$, if you leave the start and end positions unfixed (not fixing your boundary values), the minimizer will give you weird answers, because the value of the action genuinely goes to negative infinity as the start/end positions go to infinity.
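For reference, the classical action of the SHO with both endpoints held fixed has a standard closed form (see e.g. Feynman and Hibbs; the symbols $\omega = \sqrt{k/m}$, $T = t_f - t_i$, $x_a = x(t_i)$, $x_b = x(t_f)$ are my notation, not the linked post's):

$$S_{\mathrm{cl}} = \frac{m \omega}{2 \sin \omega T} \left[ (x_a^2 + x_b^2) \cos \omega T - 2 x_a x_b \right]$$

Setting $x_a = x_b = X$ gives $S_{\mathrm{cl}} = \frac{m \omega X^2}{\sin \omega T} (\cos \omega T - 1)$, which for $\sin \omega T > 0$ is nonpositive and unbounded below as $X \to \infty$; this is exactly the runaway a minimizer finds when the endpoints are left free.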

All of this is really just to point out the necessity of boundary conditions in forming a well-posed variational problem.
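As a concrete illustration, here is a minimal sketch of my own (not the asker's code; the endpoint values x_a, x_b, the tolerances, and the iteration cap are assumptions chosen for demonstration) that pins both endpoints and lets the optimizer vary only the interior points:

import numpy as np
from scipy.optimize import minimize

t_i, t_f, N = 0.0, 7.0, 100
delta_t = (t_f - t_i) / N
m, k = 1.0, 1.0
x_a, x_b = 1.0, 0.0   # assumed boundary values, pick whatever suits you

def action_interior(x_int):
    # Rebuild the full trajectory with the fixed endpoints attached.
    x = np.concatenate(([x_a], x_int, [x_b]))
    kinetic = (m / (2.0 * delta_t)) * np.diff(x)**2
    potential = (k * delta_t / 2.0) * x[:-1]**2
    return np.sum(kinetic - potential)

# Straight line between the endpoints as an initial guess.
x0_int = np.linspace(x_a, x_b, N + 1)[1:-1]
res = minimize(action_interior, x0_int, method='Nelder-Mead',
               options={'maxiter': 200000, 'fatol': 1e-10})
x_full = np.concatenate(([x_a], res.x, [x_b]))

Note that Nelder-Mead is very slow in this many dimensions; once the boundary conditions are in place, a gradient-based method (e.g. method='BFGS') will typically converge much faster.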

Something which might also be of interest, if you have access to a more recent copy of Jackson's E&M book: he put some effort into including basic numerical approaches. I believe one of these is essentially the method for doing precisely what you have proposed, but for a static electric field. In field theory, I think, the need for boundary conditions becomes even more apparent in numerical applications, whereas it can sometimes be easy to forget them in ODEs.

Answered by Richard Myers on July 21, 2021

I won't complain about the formalism - I don't know if it will work.

At the moment, however, it definitely does not work, because your action is always 0: np.append does not append the element in place, it returns a new array with the element appended. So your tot array stays empty.

Change the line to:

tot = np.append(tot, .... )

or use an array-like formalism (much faster):

def action(x, N, m, delta_t, k):
    # Vectorized: np.diff gives the kinetic differences, x[:-1] the potential term.
    f = (m / (2. * delta_t)) * np.diff(x)**2 - (k * delta_t / 2.) * np.array(x)[:-1]**2
    return f.sum()

It will also not converge fast - so you will have to set

options={"disp":True, "maxiter": N_iter}

in your call to minimize, with N_iter big enough for it to converge.
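For concreteness (my own assembly of the pieces above; 50000 is the iteration count used for the plot below, not a recommendation), the call might look like:

res = minimize(action, x0, args=(N, m, delta_t, k), method='Nelder-Mead',
               options={'disp': True, 'maxiter': 50000})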

It also seems that you are choosing random displacements always in the same direction. I would use either

var = np.random.randn(100)

[normal displacements from $x_0$] or

var = 0.5 - np.random.rand(100)

to have displacements in (-0.5, 0.5).

Also, maybe the noise is pretty high? Having $1 \pm 1$ is a lot of noise. Maybe do $1 \pm 0.2$ or similar for starters?
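Putting those two suggestions together (my own combination, not code from the answer), the initial guess could be built as:

x_0 = np.ones(100)
var = 0.2 * (1 - 2 * np.random.rand(100))   # uniform noise in (-0.2, 0.2)
x0 = x_0 + var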

Hope this helped.

Here is my result (input in blue, output in orange) with a noise of var = 0.1*(0.5-np.random.rand(100)), more steps (N=1000) and N_iter=50000 (it still did not converge, but it's a first step for you to understand your program in depth!)

[Plot: initial trajectory (blue) and minimizer output (orange)]

Answered by JalfredP on July 21, 2021
