
Integration of Ordinary Differential Equations

Press - Chapter 15 first edition, Chapter 16 second edition
Note the section headings -- Runge-Kutta, Adaptive Stepsize Control, Modified Midpoint Method, Richardson Extrapolation and ..., Second-Order Conservative Equations, Stiff Sets of Equations.
Euler’s method is so simple that you need only remember the name.
Runge-Kutta is a method of using a succession of points to estimate derivatives. It enables one to get convergence of order h^5 or h^6 without actually taking any derivatives. It works for mathematical functions that are easy to calculate, and it should be used for calculating very accurate estimates of the kinds of functions that are tabulated to 10 digits. Adaptive stepsize control increases the efficiency of an already efficient method, but changing the step size introduces its own errors and problems.
The modified midpoint method is the closest Press comes to my favorite scheme (sketched at the end of this lecture), which is midpoint with predictor-corrector. Richardson extrapolation can be used to increase the nominal convergence rate from h^2 to h^4 (..\integration\Richardson.htm). My recommendation is to wait until the end rather than trying to do it at intermediate steps as in Press. Second-order conservative equations are our stock in trade, and the Numerov method is very frequently used to reduce the number of operations when the first derivative is not wanted. I usually prefer the extra information given by also calculating the first derivative separately.

If you are attempting to solve an equation numerically, it is probably stiff. Press’s section on stiff differential equations is must reading. The predictor-corrector part of the midpoint-trapezoid scheme is the beginning of a solution to this problem. The Schroedinger equation in one dimension is a stiff differential equation: if you start at r=0 in the hydrogen-atom ground-state equation and integrate outward with 16-digit precision, it blows up by r = 16 a_B. This will be discussed in much more detail.
Following Press, sort of (Integration of Ordinary Differential Equations)

The standard equations of physics are second order differential equations of the form

Eqn 1:  d^2y/dx^2 = f(x, y(x), dy/dx)

There are three functions of x here: y(x), its derivative dy/dx, and f itself.

These can be rewritten as

Eqn 2:  dy/dx = z(x),   dz/dx = f(x, y(x), z(x))

The function f (x,y(x),z(x)) is explicitly given in terms of y(x) and z(x). The function z(x) is the integral

z(x) = z(x0) + ∫ from x0 to x of f(x', y(x'), z(x')) dx'

The function y(x) is

y(x) = y(x0) + ∫ from x0 to x of z(x') dx'

This reduces a second-order differential equation to two simultaneous first-order differential equations that are to be solved together. Note that if there were no y or z in f, the solution for z(x) would be a simple integral of f(x). The reason that this is a differential equation rather than simple quadrature is the fact that it is necessary to find y and z before f can be evaluated.
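As a minimal sketch of this reduction (in Python rather than the Fortran used below; the oscillator test problem y'' = -y is my illustration, not one from Press):

```python
import math

# Reduce y'' = f(x, y, y') to the coupled pair dy/dx = z, dz/dx = f(x, y, z),
# stepped with Euler's method.  With f = -y, y(0) = 0, z(0) = 1 the exact
# solution is y = sin x, so the numerical answer at x = 1 can be checked.

def f(x, y, z):
    return -y                     # right-hand side of y'' = f(x, y, y')

def euler_system(x0, y0, z0, h, nsteps):
    x, y, z = x0, y0, z0
    for _ in range(nsteps):
        # both updates use the values from the start of the interval
        y, z = y + h * z, z + h * f(x, y, z)
        x += h
    return y, z

y, z = euler_system(0.0, 0.0, 1.0, 1.0e-4, 10000)   # integrate to x = 1
err = abs(y - math.sin(1.0))
errz = abs(z - math.cos(1.0))
```

The tuple assignment evaluates both right-hand sides before either variable is updated, which is exactly the "use only the beginning of the interval" choice made in the code below.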

Deviating from Press: Euler’s method

The simplest method for solving these equations is Euler’s method.

x=x0

y=y0

z=z0

h=.001 (think small)

5 y=y+h*z error of order h^2

z=z+h*f(x,y0,z) error of order h^2

y0=y

write(1,*)x,y,z

x=x+h

if(x.lt.x1)goto 5

stop

At each step an error of order h^2 is made, so that after 1/h steps the total error is of order h. One can plot the answer as a function of h and extrapolate to h = 0

Eqn 3:  y(h) ≈ y(0) + a h

Solve for y(0).

Eqn 4:  y(0) = (h1 y(h2) - h2 y(h1)) / (h1 - h2)

For the case shown, h1 = 2 h2, so that

Eqn 5:  y(0) = 2 y(h2) - y(h1) = y(h2) + (y(h2) - y(h1))

This is to say that the correct answer moves away from the second answer by the difference between the two answers, which is shown as e in the figure above.
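Eqn 5 is easy to check numerically. A sketch (Python; the stand-in problem y' = y, y(0) = 1, integrated to x = 1 where the exact answer is e, is my choice):

```python
import math

# Richardson extrapolation of two Euler runs per Eqn 5: with h1 = 2*h2 the
# leading O(h) error cancels in  y(0) = 2*y(h2) - y(h1).

def euler(h, nsteps):
    y = 1.0                     # y' = y, y(0) = 1
    for _ in range(nsteps):
        y += h * y
    return y

y1 = euler(0.02, 50)            # coarse run, h1 = 0.02
y2 = euler(0.01, 100)           # fine run,   h2 = 0.01
y_extrap = 2.0 * y2 - y1        # move on from y2 by e = y2 - y1
err_fine = abs(y2 - math.e)
err_extrap = abs(y_extrap - math.e)
```

The extrapolated answer is more than an order of magnitude closer to e than the finer of the two raw runs.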
The values at the beginning of the interval are the only ones used to estimate the entire interval. In terms of integrals, we have said

Eqn 6:  y(x0+h) = y(x0) + ∫ from x0 to x0+h of z(x') dx' ≈ y0 + h z0

Eqn 7:  z(x0+h) = z(x0) + ∫ from x0 to x0+h of f(x', y(x'), z(x')) dx' ≈ z0 + h f(x0, y0, z0)

Note that I have made no attempt to use the final y in finding z in the above code. The method above, which is the simplest possible, is the recommended method for a first pass. If it extrapolates to an answer that you want to publish, you will need to remember that it is called Euler’s method and be sure to mention that you extrapolated to arrive at the final answer; otherwise the referee will understand how crudely you solved the problem and object.
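The order-h claim above is also easy to verify: halving h should halve the final error. A sketch (Python, again with my stand-in problem y' = y):

```python
import math

# The global error of Euler's method is O(h): an error of order h^2 per step,
# accumulated over 1/h steps.  Halving the step should therefore roughly
# halve the error at x = 1 for y' = y, y(0) = 1 (exact answer e).

def euler(h, nsteps):
    y = 1.0
    for _ in range(nsteps):
        y += h * y
    return y

err_h = abs(euler(0.01, 100) - math.e)      # step h
err_h2 = abs(euler(0.005, 200) - math.e)    # step h/2
ratio = err_h / err_h2                      # expect roughly 2
```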

A bit more analytic accuracy

The integrand f can be expanded to one higher order as

Eqn 8:  f(x', y, z) ≈ f0 + (x'-x0) ∂f/∂x + (y-y0) ∂f/∂y + (z-z0) ∂f/∂z

and likewise the integrand z(x’) can be expanded
Eqn 9:  z(x') ≈ z0 + (x'-x0) f0

this in turn can be integrated to yield

Eqn 10:  y(x0+h) = y0 + h z0 + (h^2/2) f0
Then

Eqn 11:  z(x0+h) = z0 + h f0 + (h^2/2) [∂f/∂x + z0 ∂f/∂y + f0 ∂f/∂z]

This actually works very well, if someone is willing to take the partials of the function f with respect to its variables (..\WaveFunction\1-d\DiffEq\SeqnIn1d.docx). Note that in general evaluating the partials on the computer takes no more time than evaluating f itself, making the method above two to three times faster than the Runge-Kutta algorithm that we are about to derive.
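The derivative-expansion step can be sketched for the simpler scalar case y' = f(x,y), where it reads y_{n+1} = y_n + h f + (h^2/2)(∂f/∂x + f ∂f/∂y). The test function f = y is my choice; halving h should cut the final error by four, confirming second-order convergence:

```python
import math

# Second-order Taylor step for y' = f(x, y):
#   y_{n+1} = y_n + h*f + (h^2/2)*(df/dx + f*df/dy)
# With f = y (so df/dx = 0, df/dy = 1) each step multiplies y by
# 1 + h + h^2/2, and the exact answer at x = 1 is e.

def taylor2(h, nsteps):
    y = 1.0
    for _ in range(nsteps):
        fv, fx, fy = y, 0.0, 1.0          # f and its partials for f = y
        y += h * fv + 0.5 * h * h * (fx + fv * fy)
    return y

err_h = abs(taylor2(0.01, 100) - math.e)
err_h2 = abs(taylor2(0.005, 200) - math.e)
ratio = err_h / err_h2                    # expect roughly 4: error is O(h^2)
```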

Runge-Kutta

The notion is to evaluate f at different values of x, y, and z in the interval between x0 and x, and to choose the weights of these evaluations so that the expansion of the weighted average matches the analytic expansion given above. This allows us to get the benefit of the derivative expansion without actually taking derivatives.

Eqn 12:  z1 = z0 + a f(x0, y0, z0) + b f(x0 + p h, y0 + q h z0, z0 + s h f0)

which will equal the analytic expansion through order h^2 for a=b=h/2; p=q=s=1.

It will also work for a=0, b=h, p=q=s=1/2.

Eqn 13:  z1 = z0 + (a+b) f0 + b h [p ∂f/∂x + q z0 ∂f/∂y + s f0 ∂f/∂z] + O(h^3)

In the case a=b=h/2, p=q=s=1, the two evaluations are made at x0 and x0+h, each with weight h/2.

In the case a=0, b=h, p=q=s=1/2, the single evaluation that counts is made at x0+h/2 with weight h. The first case amounts to end-point sampling while the second is midpoint sampling.

By using more points and more algebra, the method can be extended to higher and higher powers of h. The method coded in Press evaluates f at the end points and a group of midpoints to get an error of order h^5. Unfortunately, he works through only a single equation.
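For reference, the classical fourth-order formula of that family can be sketched for our y,z pair; the oscillator test problem and the step size are my choices, not Press's:

```python
import math

# Classical fourth-order Runge-Kutta for the pair y' = z, z' = f(x, y, z),
# demonstrated on the oscillator f = -y with y(0) = 0, z(0) = 1, whose exact
# solution is y = sin x.  Even with the coarse step h = 0.1 the answer at
# x = 1 is accurate to a few parts in 10^7.

def f(x, y, z):
    return -y

def rk4_step(x, y, z, h):
    k1y, k1z = z,             f(x, y, z)
    k2y, k2z = z + 0.5*h*k1z, f(x + 0.5*h, y + 0.5*h*k1y, z + 0.5*h*k1z)
    k3y, k3z = z + 0.5*h*k2z, f(x + 0.5*h, y + 0.5*h*k2y, z + 0.5*h*k2z)
    k4y, k4z = z + h*k3z,     f(x + h, y + h*k3y, z + h*k3z)
    y += (h/6.0) * (k1y + 2*k2y + 2*k3y + k4y)
    z += (h/6.0) * (k1z + 2*k2z + 2*k3z + k4z)
    return y, z

x, y, z = 0.0, 0.0, 1.0
h = 0.1
for _ in range(10):                     # integrate to x = 1
    y, z = rk4_step(x, y, z, h)
    x += h
err = abs(y - math.sin(1.0))
```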

Stiff differential equations

Consider the rather simple equation

Eqn 14:  dy/dx = -c y

with c>0. The analytic solution to this is

Eqn 15:  y(x) = y(0) exp(-c x)

as can easily be seen by taking the derivative. Euler’s method yields

Eqn 16:  y_{n+1} = y_n + h (-c y_n) = (1 - ch) y_n

Here we see the origin of an oscillation. If part of the problem is such that (1 - ch) is less than -1, the continual multiplying by this factor will produce a growing, oscillating solution of the difference equations. Now, to show how very slight changes can have very large results, use the advanced value of y for the derivative, rather than the current value, so that

Eqn 17:  y_{n+1} = y_n + h (-c y_{n+1})

this becomes

Eqn 18:  y_{n+1} (1 + ch) = y_n

or

Eqn 19:  y_{n+1} = y_n / (1 + ch)

which for c>0 is stable for all step sizes. Note that c<0 corresponds to the growing exponential y = exp(|c| x), which not only is not stable, it is not supposed to be. The problem is that we now have an implicit equation to solve, since we are using the new value of y in f. What we have just written, in more general terms, is

y_{n+1} = y_n + h f(x_{n+1}, y_{n+1})

This can be solved by iteration. This is the implicit method for solving differential equations, which in its partial-differential-equation form is one of the biggest consumers of mainframe and supercomputer time today.
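The difference between the two multipliers is dramatic in practice. A sketch comparing them for ch = 5, far past the explicit stability limit ch = 2 (the numbers are my choices):

```python
# Explicit vs implicit Euler on y' = -c*y when c*h = 5.  The explicit factor
# (1 - ch) = -4 makes every step grow and change sign; the implicit factor
# 1/(1 + ch) = 1/6 makes every step shrink smoothly toward zero, which is
# what the true solution exp(-c*x) does.

c, h, nsteps = 100.0, 0.05, 50
y_explicit = y_implicit = 1.0
for _ in range(nsteps):
    y_explicit = (1.0 - c * h) * y_explicit     # multiply by -4 each step
    y_implicit = y_implicit / (1.0 + c * h)     # multiply by 1/6 each step
```

After 50 steps the explicit answer has blown up past 10^30 while the implicit answer has decayed below 10^-30.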

In terms of integrals we see that

Eqn 20:  y(x0+h) = y(x0) + ∫ from x0 to x0+h of f(x', y(x')) dx'

and that both the Euler method and the implicit method make an error of order h^2 in approximating the integral. A much better approximation is

Eqn 21:  y(x0+h) ≈ y(x0) + (h/2) [f(x0, y(x0)) + f(x0+h, y(x0+h))]

In terms of the simple function above with f=-cy, this becomes

Eqn 22:  y_{n+1} = y_n - (ch/2) (y_n + y_{n+1})

or

Eqn 23:  y_{n+1} = y_n (1 - ch/2) / (1 + ch/2) = y_n (2 - ch) / (2 + ch)

While this amplification factor asymptotes to -1 for very large values of ch, its magnitude never exceeds 1, so the method never actually goes unstable. This amounts to simply averaging the explicit and implicit methods. In partial differential equations this is known as the Crank-Nicolson scheme.
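The amplification factor of Eqn 23 is easy to tabulate; a sketch (the sample values of ch are my choices):

```python
# Amplification factor of the trapezoidal (Crank-Nicolson) step applied to
# y' = -c*y, from Eqn 23:  g(ch) = (1 - ch/2)/(1 + ch/2) = (2 - ch)/(2 + ch).
# |g| < 1 for every ch > 0, and g -> -1 as ch -> infinity: never unstable,
# but very large steps decay slowly while changing sign on each step.

def g(ch):
    return (2.0 - ch) / (2.0 + ch)

samples = [0.1, 1.0, 10.0, 1000.0]
factors = [g(ch) for ch in samples]
all_stable = all(abs(v) < 1.0 for v in factors)
```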

When implemented as a difference equation, one normally lets Euler’s method give the first guess, so that

yp2=y0+h*f(x0,y0)

then the iteration loop is

nloop=0

5

nloop=nloop+1

if(nloop.gt.10)then

print*,’ trouble with x=’,x0

stop

endif

yp1=yp2

yp2=y0+h*(f(x0,y0)+f(x0+h,yp1))/2 the corrector re-evaluation; without this line the loop would exit at once

if(abs(yp2-yp1).gt.1d-12)goto 5

Note that the error in yp1 is of order h^2, so that it only contributes at the current h^3 error level to yp2.
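Putting the pieces together, here is a sketch of one full implicit (trapezoidal) step solved by the iteration above, written for a first-order equation y' = f(x,y). The stiff coefficient, the tolerance, and the higher iteration cap (the text allows 10; a tight tolerance needs more here) are my choices:

```python
# One trapezoidal step for y' = f(x, y), solved by fixed-point iteration:
# Euler supplies the first guess, then y_new is re-evaluated from the
# corrector until it stops changing.  With f = -50*y, y0 = 1, h = 0.01
# (so ch = 0.5), the exact fixed point is (1 - ch/2)/(1 + ch/2) = 0.6.

def f(x, y):
    return -50.0 * y                      # a moderately stiff right-hand side

def trap_step(x0, y0, h):
    y_new = y0 + h * f(x0, y0)            # Euler predictor: the first guess
    for nloop in range(100):
        y_old = y_new
        # corrector: average of the derivatives at the two ends of the step
        y_new = y0 + 0.5 * h * (f(x0, y0) + f(x0 + h, y_old))
        if abs(y_new - y_old) < 1e-12:
            return y_new
    raise RuntimeError('trouble with x=%g' % x0)

y1 = trap_step(0.0, 1.0, 0.01)
err = abs(y1 - 0.6)                       # compare with the exact fixed point
```

The iteration contracts by a factor of ch/2 = 0.25 per pass, so it converges whenever ch < 2; for stiffer problems than that, one must solve the implicit equation by better means than simple iteration.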