Difference Equations

“…I believe that the theory that space is continuous is wrong… I rather suspect that the simple ideas of geometry, extended down into infinitely small space, are wrong.” Richard Feynman, The Character of Physical Law, pp. 166-7.

Suppose we have a pair of differential equations

dx/dt = f(x, y)
dy/dt = g(x, y)

Since dx/dt ≈ [x(t + Δt) - x(t)]/Δt, if the time step, Δt, is small then it is approximately true that

x(t + Δt) ≈ x(t) + Δt·f(x(t), y(t))
y(t + Δt) ≈ y(t) + Δt·g(x(t), y(t))

Think of Δt as the interval between data collection events. So we have a difference equation system:

x(t + Δt) = x(t) + Δt·f(x(t), y(t))
y(t + Δt) = y(t) + Δt·g(x(t), y(t))

This is roughly Euler’s (pronounced “oiler’s”) method.

Example from Lynch’s system 10.7:

Original system is

dx/dt = x - y - 2x^3 - 3xy^2
dy/dt = x + y - 2x^2·y - 3y^3        (1)

Suppose you don’t know the parameters (the 1’s, 2’s, and 3’s) in the functions above, but you do know the form of the equations, i.e., you know

dx/dt = A·x + B·y + C·x^3 + D·xy^2
dy/dt = E·x + F·y + G·x^2·y + H·y^3        (2)

and you want to statistically fit (2) to data that has been collected at intervals of Δt. Using the results above, we construct the difference equations

x(t + Δt) = x(t) + Δt·[A·x(t) + B·y(t) + C·x(t)^3 + D·x(t)·y(t)^2]
y(t + Δt) = y(t) + Δt·[E·x(t) + F·y(t) + G·x(t)^2·y(t) + H·y(t)^3]        (3)

Distributing and combining terms, this is the same as

x(t + Δt) = (1 + A·Δt)·x(t) + B·Δt·y(t) + C·Δt·x(t)^3 + D·Δt·x(t)·y(t)^2
y(t + Δt) = E·Δt·x(t) + (1 + F·Δt)·y(t) + G·Δt·x(t)^2·y(t) + H·Δt·y(t)^3        (4)

It is easy to “play” with a system like this in Excel. (“Predator Prey example from 4 point 7.xls” in Lynch.)

The fx line from the Excel image above gives the formula defining B3, the second x term, in terms of the previous x and y together with the time step in column H. With two clicks in the bottom right corner of B3, that formula can be applied all the way down the B column. With the same sort of formula, you can generate a column of y terms.
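Outside Excel, the same fill-down recurrence is just a short loop. Here is a minimal Python sketch (my own translation; it uses the 1, 2, 3 parameter values from system (1) with Δt = 0.01, and the initial condition and step count are illustrative choices, not Lynch’s):

```python
# Iterate the difference equations for system (1), mimicking the Excel
# fill-down: each new row is computed from the previous x and y.
dt = 0.01                      # time step (column H in the spreadsheet)
x, y = 0.5, 0.5                # illustrative initial condition
xs, ys = [x], [y]
for _ in range(2000):
    x_new = x + dt * (x - y - 2*x**3 - 3*x*y**2)
    y_new = y + dt * (x + y - 2*x**2*y - 3*y**3)
    x, y = x_new, y_new
    xs.append(x)
    ys.append(y)
# Plotting xs against ys shows the trajectory spiraling onto the limit cycle.
```

The lists xs and ys correspond to the B and C columns of the spreadsheet.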

It is important to make Δt as small (“short”) as possible. To illustrate, I solve the differential equations in (1) and get the figure below. Notice the limit cycle.

If I use the difference equation (3) with the exact same parameters and a time step of 0.3, I get the next figure. (The “dots” are the discrete values obtained with the difference equation.)

But if I use a time step of 0.01, I get:

So it looks like a small Δt gives a difference equation that is closer to the differential equation. (“difference equation comparison with differential.nb”)

The payoff to this is that you can now do a least squares regression of x(t) on x(t-1), y(t-1), x(t-1)^3, and x(t-1)·y(t-1)^2 from the first equation in (4) to get ordinary least squares estimates of the parameters A, B, C, and D. Similarly, we can regress y(t) on the corresponding terms from the second equation in (4) and get estimates of E, F, G, and H. (Δt, your sampling time interval, will be known going in.)

When you run these regressions, the coefficients given to you by the OLS package will be (1 + A·Δt), B·Δt, C·Δt, and D·Δt, so some minor algebra (using the known time step Δt) will be needed to get A, B, C, and D if you want to use those in the differential equation in (2). Many researchers just pose the problem directly as a difference equation problem, and don’t bother either deriving it from differential equations or going back from the difference equation to the differential equation.
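As a sketch of this procedure (in Python with NumPy rather than Minitab; the variable names, initial condition, and series length are my own choices), we can generate noise-free data from the difference equations with the true parameter values A = 1, B = -1, C = -2, D = -3 implied by system (1), then recover them by least squares and undo the Δt algebra:

```python
import numpy as np

dt = 0.01
A, B, C, D = 1.0, -1.0, -2.0, -3.0   # true x-equation parameters from (1)
E, F, G, H = 1.0, 1.0, -2.0, -3.0    # true y-equation parameters from (1)

# Simulate the difference-equation system (4) exactly (no noise).
n = 800
x = np.empty(n)
y = np.empty(n)
x[0], y[0] = 0.5, 0.5
for t in range(n - 1):
    x[t+1] = (1 + A*dt)*x[t] + B*dt*y[t] + C*dt*x[t]**3 + D*dt*x[t]*y[t]**2
    y[t+1] = E*dt*x[t] + (1 + F*dt)*y[t] + G*dt*x[t]**2*y[t] + H*dt*y[t]**3

# Regress x(t) on x(t-1), y(t-1), x(t-1)^3, x(t-1)*y(t-1)^2 (no constant).
X = np.column_stack([x[:-1], y[:-1], x[:-1]**3, x[:-1]*y[:-1]**2])
b, *_ = np.linalg.lstsq(X, x[1:], rcond=None)

# Undo the time-step algebra: b = (1 + A*dt, B*dt, C*dt, D*dt).
A_hat = (b[0] - 1) / dt
B_hat, C_hat, D_hat = b[1] / dt, b[2] / dt, b[3] / dt
print(A_hat, B_hat, C_hat, D_hat)   # each essentially equal to its true value
```

Because the data are generated exactly by the model being fit, the estimates come back exact up to rounding, which is the same point the noise-free Minitab check below makes.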

Here’s a check without randomness in Minitab. Notice that we get the parameters exactly and that the t-ratios are infinite, with R-sq = 100%.

x = -0.000000 + 1.01 xt-1 - 0.0100 yt-1 - 0.0200 x^3 - 0.0300 x y^2

800 cases used 1 cases contain missing values

Predictor         Coef        Stdev    t-ratio        p
Constant   -0.00000000   0.00000000          *        *
xt-1           1.01000      0.00000          *        *
yt-1        -0.0100000    0.0000000          *        *
x^3         -0.0200000    0.0000000          *        *
x y^2       -0.0300000    0.0000000          *        *

s = 0 R-sq = 100.0% R-sq(adj) = 100.0%

y =0.000000 + 0.0100 xt-1 + 1.01 yt-1 - 0.0200 y x^2 - 0.0300 y^3

800 cases used 1 cases contain missing values

Predictor         Coef        Stdev    t-ratio        p
Constant    0.00000000   0.00000000          *        *
xt-1         0.0100000    0.0000000          *        *
yt-1           1.01000      0.00000          *        *
y x^2       -0.0200000    0.0000000          *        *
y^3         -0.0300000    0.0000000          *        *

s = 0 R-sq = 100.0% R-sq(adj) = 100.0%

Now suppose “nature” uses the true x’s and y’s but there is noise in our measurements of the outputs x(t) and y(t). The outputs with errors are called x + e_x and y + e_y.

x+ex = - 0.00811 yt-1 + 1.00 xt-1 - 0.0197 x^3 - 0.0333 x y^2

800 cases used 1 cases contain missing values

Predictor        Coef       Stdev    t-ratio        p
Noconstant
yt-1        -0.008108    0.003378      -2.40    0.017
xt-1          1.00323     0.00667     150.33    0.000
x^3         -0.019666    0.006227      -3.16    0.002
x y^2        -0.03330     0.03521      -0.95    0.344

y+ey = 0.00772 xt-1 + 1.01 yt-1 - 0.0393 y x^2 - 0.0185 y^3

800 cases used 1 cases contain missing values

Predictor        Coef       Stdev    t-ratio        p
Noconstant
xt-1         0.007723    0.003912       1.97    0.049
yt-1          1.00516     0.02957      33.99    0.000
y x^2        -0.03931     0.05510      -0.71    0.476
y^3          -0.01854     0.08151      -0.23    0.820

Figure: actual (+) and fit (o) from the discrete equation and Minitab.

The noisier the data is, the harder it is to make decent estimates of the parameters (naturally). So, like everything else in stat, this isn’t foolproof.
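The noisy-measurement experiment can be reproduced in a quick simulation (again a Python sketch of my own; the measurement-noise standard deviation of 0.001 is an assumed value, since the level used in the Minitab runs above isn’t stated):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
n = 800

# "Nature" iterates the true difference equations for system (1)...
x = np.empty(n)
y = np.empty(n)
x[0], y[0] = 0.5, 0.5
for t in range(n - 1):
    x[t+1] = x[t] + dt * (x[t] - y[t] - 2*x[t]**3 - 3*x[t]*y[t]**2)
    y[t+1] = y[t] + dt * (x[t] + y[t] - 2*x[t]**2*y[t] - 3*y[t]**3)

# ...but we only observe x + e_x and y + e_y (assumed noise sd = 0.001).
xo = x + rng.normal(0, 0.001, n)
yo = y + rng.normal(0, 0.001, n)

# Refit the no-constant regression with the noisy observations.
X = np.column_stack([xo[:-1], yo[:-1], xo[:-1]**3, xo[:-1]*yo[:-1]**2])
b, *_ = np.linalg.lstsq(X, xo[1:], rcond=None)
print(b)   # near (1.01, -0.01, -0.02, -0.03), but no longer exact
```

With noise in both the response and the regressors, the estimates scatter around the true values, and cranking the noise up further degrades them, just as the Minitab t-ratios above suggest.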

Here’s a picture from Mathematica. The dots are the synthetic data, and the sliders let us change the parameters while watching the fixed data in the background. Here I’ve simplified equation system (2) to be

See Lynch’s discussion of the Hénon map for another two-variable difference equation example.

(“difference equations.doc”; same name in Minitab)