ARE/ECN 240C Time Series Analysis / Professor Òscar Jordà
Fall 2002 / Economics, U.C. Davis

Problem Set 2 – Solutions

Instructions

Part I – Analytical Questions

Problem 1: Consider the AR(2) process

yt = 1.1yt-1 – 0.18yt-2 + εt,  εt ~ WN(0, σ²).

(a)  Show that the AR(2) is stable/stationary and calculate its autocovariance and autocorrelation functions. Also, calculate the unconditional mean of the process. Indicate what the ACF and the PACF of this series look like. Do not calculate this by hand; rather, use your favorite econometric software to find the answer for up to 12 lags.

(b)  Determine the MA(∞) representation of this AR(2). This will determine the sequence of dynamic multipliers required for the impulse response function. Display these coefficients graphically (up to 12 lags) using your favorite econometric software.

(c)  Determine the forecasts and forecast error variances for the first 12 period-ahead forecasts. Use your favorite econometric software package to answer this question.

Solution

(a)  Stability: Check the roots of 1 – 1.1z + 0.18z² = (1 – 0.9z)(1 – 0.2z) = 0. They are z = 1/0.9 ≈ 1.111 and z = 5. Both are larger than 1 in absolute value; therefore the AR(2) is stable. This is confirmed by the plots of the ACF and PACF below.
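Since the problem asks for software output, here is a minimal sketch in Python (one choice of "favorite econometric software"; the statsmodels package is assumed to be available):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# AR(2): y_t = 1.1 y_{t-1} - 0.18 y_{t-2} + e_t, encoded through its lag
# polynomial 1 - 1.1 L + 0.18 L^2 (statsmodels' sign convention for `ar`).
ar = np.array([1.0, -1.1, 0.18])
ma = np.array([1.0])
process = ArmaProcess(ar, ma)

print(process.arroots)        # roots of 1 - 1.1 z + 0.18 z^2: approx. 1.111 and 5
print(process.isstationary)   # True: both roots lie outside the unit circle
print(process.acf(lags=13))   # theoretical ACF at lags 0, 1, ..., 12
print(process.pacf(lags=13))  # theoretical PACF at lags 0, 1, ..., 12
```

The ACF decays geometrically while the PACF cuts off after lag 2, as expected for an AR(2).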

(b)  Inverting the autoregressive polynomial delivers the MA(∞) coefficients {ψj}, which satisfy the recursion ψ0 = 1, ψ1 = 1.1, and ψj = 1.1ψj-1 – 0.18ψj-2 for j ≥ 2. These are the dynamic multipliers of the impulse response function.
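A sketch of the software computation of the ψj (again assuming Python with statsmodels; a bar plot of psi against j displays the impulse responses):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

process = ArmaProcess(np.array([1.0, -1.1, 0.18]), np.array([1.0]))
psi = process.arma2ma(lags=13)          # psi_0, psi_1, ..., psi_12

# Equivalent recursion: psi_0 = 1, psi_1 = 1.1,
# psi_j = 1.1 psi_{j-1} - 0.18 psi_{j-2} for j >= 2.
check = np.zeros(13)
check[0], check[1] = 1.0, 1.1
for j in range(2, 13):
    check[j] = 1.1 * check[j - 1] - 0.18 * check[j - 2]
assert np.allclose(psi, check)
```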

(c)  Given yt and yt-1, the h-period-ahead forecast is easily calculated by generating observations yt+h recursively. The forecast error variance can then be calculated from the MA(∞) representation, since the appropriate sum of squared MA coefficients times the variance of the residuals gives the right FEV: FEV(h) = σ²(ψ0² + ψ1² + … + ψh-1²).
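For concreteness, a sketch of the forecast recursion and the FEV formula (the terminal observations yt, yt-1 and σ² = 1 below are hypothetical placeholders):

```python
import numpy as np

phi1, phi2, sigma2 = 1.1, -0.18, 1.0   # sigma2 = 1 is a placeholder
y_t, y_tm1 = 1.0, 0.5                  # hypothetical terminal observations

# Point forecasts: iterate y_{t+h} = phi1 * y_{t+h-1} + phi2 * y_{t+h-2}.
f = [y_tm1, y_t]
for h in range(12):
    f.append(phi1 * f[-1] + phi2 * f[-2])
forecasts = f[2:]                      # y_{t+1|t}, ..., y_{t+12|t}

# Forecast error variances from the MA(infinity) weights:
# FEV(h) = sigma^2 * (psi_0^2 + ... + psi_{h-1}^2).
psi = [1.0, phi1]
for j in range(2, 12):
    psi.append(phi1 * psi[-1] + phi2 * psi[-2])
fev = sigma2 * np.cumsum(np.array(psi) ** 2)   # FEV(1), ..., FEV(12)
```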

Problem 2: If

with

with e and u independent of each other. Then:

(a)  What is the process for yt?

(b)  Give conditions to ensure yt is covariance stationary and invertible.

(c)  Find the long-horizon forecast for yt and its variance.

Solution

(a) Notice:

which is an ARMA(1,2).

(b) Stationarity only depends on b so the usual condition applies here: |b| < 1. For invertibility, we need to ensure that the roots are outside the unit circle. Of course, the roots for this problem are trivial since they are . Since stationarity requires that |b| < 1 already, we only need .

(c) The long-horizon forecast for a stationary model is the unconditional mean, which for this problem is just μ. The long-run forecast error variance is also easy to calculate, since it is the unconditional variance of y. This variance can be calculated as follows. Notice that:


Noting that the e's are serially independent and that e and u are independent of each other, then

Problem 3: Suppose

(a)  Derive the conditional log-likelihood, taking y0 = 0.

Solution: Omitting constants,

(b)  Derive the score. Assume σ² is known. Solution:

Noting that , the expression of the score simplifies to

(c)  Derive the estimator of the information matrix. Assume σ² is known.

Solution: After some tedious algebra,

H=

Note that since σ² is assumed known, we have not needed to derive the score with respect to σ² nor the cross products of the information matrix, which in this case boils down to a single second derivative.

(d)  Derive the LM test for H0: b = 0. Assume σ² is known.

Solution: Under the null, st = 1 and b = 0. Therefore, the information matrix estimate of –E(H) is

Note that the score evaluated at the null is

so that the statistic is:

(e)  Would it matter if σ² were unknown?

Solution:

If σ² were unknown, then we would have a vector of scores and a 2×2 Hessian matrix. Although σ² may appear to be unidentified, this is not the case, since the expression for st does not contain a constant term.


Problem 4: Suppose

with

Describe an easy approach to testing the hypothesis H0: a = 0 and show how you would conduct the test.

Solution

There are many ways of answering this question, obviously. I have chosen the option that would be most easily implemented in any computer software that runs linear regression.

Note:

Therefore, the test can be performed as follows:

Step 1: Estimate the regression and save the residuals for the next step.

Step 2: Use the residuals as the dependent variable in the regression . The TR² of this regression will be asymptotically distributed as a χ² with degrees of freedom equal to the number of restrictions under the null.
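The model itself did not survive in this copy, so the sketch below simply assumes, for illustration, a linear regression yt = β0 + β1xt + et with AR(1) errors et = a·et-1 + ut, for which the two steps above form a Breusch–Godfrey-type LM test of H0: a = 0:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + rng.normal(size=T)   # simulated data satisfying the null

# Step 1: estimate the regression and save the residuals.
ehat = sm.OLS(y, sm.add_constant(x)).fit().resid

# Step 2: regress the residuals on the regressors and the lagged residual.
Z = sm.add_constant(np.column_stack([x[1:], ehat[:-1]]))
aux = sm.OLS(ehat[1:], Z).fit()
TR2 = aux.nobs * aux.rsquared            # asymptotically chi-squared(1) under H0
print(TR2)
```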

Problem 5: Consider the following stationary data generation process for a random variable yt

yt = byt-1 + εt,  εt ~ N(0,1) i.i.d.

with |b| < 1, and y0 ~ N(0, 1/(1 – b²)).

(a)  Obtain the population mean, variance, autocovariances and autocorrelations.

E(yt) = 0, V(yt) = γ0 = 1/(1 – b²), γj = b^j/(1 – b²), and ρj = b^j, since {yt} is stationary. Further, notice that E(εt⁴) = 3. Noting only the nonzero terms:

Finally,

(b)  Derive

(i)

(ii)

(iii)

(iv) Note: . Derive

(c)  Derive the limiting distribution of the sample mean. Hint: {yt} is not i.i.d. but it is ergodic for the mean.

Since {yt} is stationary, the only difficulty in deriving the distribution of the sample mean is that {yt} is not an i.i.d. process, although it is ergodic. A general strong law of large numbers and a central limit theorem can be applied to such processes, so that the sample mean converges a.s. to the population mean, which is zero, and is asymptotically normally distributed around zero (e.g., using the central limit theorem for martingale difference sequences we discussed in class). However, the problem can also be solved from first principles as follows:

As . However

Next, the scaled sample mean is a linear function of the independent, normally distributed components {εt}, with a finite, non-zero variance in large samples, so that:

(d)  Obtain the limiting distribution of the least squares estimator of b. Hint: calculate and then derive its distribution.

Using the above results and Slutsky’s theorem:

Next, from Mann-Wald

Using Cramer’s theorem, we thus have
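A small Monte Carlo check of the result in (d), namely that √T(b̂ – b) is approximately N(0, 1 – b²) in large samples (b = 0.6 and the sample sizes are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(1)
b, T, reps = 0.6, 1000, 2000
stats = np.empty(reps)
for r in range(reps):
    e = rng.normal(size=T)
    y = np.empty(T)
    y[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - b**2))   # stationary start
    for t in range(1, T):
        y[t] = b * y[t - 1] + e[t]
    bhat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])          # OLS estimator of b
    stats[r] = np.sqrt(T) * (bhat - b)

print(stats.var(), 1.0 - b**2)   # the sample variance should be close to 1 - b^2
```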

Problem 6: Consider the AR(1) model of the previous exercise, but suppose instead that εt ~ N(0, σ²), with σ² unknown. The conditional density for observation t is therefore

Let θ̂ denote the unrestricted MLE of θ = (b, σ²)′ and θ̃ the restricted MLE subject to the constraint Rb = c, where R and c are known constants. Also, let

(a)  Verify that b̂ minimizes the sum of squared residuals, so that it is the same as the OLS estimator.

This problem is a lot simpler if we proceed with the conditional likelihood, which is

and can be viewed as a QMLE problem. The first order conditions are,

which is the OLS estimator.

(b)  Verify that b̃ minimizes the sum of squared residuals subject to Rb = c, so that it is the restricted LS estimator.

The restricted likelihood is very simple and takes the form,

where the Lagrange multiplier is rescaled by σ² for convenience and without loss of generality. The first order conditions are,

from which

Of course, in this example the constraint can be trivially imposed in the problem directly, so no constrained estimation would be required. The restricted least squares estimator would then be calculated by minimizing,

whose first order conditions can be shown to coincide with those calculated via restricted maximum likelihood.

(c)  What assumptions did you make on f(y1 | b, σ), assuming y0 is unobserved? Discuss the importance of any simplifying assumptions and the importance of the parameter b if its value were unrestricted.

We have been using the conditional likelihood for simplicity. However, if we had used the exact likelihood, the sensible assumption regarding f(y1 | b, σ) is that it is distributed N(0, σ²/(1 – b²)). Note, however, that the exact likelihood is only valid if |b| < 1; otherwise the log-likelihood becomes unbounded at b = 1.

(d)  Let

Show that

where and . Hint: show that and .

To show this, all you need is to compute the first order conditions of the likelihood and of the restricted likelihood with respect to σ². Substitution of into QT(θ) trivially delivers the two expressions for

(e) Verify that the given above, although not the same as , is consistent for –E[H(yt; θ0)]. Verify that , although not the same as , is consistent for –E[H(yt; θ0)]. Hint: You may assume that is consistent for θ0 and that is consistent for θ0 under the null. This should make proving consistency easier.

The easiest approach is to calculate the Hessian for the conditional and the restricted likelihoods. Using the hint, it then becomes trivial to show that both estimators are consistent for –E[H(yt; θ0)].

(f)  Show that the Wald, LM, and LR statistics, using and , can be written as

No secrets here, just direct application of the formulas.
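As a numerical illustration of the three statistics (a sketch, assuming the scalar restriction R = 1, c = 0.5 and simulating under the null; the formulas follow the conditional Gaussian likelihood used above):

```python
import numpy as np

rng = np.random.default_rng(2)
T, b0 = 1000, 0.5
e = rng.normal(size=T)
y = np.empty(T)
y[0] = e[0]                          # start-up value; negligible for large T
for t in range(1, T):
    y[t] = b0 * y[t - 1] + e[t]

x, z = y[:-1], y[1:]                 # lagged regressor and dependent variable
n = len(z)
bhat = (x @ z) / (x @ x)             # unrestricted (Q)MLE = OLS
s2hat = np.mean((z - bhat * x) ** 2)
btil = 0.5                           # restricted estimate: impose b = c directly
s2til = np.mean((z - btil * x) ** 2)

W  = n * (bhat - btil) ** 2 * (x @ x / n) / s2hat   # Wald: unrestricted variance
LM = n * (bhat - btil) ** 2 * (x @ x / n) / s2til   # LM: restricted variance
LR = n * (np.log(s2til) - np.log(s2hat))            # likelihood ratio
# Equivalent classical forms: W = n*(s2til - s2hat)/s2hat and
# LM = n*(s2til - s2hat)/s2til, since n*s2til = n*s2hat + (bhat - btil)^2 * (x @ x).
print(W, LM, LR)   # all asymptotically chi-squared(1) under the null
```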

(g) Show that these three statistics can also be written as

This involves simple algebraic manipulations.

Problem 7: Consider the stochastic process {xt} which describes the number of trades per interval of time of a particular stock. Thus, xt is integer-valued and non-negative. The Poisson distribution is commonly used to describe this type of process. Its density has the form:

f(xt | xt-1) = exp(–λt) λt^xt / xt!

with conditional mean

λt = exp(a + b xt-1)

Answer the following questions:

(a)  Under what parametric restrictions of a and b will {xt} be stationary?

This is a tricky question since we have a specification for the conditional mean directly rather than for {xt}. However, it can be shown that the model is explosive for b > 0.

(b)  Write down the log-likelihood function for this problem conditional on x1.

Conditional on x1, the log-likelihood is

L(a, b) = Σt=2..T [xt(a + b xt-1) – exp(a + b xt-1) – ln(xt!)].

(c)  Compute the first order conditions and obtain the estimators for a and b.

The first order conditions are Σt (xt – λt) = 0 and Σt (xt – λt)xt-1 = 0, with λt = exp(a + b xt-1). There is no closed form, so the estimators for a and b are obtained by solving these two equations numerically.

(d)  Suppose xt is not Poisson distributed. Are a and b consistently estimated by the conditional log-likelihood in (b)?

The Poisson distribution is an example of a linear exponential family whose QMLE properties have been established by Gourieroux, Monfort and Trognon (1984): as long as the conditional mean is correctly specified, a and b are consistently estimated by the conditional log-likelihood in (b) even if xt is not Poisson distributed. See Cameron and Trivedi's (1998) book for the references.

(e)  Compute the one-step-ahead forecast xt+1|t. Next, describe the Monte Carlo exercise that would allow you to compute the two-step-ahead forecast. Finally, describe what approach you would take if the data were not Poisson distributed but you wanted to produce multi-step-ahead forecasts with the conditional mean estimated in (b).

The one-step-ahead forecast xt+1|t is λt+1 = exp(a + bxt).

To compute the two-step-ahead forecast, note that we need to compute E[exp(a + bxt+1) | xt]. To do this, we will draw from a Poisson distribution with mean parameter λt+1 = exp(a + bxt). Suppose you draw n times from this Poisson; you will then have n values for xt+1, which you can plug into the expression of the conditional mean to compute the forecast as:

Computing the h-step-ahead forecast would consist of drawing n times from a Poisson distribution whose conditional mean is given by the (h – 1)-step-ahead forecast and calculating the average as is done above.

Note that as long as the conditional mean is correctly specified, multi-step ahead forecasts computed as described above would be consistent (albeit not efficient if the distribution is not Poisson).
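A sketch of the Monte Carlo forecast just described (the values of a, b, and xt are hypothetical placeholders for the estimates from (c) and the last observation):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 0.1, -0.2            # placeholders for the estimates from (c)
x_t = 4                     # placeholder for the last observed count
n = 10_000                  # number of Monte Carlo draws

lam1 = np.exp(a + b * x_t)                 # one-step-ahead forecast
draws = rng.poisson(lam=lam1, size=n)      # n draws of x_{t+1}
fcast2 = np.exp(a + b * draws).mean()      # average of the n conditional means

# Sanity check: for the Poisson, E[exp(b*X)] = exp(lam*(e^b - 1)), so the exact
# two-step forecast is available in closed form here.
print(fcast2, np.exp(a + lam1 * (np.exp(b) - 1.0)))
```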

Problem 8: Let {yt} be a stationary zero-mean time series. Define

(a)  Express the autocovariance functions for {xt} and {wt} in terms of the autocovariance function of {yt}. (Note: there is no assumption that {yt} is white noise, only that it is covariance stationary.)

Solution:

(b)  Show that {xt} and {wt} have the same autocorrelation functions.

Solution:

From (a),
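The definitions of {xt} and {wt} did not survive in this copy; a standard version of this exercise pairs an MA weight θ with its reciprocal, e.g. xt = yt + 0.4yt-1 and wt = yt + 2.5yt-1 (note 0.4 = 1/2.5, matching the 2.5 that appears in part (c)). Under that assumption, a quick simulation check of (b):

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(4)
y = rng.normal(size=200_000)     # any stationary zero-mean y works; white noise here
x = y[1:] + 0.4 * y[:-1]         # assumed definition of x_t
w = y[1:] + 2.5 * y[:-1]         # assumed definition of w_t

print(acf(x, nlags=5))
print(acf(w, nlags=5))           # the two ACFs agree up to sampling error
```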

(c)  Show that the process satisfies the difference equation

Solution:

Notice that (1 – 2.5L) is non-invertible. However

Hence,

Problem 9: Find the largest values of ρ1 and ρ2 for an MA(2) model.

Solution:

Notice that for an MA(2),

ρ1 = θ1(1 + θ2)/(1 + θ1² + θ2²) and ρ2 = θ2/(1 + θ1² + θ2²).

Maximizing ρ1 with respect to θ1 and θ2 delivers θ1 = √2 and θ2 = 1, so that the maximum first order autocorrelation is 1/√2 ≈ 0.707. Similarly, the maximum second order autocorrelation is achieved for θ1 = 0 and θ2 = 1, delivering a maximum second order autocorrelation of ½.
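A quick numerical confirmation of the maximization (scipy is assumed; the starting values are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

def rho1(theta):
    # First autocorrelation of an MA(2) with parameters theta = (theta1, theta2).
    t1, t2 = theta
    return (t1 + t1 * t2) / (1.0 + t1**2 + t2**2)

res = minimize(lambda th: -rho1(th), x0=np.array([1.0, 0.5]))
print(res.x, -res.fun)   # approx. (1.414, 1.0) and 0.707 = 1/sqrt(2)
```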

Problem 10: Consider an AR(1) process, such as

yt = c + φyt-1 + εt,

where the process started at t = 0 with the initial value y0. Suppose |φ| < 1 and that y0 is uncorrelated with the subsequent white noise process (ε1, ε2, …).

(a)  Write the variance of yt as a function of σ² (= V(ε)), V(y0), and φ. Hint: by successive substitution show

yt = c(1 + φ + … + φ^(t-1)) + φ^t y0 + εt + φεt-1 + … + φ^(t-1)ε1.

Solution:

Using the hint, and since y0 is uncorrelated with (ε1, …, εt),

V(yt) = φ^(2t) V(y0) + σ²(1 + φ² + … + φ^(2(t-1))) = φ^(2t) V(y0) + σ²(1 – φ^(2t))/(1 – φ²).
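A check of the closed form against the one-step variance recursion V(yt) = φ²V(yt-1) + σ² (the parameter values are arbitrary):

```python
import numpy as np

phi, sigma2, v0, T = 0.8, 1.0, 2.0, 50   # arbitrary illustration values
v = v0
for t in range(T):
    v = phi**2 * v + sigma2              # V(y_t) = phi^2 V(y_{t-1}) + sigma^2

closed = phi**(2 * T) * v0 + sigma2 * (1 - phi**(2 * T)) / (1 - phi**2)
print(v, closed)                         # the two coincide
```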

(b)  Let μ and {γj} be the mean and the autocovariances of the covariance stationary AR(1) process, so that

Show that:

(i)

(ii)

(iii)

Solution:
