EE 504 : Adaptive Signal Processing

Tutorial 2

KG/IITM Sept. 2007

Linear MMSE Theory

  1. A uniform, real, i.i.d. sequence {d(k)} with E[d^2(k)] = 1.0 is filtered by an LTI system H(z) = 1 - 0.5z^-1 + 0.25z^-3, and the resultant output is corrupted by a coloured noise sequence {v(k)} with variance σ_v^2 = 0.3 to finally yield the measurements {u(k)}. The noise samples {v(k)} are defined by the convolution of an AWGN sequence {n(k)} with the colouring filter C(z) = 1 + 0.8z^-1. Assume further that {d(k)} and {v(k)} are mutually uncorrelated. It is desired to find the Wiener filter w_MMSE of order M that will minimize E[e^2(k)], where e(k) = d(k-Δ) - y(k) and y(k) = w^T u(k).

(a) Find the values of the 3x3 auto-correlation matrix R_uu (i.e., for M = 3).

(b) Find the values of the 3x1 cross-correlation vector p_ud = E[u(k) d(k-Δ)] for (i) Δ = 0, (ii) Δ = 1, and (iii) Δ = 3.

(c) Now consider M = 2. What will be the resultant 2-tap linear MMSE estimator w_MMSE = [w0 w1]^T? Use Δ = 1.

(d) Obtain an expression for the minimum MSE J_min, which results from substituting the MMSE filter coefficients into J(M) = E[e^2(k)] for M = 2. Here J(M) denotes the cost attained by the Mth-order linear estimator. Compare the results for various choices of the "decoding delay" Δ. Which choice gives the lowest J_min? Why?
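The correlations in (a)-(c) can be cross-checked numerically. The sketch below follows the stated signal model; all variable names are my own, and the AWGN variance is chosen so that var(v) = 0.3.

```python
import numpy as np

# Numerical cross-check for Problem 1 (variable names are my own).
# Model: u(k) = (h * d)(k) + v(k), d(k) uniform i.i.d. with E[d^2] = 1,
# v(k) = AWGN n(k) passed through the colouring filter, with var(v) = 0.3.
h = np.array([1.0, -0.5, 0.0, 0.25])   # H(z) = 1 - 0.5 z^-1 + 0.25 z^-3
c = np.array([1.0, 0.8])               # colouring filter 1 + 0.8 z^-1
sigma_d2 = 1.0
sigma_n2 = 0.3 / np.dot(c, c)          # AWGN variance chosen so var(v) = 0.3

def fir_autocorr(taps, lag, sigma2):
    """r(lag) = sigma2 * sum_k taps[k] taps[k+|lag|] for a white-driven FIR filter."""
    lag = abs(lag)
    if lag >= len(taps):
        return 0.0
    return sigma2 * np.dot(taps[:len(taps) - lag], taps[lag:])

M = 3
Ruu = np.array([[fir_autocorr(h, i - j, sigma_d2) + fir_autocorr(c, i - j, sigma_n2)
                 for j in range(M)] for i in range(M)])

def p_ud(delta, M=3):
    """p[i] = E[u(k-i) d(k-delta)] = sigma_d^2 * h[delta-i] (zero outside the taps)."""
    return np.array([sigma_d2 * h[delta - i] if 0 <= delta - i < len(h) else 0.0
                     for i in range(M)])

w_mmse = np.linalg.solve(Ruu[:2, :2], p_ud(1, M=2))   # part (c): M = 2, delta = 1
```

Note that r_uu(0) = σ_d^2 (1 + 0.25 + 0.0625) + 0.3 = 1.6125, a useful sanity check on the hand calculation.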

  2. IIR Channel: Consider the linear MMSE estimation problem where the desired response is d(k) = I(k), and all signals are real. The sequence {I(k)} is passed through G(z), an infinite impulse response (IIR) transfer function given by G(z) = 1/(1 - 0.9z^-1), and the filter output is corrupted by additive noise {v(k)} to yield the measurements {u(k)}. Here {I(k)} and {v(k)} are mutually uncorrelated with σ_I^2 = 1.0.

(a) If the Wiener filter w_MMSE is to have order M = 2, find the Wiener solution w_MMSE = [w0 w1]^T for σ_v^2 = 0.4.

(b) If instead σ_v^2 = 0, what will be the new Wiener solution?
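The AR(1) structure of the G(z)-filtered sequence makes this problem easy to verify numerically. The sketch below assumes the measurement model u(k) = x(k) + v(k), with x(k) the filtered input:

```python
import numpy as np

# Sketch for Problem 2 (assumed model: u(k) = x(k) + v(k), x = G(z)-filtered I).
# x(k) is AR(1) with pole a = 0.9, so r_x(m) = sigma_I^2 * a^|m| / (1 - a^2).
a, sigma_I2 = 0.9, 1.0
r_x = lambda m: sigma_I2 * a ** abs(m) / (1 - a ** 2)

def wiener(sigma_v2, M=2):
    Ruu = (np.array([[r_x(i - j) for j in range(M)] for i in range(M)])
           + sigma_v2 * np.eye(M))
    # I(k) is white and x(k-i) depends only on I up to time k-i, so
    # p[i] = E[u(k-i) I(k)] = sigma_I^2 for i = 0 and 0 otherwise.
    p = np.zeros(M)
    p[0] = sigma_I2
    return np.linalg.solve(Ruu, p)
```

For σ_v^2 = 0 this returns [1, -0.9]^T, i.e. the exact inverse filter 1 - 0.9z^-1, which is what part (b) should reveal.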

  3. A WSS process {u(n)} is to be filtered by a 2-tap estimator w so as to minimize E[y^2(n)], where y(n) = w^T u(n), subject to the constraint w^T g = 1, where the "desired gain vector" is g = [1 -1]^T.

(a) Given R_uu, use the method of Lagrange multipliers to determine w. What is the value of the Lagrange multiplier λ?

(b) If instead the constraint is changed to w^T w = 1, can you specify (all) the solutions for w? What will be the new value(s) of the Lagrange multiplier?
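For part (a), minimizing w^T R w subject to w^T g = 1 via the Lagrangian L = w^T R w + λ(1 - w^T g) gives w = R^-1 g / (g^T R^-1 g). A sketch with a stand-in R, since the tutorial's numerical autocorrelation matrix is not reproduced here:

```python
import numpy as np

# Constrained minimum-variance solution, Problem 3(a).
# R below is a hypothetical stand-in autocorrelation matrix.
R = np.array([[2.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])              # desired gain vector
Ri_g = np.linalg.solve(R, g)
w = Ri_g / (g @ Ri_g)                  # w = R^-1 g / (g^T R^-1 g)
J_min = 1.0 / (g @ Ri_g)               # minimum output power E[y^2] = w^T R w
```

The identity J_min = 1/(g^T R^-1 g) is worth checking by hand: it follows from substituting the constrained optimum back into w^T R w.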

Steepest Descent Algorithm

  1. Given the auto-correlation values r(0) = 3.0 and r(1) = r(-1) = 1.0, and the cross-correlation vector p = [1 0]^T, the SDA is used with gain constant μ to estimate the Wiener solution iteratively.

(a) Starting with w(0) = [0 0]^T, perform the SDA iterations (hint: cleverly!) to find w(25).

(b) For w(0) = [0 0]^T, what value(s) of μ will give the overall fastest convergence of w(n) to w_MMSE?
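The iterations in (a) can be checked by running the SDA recursion directly; μ = 0.1 below is my own illustrative choice, since the tutorial's gain-constant value is not reproduced here:

```python
import numpy as np

# SDA recursion w(n+1) = w(n) + mu * (p - R w(n)); mu = 0.1 is illustrative.
R = np.array([[3.0, 1.0], [1.0, 3.0]])   # r(0) = 3, r(1) = r(-1) = 1
p = np.array([1.0, 0.0])
mu = 0.1
w = np.zeros(2)
for _ in range(25):
    w = w + mu * (p - R @ w)
w_mmse = np.linalg.solve(R, p)           # = [3/8, -1/8]^T
# Eigenvalues of R are 2 and 4, so the fastest overall convergence in (b)
# uses mu* = 2 / (lambda_min + lambda_max) = 1/3.
```

With μ = 0.1 the slowest mode decays as (1 - 0.1*2)^n = 0.8^n, so w(25) is already within about 10^-3 of w_MMSE.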
  2. From S. Haykin, "Adaptive Filter Theory," 4th Ed., Chapter 4, pp. 228-230, Pbms. # 1, 2, 3, 4, 5, 7, 8, 9, 10, 12, 13*, 15.

LMS Algorithm

  1. Starting from the weight update equation of the LMS algorithm from time n to n+1 (as discussed in class), show that the weight-error vector ε(n) = w(n) - w_MMSE evolves as ε(n+1) = [I - μ u(n) u^T(n)] ε(n) + μ u(n) e_o(n), where the estimation error produced by the optimum (MMSE) filter of order M is given by e_o(n) = d(n) - w_MMSE^T u(n).
  2. In a modified LMS algorithm, the gradient is defined from the instantaneous cost function

J(n) = e^2(n) + α w^T(n) w(n)        (1.1)

and error e(n) = d(n) - w^T(n) u(n).

(a) Specify the stochastic gradient descent based weight update equation for this "leaky" LMS algorithm (with gain constant taken to be μ).

(b) The desired signal is WSS and follows the regression model d(n) = w_o^T u(n) + v(n). Defining the weight error vector as ε(n) = w(n) - w_o and using the result in part (a), find E[ε(n+1)] and specify the "missing term" in the expression below

E[ε(n+1)] = [I - μ(R_uu + αI)] E[ε(n)] + (missing term)        (1.2)
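A quick simulation illustrates the steady-state bias that part (b)'s missing term produces; all numbers below are illustrative, not from the tutorial:

```python
import numpy as np

# Leaky LMS: gradient of J(n) = e^2(n) + alpha ||w(n)||^2 gives
# w(n+1) = (1 - mu*alpha) w(n) + mu * u(n) e(n).  Toy regression model
# d(n) = w_o^T u(n) + noise, with white Gaussian u(n) so Ruu = I.
rng = np.random.default_rng(0)
M, mu, alpha = 2, 0.05, 0.1
w_o = np.array([1.0, -0.5])           # hypothetical regression weights
w = np.zeros(M)
for _ in range(2000):
    u = rng.standard_normal(M)
    d = w_o @ u + 0.1 * rng.standard_normal()
    e = d - w @ u
    w = (1 - mu * alpha) * w + mu * u * e
# Leakage biases the steady-state mean toward the origin:
# E[w(inf)] = (Ruu + alpha I)^-1 Ruu w_o, here Ruu = I so E[w] = w_o / (1 + alpha).
```

The shrinkage of w relative to w_o is exactly the effect the missing term in (1.2) is responsible for.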

  3. Consider a channel equalization problem (inverse filtering), where an i.i.d., bipolar transmitted data sequence passes through a channel H(z) with frequency response as shown in the figure below. At the channel output, noise v(n) with variance σ_v^2 = 0.04 is added to finally yield the measurements u(n).

(a) Draw the power spectral density of u(n), and obtain an estimate of r(0) = E[|u(n)|^2].

(b) Give estimates of the maximum and minimum eigenvalues of the correlation matrix R_uu.

(c) When the (conventional) LMS algorithm is used on the linear equalizer of order M = 20, what value of the gain constant μ will result in only a 10% misadjustment?

(d) Obtain the maximum and minimum time-constants of the algorithm, and roughly plot a typical learning curve.
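Parts (c)-(d) follow from two standard LMS design formulas. In the sketch below, r0, lam_min, and lam_max are placeholders to be read off the channel figure; only the formulas are the point:

```python
# Back-of-the-envelope LMS design for Problem 3(c)-(d).
# r0, lam_min, lam_max are hypothetical stand-ins for values from the figure.
M_taps = 20
target_misadjustment = 0.10
r0 = 1.04                     # hypothetical r(0) = signal power + sigma_v^2
lam_min, lam_max = 0.2, 2.0   # hypothetical eigenvalue estimates of Ruu
# Misadjustment ~ (mu/2) tr(Ruu) = (mu/2) * M * r(0)  =>  solve for mu:
mu = 2 * target_misadjustment / (M_taps * r0)
# Mode time-constants of the mean learning curve: tau_k ~ 1 / (2 mu lam_k),
# so the slowest mode (lam_min) sets the overall convergence time.
tau_max = 1.0 / (2 * mu * lam_min)
tau_min = 1.0 / (2 * mu * lam_max)
```

The learning curve is then a sum of exponentials with time constants between tau_min and tau_max, settling at J_min(1 + misadjustment).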
  4. Consider the sign-data (or signed-regressor) LMS algorithm discussed in class. Using Price's theorem (see Papoulis, 1991), it can be shown that if x and y are a pair of zero-mean, jointly Gaussian random variables, then

E[x sgn(y)] = sqrt(2/π) · E[xy] / σ_y        (1.3)

Using this result, show the following:

E[sgn(u(n)) e(n)] = sqrt(2/π) · (1/σ_u) E[u(n) e(n)]  (element-wise)        (1.4)
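The Price's-theorem corollary in (1.3) is easy to confirm by Monte Carlo; the covariance values below are my own illustrative choices:

```python
import numpy as np

# Monte-Carlo check of (1.3): for zero-mean jointly Gaussian x, y,
# E[x sgn(y)] = sqrt(2/pi) * E[xy] / sigma_y.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.6], [0.6, 2.0]])   # illustrative joint covariance
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=400_000).T
lhs = np.mean(x * np.sign(y))
rhs = np.sqrt(2 / np.pi) * cov[0, 1] / np.sqrt(cov[1, 1])
```

With 4x10^5 samples the two sides agree to roughly the third decimal place, which is consistent with the O(1/sqrt(N)) Monte-Carlo error.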

  5. Show that a stochastic gradient descent rule for the Least Mean Absolute (LMA) estimator, which minimizes the cost function J(n) = E[|e(n)|], results in the sign-error LMS algorithm discussed in class.
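As a hint of where this derivation leads: the instantaneous gradient of |e(n)| with respect to w is -u(n) sgn(e(n)), giving the update w(n+1) = w(n) + μ u(n) sgn(e(n)). A toy run with my own numbers shows this update does converge:

```python
import numpy as np

# Sign-error LMS, the update the LMA derivation should produce:
# w(n+1) = w(n) + mu * u(n) * sgn(e(n)).  Noiseless toy system (my own numbers).
rng = np.random.default_rng(2)
w_o = np.array([0.8, -0.3])           # hypothetical true weights
w = np.zeros(2)
mu = 0.01
for _ in range(5000):
    u = rng.standard_normal(2)
    e = (w_o @ u) - (w @ u)           # desired minus filter output
    w = w + mu * u * np.sign(e)
```

Unlike conventional LMS, the step size here does not scale with the error magnitude, so the residual hovers at an O(μ) level around w_o.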
  6. From S. Haykin, "Adaptive Filter Theory," 4th Ed., Chapter 5, pp. 312-316, Pbms. # 1, 2, 3, 5, 6*, 7, 8*, 9, 10, 12, 13*, 14, 15, 16, and 19*.

K. Giridhar, Dept. of Electrical Engineering, IIT Madras, Sept.2007
