EXPERIMENTS WITH GAUSSIAN QUADRATURE IN 1 AND 2 VARIABLES.

CLARA ENUMA, ELECTRICAL ENGINEERING MAJOR AND

KIMBERLY LINTON, COMPUTER SCIENCE & MATH. MAJOR

SUNY NEW PALTZ

SPRING / SUMMER 2005.

MENTOR: DR. L. FIALKOW

Introduction

This project concerns the approximation of integrals using Gaussian Quadrature. We first studied Gaussian quadrature on an interval [a,b], based on a program created by R. Curto and L. Fialkow [2] and implemented in Mathematica. We also compared the Gaussian approximations to estimates using Simpson's rule and to estimates from Mathematica's NIntegrate. We next studied iterated Gaussian quadrature over a rectangular region [a,b] x [c,d], and then studied 2-dimensional iterated Gaussian quadrature over certain non-rectangular regions. Finally, we studied the problem of finding the fewest points in a cubature rule, using the moment matrix method of [5].

Acknowledgement

A big thanks goes to Deb Gould, Sarah Browne, and Roderick Woods, the coordinators of the Alliance for Minority Participation (AMP) / C-STEP / CSEMS programs at SUNY New Paltz, who paired us with our mentor, Dr. L. Fialkow. Dr. Fialkow began this project with us in the spring semester of 2005 and continued with us in summer session 1 of 2005.

Table Of Contents.

  1. Introduction
  2. Acknowledgement
  3. Numerical integration using Gaussian Quadrature in one variable
  4. Creation of tables
  5. Experiments in 1-dimensional numerical integration
  6. 2-dimensional iterated Gaussian Quadrature over a rectangular region
  7. Experiments with 2-dimensional iterated Gaussian Quadrature over a rectangular region
  8. 2-dimensional Gaussian Quadrature over a non-rectangular region
  9. Experiments with 2-dimensional iterated Gaussian Quadrature over a non-rectangular region
  10. The matrix extension method for constructing quadrature rules with the fewest points
  11. References
Numerical Integration:

Numerical integration is the approximation of an integral using numerical techniques; Gaussian Quadrature and Simpson’s Rule are the methods of numerical integration we studied in this project. We implemented these methods using the software tool Mathematica, where numerical integration is also implemented by a proprietary method called NIntegrate.

The Fundamental Theorem of Calculus states that the integral of a continuous function f(x) over an interval [a,b] is given by $\int_a^b f(x)\,dx = F(b) - F(a)$, where F(x) is an antiderivative of f(x), i.e., F'(x) = f(x) [6]. Take as an example

$\int_a^b \cos x\,dx = \sin b - \sin a$, where f(x) = Cos(x) and F(x) = Sin(x). In case we cannot guess an antiderivative F(x), we seek to approximate $\int_a^b f(x)\,dx$ using some numerical method.

Gaussian Quadrature: In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration [4]:

$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} w_i f(x_i).$$

Let n > 0 and suppose the quadrature rule has points x_0, ..., x_n in an interval [a,b] and positive weights w_0, ..., w_n. In the Gaussian Quadrature rule Q ≡ Q_n of size n + 1 (i.e., with n + 1 points), we require that if p(x) is any polynomial with deg p ≤ 2n + 1, then

$$\int_a^b p(x)\,dx = Q[p] = \sum_{i=0}^{n} w_i p(x_i) = w_0 p(x_0) + w_1 p(x_1) + \cdots + w_n p(x_n)$$

[4]. It turns out that n + 1 points are the fewest that will give this precision. Also, in Gaussian Quadrature, the quadrature points are distributed symmetrically about the midpoint of the interval; for n = 2, for example, the picture shows three points: the midpoint of [a,b] and one point on either side of it, placed symmetrically.

For any continuous function f(x) defined on [a,b], we now define the Gaussian Quadrature estimate

$$Q_n(f) = Q(f) \equiv \sum_{i=0}^{n} w_i f(x_i) = w_0 f(x_0) + w_1 f(x_1) + \cdots + w_n f(x_n),$$

and we will approximate $\int_a^b f(x)\,dx$ by Q(f). We can motivate this approximation as follows.

Suppose we are given a continuous function f(x) on a fixed interval [a,b], and let ε > 0. By Weierstrass' approximation theorem, there is a polynomial p(x) that differs from f(x) by less than ε in absolute value, i.e., |f(x) - p(x)| < ε for every x in [a,b].

It then follows that

$$\left|\int_a^b f(x)\,dx - \int_a^b p(x)\,dx\right| = \left|\int_a^b \big(f(x) - p(x)\big)\,dx\right| \le \int_a^b |f(x) - p(x)|\,dx \le \varepsilon\,(b - a),$$

so if ε is small, we get $\int f \approx \int p$. Since $\int p = Q(p)$ provided that deg p ≤ 2n + 1, we can estimate $\int_a^b f(x)\,dx$ by $Q(f) \equiv \sum_{i=0}^{n} w_i f(x_i)$. More precisely, we have $\int f \approx \int p = Q(p) \approx Q(f)$. To justify the last approximation:

$$|Q(p) - Q(f)| = \Big|\sum_{i=0}^{n} w_i p(x_i) - \sum_{i=0}^{n} w_i f(x_i)\Big| = \Big|\sum_{i=0}^{n} w_i \big(p(x_i) - f(x_i)\big)\Big| \le \sum_{i=0}^{n} w_i\,|p(x_i) - f(x_i)| \le \varepsilon \sum_{i=0}^{n} w_i = \varepsilon\,(b - a)$$

(since $\sum w_i = \sum w_i x_i^0 = \int_a^b x^0\,dx = b - a$).

For a given small ε > 0, deg p might be very large, so we might not have deg p ≤ 2n + 1, and therefore we might only have $\int p \approx Q(p)$. Alternatively, in our experiments, we computed Q_n for larger and larger values of n until we obtained a good approximation, $\int_a^b f(x)\,dx \approx Q_n(f)$.
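
To make this concrete, here is the smallest case, n = 0, worked out (our added example, not from the report): exactness on the polynomials 1 and x forces

$$w_0 = \int_a^b 1\,dx = b - a, \qquad w_0 x_0 = \int_a^b x\,dx = \frac{b^2 - a^2}{2}, \quad\text{so}\quad x_0 = \frac{a + b}{2},$$

and $Q_0(f) = (b - a)\,f\big(\tfrac{a+b}{2}\big)$ is the midpoint rule, exact for every polynomial of degree ≤ 2(0) + 1 = 1.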

Another numerical method is Simpson's Rule [4, Ch. 5], [6], which can be defined as follows.

Simpson's Rule: If f(x) is a continuous function, we can approximate $\int_a^b f(x)\,dx$ by applying Simpson's Rule in the following way. Let n > 0, set h = (b - a)/n, and let x_i = a + ih (0 ≤ i ≤ n). Consider the following Riemann sums that could be used to approximate $\int_a^b f(x)\,dx$.

The left endpoint estimate gives: $L_n = h \sum_{i=0}^{n-1} f(x_i)$.

The right endpoint estimate gives: $R_n = h \sum_{i=1}^{n} f(x_i)$.

The midpoint estimate gives: $M_n = h \sum_{i=1}^{n} f\big(\tfrac{x_{i-1} + x_i}{2}\big)$.

The trapezoid estimate gives the average of the left endpoint estimate (L_n) and the right endpoint estimate (R_n), i.e.,

T_n = (L_n + R_n)/2.

Now the parabolic estimate (Simpson's Rule) combines the midpoint and trapezoid estimates, and can be written as: S_n = (2 M_n + T_n)/3.

In our experiments we compared S_n(f) and Q_n(f) to see which method approximates $\int_a^b f(x)\,dx$ more quickly as n gets larger and larger.
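
For reference, here is a minimal Mathematica sketch of Simpson's Rule as defined above (our code, not part of the program of [2]; like that program, it assumes the function f has been predefined):

Simpson[n_,a_,b_]:=Module[{h=(b-a)/n, Ln, Rn, Mn, Tn},
  Ln=h Sum[f[a+i h],{i,0,n-1}];        (* left endpoint estimate *)
  Rn=h Sum[f[a+i h],{i,1,n}];          (* right endpoint estimate *)
  Mn=h Sum[f[a+(i-1/2) h],{i,1,n}];    (* midpoint estimate *)
  Tn=(Ln+Rn)/2;                        (* trapezoid estimate *)
  N[(2 Mn+Tn)/3]]                      (* parabolic (Simpson) estimate *)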

Analysis Of The Gaussian Program:

We used an implementation of Gaussian Quadrature based on [2], [5]. In this program, given an integer n > 0 and an interval [a,b], we compute points x_0, ..., x_n in the interval and positive weights w_0, ..., w_n which implement Gaussian Quadrature: if p(x) is a polynomial with deg p ≤ 2n + 1, then

$$\int_a^b p(x)\,dx = \sum_{i=0}^{n} w_i p(x_i).$$

Likewise, for any continuous non-polynomial function f(x), $\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} w_i f(x_i)$.

We illustrate the validity of the method of [2], [5] by proving its correctness for n = 2. We must find n + 1 = 3 points x_0, x_1, x_2 in the interval [a,b] and positive weights w_0, w_1, w_2 such that if

$$\beta_j = \int_a^b x^j\,dx = \frac{x^{j+1}}{j+1}\Big|_a^b = \frac{b^{j+1} - a^{j+1}}{j+1} \quad (j = 0, 1, 2, 3, 4, 5),$$

then $\beta_j = \sum_{i=0}^{2} w_i x_i^j$ (0 ≤ j ≤ 5). In detail, we must solve the following system (*) for w_i > 0 and x_i ∈ [a,b] (0 ≤ i ≤ 2):

$$\begin{aligned}
\beta_0 &= w_0 + w_1 + w_2\\
\beta_1 &= w_0 x_0 + w_1 x_1 + w_2 x_2\\
\beta_2 &= w_0 x_0^2 + w_1 x_1^2 + w_2 x_2^2\\
\beta_3 &= w_0 x_0^3 + w_1 x_1^3 + w_2 x_2^3\\
\beta_4 &= w_0 x_0^4 + w_1 x_1^4 + w_2 x_2^4\\
\beta_5 &= w_0 x_0^5 + w_1 x_1^5 + w_2 x_2^5.
\end{aligned} \tag{*}$$

Step 1: The first step is to find a matrix H which "represents" integration of any polynomial p with deg p ≤ 2. Each polynomial p with deg p ≤ 2 is of the form p(t) = d + et + ft^2; let $\hat p \equiv (d, e, f)^T$ (T for transpose) be the vector of coefficients of p. We want to find a matrix H ≡ H_2 (with 3 rows and columns) such that

$$\langle H\hat p, \hat q\rangle = \int_a^b p(t)\,q(t)\,dt \quad \text{(for all polynomials } p, q \text{ with } \deg p, \deg q \le 2\text{)}.$$

Therefore, if H = (H_{ij}), 0 ≤ i, j ≤ 2, then $H_{ij} = \langle H\widehat{t^j}, \widehat{t^i}\rangle = \int_a^b t^{i+j}\,dt = \beta_{i+j}$.

For example, H_00 = β_0, H_01 = H_10 = β_1, H_02 = H_11 = H_20 = β_2, H_12 = H_21 = β_3, and H_22 = β_4. So we get

$$H \equiv H_2 = \begin{pmatrix} \beta_0 & \beta_1 & \beta_2\\ \beta_1 & \beta_2 & \beta_3\\ \beta_2 & \beta_3 & \beta_4 \end{pmatrix}$$

(H is a Hankel matrix, constant on cross-diagonals).

Let us denote the first column vector of H by t_0 = (β_0, β_1, β_2)^T, the second column vector by t_1 = (β_1, β_2, β_3)^T, and the third column vector by t_2 = (β_2, β_3, β_4)^T. We also have the vector t_3 = (β_3, β_4, β_5)^T.

Step 2: The next step is to find a vector c = (c_0, c_1, c_2)^T such that

$$Hc = t_3. \tag{**}$$

Since Determinant[H] = (b - a)^9/2160 (≠ 0, since a < b), H is invertible. If J = H^{-1}, then c = Jt_3, and we can write (**) in terms of the column vectors as

$$c_0 t_0 + c_1 t_1 + c_2 t_2 = t_3,$$

where c_0 = (1/20)(a + b)(a^2 + 8ab + b^2), c_1 = -(3/5)(a^2 + 3ab + b^2), and c_2 = 3(a + b)/2.

Consider the polynomial

$$f(t) = t^3 - (c_0 + c_1 t + c_2 t^2). \tag{***}$$

In Mathematica, we can find the roots of f(t) = 0 by using Solve[f(t) == 0, t] or NSolve[f(t) == 0, t]. It turns out that f(t) = 0 has exactly 3 distinct roots:

$$x_0 = \frac{a + b}{2}, \qquad x_1 = \frac{5(a + b) - \sqrt{15}\,(b - a)}{10}, \qquad x_2 = \frac{5(a + b) + \sqrt{15}\,(b - a)}{10}.$$

We can show that x_0, x_1, x_2 are in [a,b] as follows. Since x_0 = (a + b)/2 is the average of a and b, it must lie between a and b. For x_1, note that $\sqrt{a^2 - 2ab + b^2} = \sqrt{(b - a)^2} = b - a$, since b > a. Therefore

$$x_1 - a = \tfrac{1}{10}\big({-5a} + 5b - \sqrt{15}\,(b - a)\big) = \tfrac{1}{10}\big(5 - \sqrt{15}\big)(b - a) > 0$$

and

$$b - x_1 = \tfrac{1}{10}\big({-5a} + 5b + \sqrt{15}\,(b - a)\big) = \tfrac{1}{10}\big(5 + \sqrt{15}\big)(b - a) > 0.$$

So x_1 ∈ [a,b], and similarly for x_2.

Step 3: The next step is to define the Vandermonde matrix, which is the following (n + 1) x (n + 1) matrix:

$$V = \begin{pmatrix} 1 & 1 & 1\\ x_0 & x_1 & x_2\\ x_0^2 & x_1^2 & x_2^2 \end{pmatrix}.$$

Now Det V = (x_1 - x_0)(x_2 - x_0)(x_2 - x_1) (the Vandermonde determinant). Since $\sqrt{a^2 - 2ab + b^2} = b - a > 0$, the three points x_0, x_1, x_2 are distinct, so det V ≠ 0, and therefore V is invertible. Also, let w = (w_0, w_1, w_2)^T and t_0 = (β_0, β_1, β_2)^T.

We next solve the equation Vw = t_0, which written out is

$$\begin{aligned}
w_0 + w_1 + w_2 &= \beta_0\\
w_0 x_0 + w_1 x_1 + w_2 x_2 &= \beta_1\\
w_0 x_0^2 + w_1 x_1^2 + w_2 x_2^2 &= \beta_2.
\end{aligned}$$

This equation is equivalent to the first three equations of the system (*). Since V is invertible, w = V^{-1} t_0 ≡ (w_0, w_1, w_2)^T; carrying out the computation gives w_0 = 4(b - a)/9 and w_1 = w_2 = 5(b - a)/18, and we see that each weight is a positive number (since a < b). So these weights and points automatically solve the first n + 1 (= 3) equations of (*) above.

Now we look at the last three equations of (*):

$$\begin{aligned}
\beta_3 &= w_0 x_0^3 + w_1 x_1^3 + w_2 x_2^3\\
\beta_4 &= w_0 x_0^4 + w_1 x_1^4 + w_2 x_2^4\\
\beta_5 &= w_0 x_0^5 + w_1 x_1^5 + w_2 x_2^5.
\end{aligned}$$

From (), we have that

3 = c00 + c11 + c22

= c0[w0 + w1 + w2] + c1[w0x0 + w1x1 + w2x2] + c2[w0x02 + w1x12 + w2x22] (from the first three equations of ())

= (c0w0 + c1w0x0 + c2w0x02) + (c0w1 + c1w1x1 + c2w1x12) +

(c0w2 + c1w2x2 + c2w2x22)

= w0(c0 + c1x0 + c2x02) + w1(c0 +c1x1 + c2x12) + w2(c0 + c1x2 + c2x22)

= w0x03 + w1x13 + w2x23 (because f(xi) = 0, i = 0, 1, 2).

This completes the proof of the fourth equation in (). This same procedure can be repeated to prove the last two equations in ():

4 = c01 + c12 + c23(from )

= c0[w0x0 + w1x1 + w2x2] + c1[w0x02 + w1x12 + w2x22] + c2[w0x03 + w1x13 + w2x23]

(from equations 2, 3, 4 of ())

= (c0w0x0 + c0w1x1 + c0w2x2) + (c1w0x02 + c1w1x12 + c1w2x22) +

(c2w0x03 + c2w1x13 + c2w2x23)

= w0(c0x0 + c1x02 + c2x03) + w1(c0x1 + c1x12 + c2x13) + w2(c0x2 + c1x22 + c2x23)

= w0x04 + w1x14 + w2x24. (because () implies that x0, x1, x2 satisfy

t4 = c0t + c1t2 + c2t3).

Likewise,

$$\begin{aligned}
\beta_5 &= c_0\beta_2 + c_1\beta_3 + c_2\beta_4 \quad\text{(from (**))}\\
&= c_0[w_0 x_0^2 + w_1 x_1^2 + w_2 x_2^2] + c_1[w_0 x_0^3 + w_1 x_1^3 + w_2 x_2^3] + c_2[w_0 x_0^4 + w_1 x_1^4 + w_2 x_2^4] \quad\text{(from equations 3, 4, 5 of (*))}\\
&= (c_0 w_0 x_0^2 + c_0 w_1 x_1^2 + c_0 w_2 x_2^2) + (c_1 w_0 x_0^3 + c_1 w_1 x_1^3 + c_1 w_2 x_2^3) + (c_2 w_0 x_0^4 + c_2 w_1 x_1^4 + c_2 w_2 x_2^4)\\
&= w_0(c_0 x_0^2 + c_1 x_0^3 + c_2 x_0^4) + w_1(c_0 x_1^2 + c_1 x_1^3 + c_2 x_1^4) + w_2(c_0 x_2^2 + c_1 x_2^3 + c_2 x_2^4)\\
&= w_0 x_0^5 + w_1 x_1^5 + w_2 x_2^5
\end{aligned}$$

(because (***) implies that x_0, x_1, x_2 satisfy t^5 = c_0 t^2 + c_1 t^3 + c_2 t^4).
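
As a numerical double-check of the whole n = 2 construction (our illustration, again on [0,1], where β_j = 1/(j+1)), the points from Step 2 and the weights from Step 3 reproduce all six moments:

(* the 3-point rule on [0,1] integrates t^j exactly for j = 0,...,5 *)
{x0,x1,x2}={1/2, 1/2-Sqrt[15]/10, 1/2+Sqrt[15]/10};
{w0,w1,w2}={4/9, 5/18, 5/18};
Simplify[Table[w0 x0^j+w1 x1^j+w2 x2^j-1/(j+1),{j,0,5}]]
(* returns {0,0,0,0,0,0} *)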

This is the program we adopted, created by R. Curto and L. Fialkow:

(* Compute the densities and nodes
   for Gaussian quadrature
   of degree (precision) 2n+1 over [a,b] (where a < b, as shown above);
   generates n+1 nodes and densities.
   Based on R. Curto and L. Fialkow
   [Houston J. Math. 17 (1991), 603-635],
   MR 93a:47016 *)

Gaussian[n_,a_,b_]:=(
  For[i=0, i<=2n+1, i++,
    bb[i]=(b^(i+1)-a^(i+1))/(i+1)];            (* the moments bb[i] = integral of x^i over [a,b] *)
  For[i=0, i<=n, i=i+1,
    For[j=0, j<=n, j=j+1,
      Mtab[i,j]=bb[i+j]]];
  H=Table[Mtab[r,s],{r,0,n},{s,0,n}];          (* H is the Hankel matrix *)
  J=N[Inverse[H],100];                         (* 100 digits of precision *)
  For[i=0, i<=n, i=i+1,
    v[i]=bb[n+1+i]];
  w=Table[v[i],{i,0,n}];
  z=J.w;                                       (* z is c = J t3, as found in the proof above *)
  p=0;
  For[i=0, i<=n, i=i+1,
    p=p+z[[i+1]] t^i];
  q=t^(n+1)-p;                                 (* q is the polynomial f(t) from Step 2 above *)
  roots=Solve[q==0,t];                         (* n+1 distinct roots *)
  For[i=1, i<=n+1, i=i+1,
    x[i-1]=roots[[i]][[1]][[2]]];              (* the n+1 points for Gaussian quadrature *)
  For[i=0, i<=n, i=i+1,
    For[j=0, j<=n, j=j+1,
      TTab[i,j]=x[j]^i]];
  V=Table[TTab[rr,ss],{rr,0,n},{ss,0,n}];      (* V is the Vandermonde matrix *)
  For[i=0, i<=n, i=i+1,
    vvv[i]=bb[i]];
  w=Table[vvv[i],{i,0,n}];
  weights=N[Inverse[V].w,100]);                (* the Gaussian quadrature weights, to 100 digits *)

(* Gaussian quadrature estimate of
   degree 2n+1 for the integral of a
   predefined function f[x]
   over [a,b]; uses the results of the
   previous call to Gaussian *)

G[j_]:=(                                       (* the argument j is ignored *)
  ss=0;
  For[i=0, i<=n, i++,
    ss=ss+weights[[i+1]] f[x[i]]];             (* this is the calculation of Q(f) *)
  ss=N[ss]);
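
A typical call looks like this (our usage example, with the function and interval of Experiment (1) below):

f[x_]:=3 x^6+5 x^3-2 x+1;
Gaussian[3,1,2];      (* nodes and weights for n = 3 on [1,2] *)
G[0]                  (* the Gaussian estimate Q(f); the argument is ignored *)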

By default, Mathematica works with about 6 digits of displayed precision, which gave us inaccurate results in some of our calculations, especially when computing the inverses of the matrices H and V. For this reason, we increased the working precision to 100 digits, after which the program performed much better.

Creation Of Tables:

In the following experiments, we compared Gaussian Quadrature to Simpson's Rule for several functions, each time comparing the results to the value computed by Mathematica's NIntegrate, which we took as the exact value. We also measured the accuracy of our results by calculating the absolute error as well as the percent relative error. The absolute error in any estimation is the difference between the measured value of a quantity, say x_0, and its actual value x, given by Δx = |x_0 - x|. The relative error is often a more useful way to determine the accuracy of a measurement; it is the quotient of the absolute error and the actual value, i.e., δx = Δx/|x|. In our case we used |Q(f) - NIntegrate[f]| for the absolute error, and then computed the percent relative error as

$$\frac{|Q(f) - \mathrm{NIntegrate}[f]|}{|\mathrm{NIntegrate}[f]|} \times 100.$$

We then replaced Q(f) with S(f) so that we could determine the accuracy achieved by Simpson's Rule as well.

Experiments In 1-Dimensional Numerical Integration:

The functions we worked on are as follows:

(1) f(x) = 3x^6 + 5x^3 - 2x + 1, a = 1, b = 2.

a. For this polynomial of degree 6 ≤ 2(3) + 1 = 7, we expect Gaussian quadrature to be exact for n = 3.

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 71.1786 | 70.6111 | 72.0310 | 0.797290 | 1.197550
 2 | 71.1786 | 71.1775 | 71.2390 | 0.001545 | 0.076290
 3 | 71.1786 | 71.1786 | 71.1890 | 0 | 0.015030
 4 | 71.1786 | 71.1786 | 71.1820 | 0 | 0.004780
 5 | 71.1786 | 71.1786 | 71.1800 | 0 | 0.001970
10 | 71.1786 | 71.1786 | 71.1789 | 0 | 0.000843

For this interval, Gaussian is first exact when n = 3; Simpson was not exact until n = 11. Also, the Gaussian relative error first fell below 1% at n = 1, while Simpson's did so at n = 2.

b. We changed the interval to a = -8 and b = 0, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 893731 | 752257 | 1109060 | 15.8296 | 24.0933
 2 | 893731 | 891484 | 909384 | 0.25142 | 1.75140
 3 | 893731 | 893731 | 896904 | 0 | 0.35503
 4 | 893731 | 893731 | 894744 | 0 | 0.11335
 5 | 893731 | 893731 | 894148 | 0 | 0.04666
10 | 893731 | 893731 | 893758 | 0 | 0.00302

For this interval, Gaussian is first exact when n = 3; Simpson was not exact until n = 44. Simpson's relative error first fell below 1% at n = 3. In general, increasing the length of the interval causes Simpson's Rule to need a larger value of n for the same accuracy.

c. We tried yet another interval, this time a = -10 and b = 10, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 8.57145*10^6 | 2.22224*10^6 | 2*10^7 | 74.07393 | 133.333
 2 | 8.57145*10^6 | Error | 1.0625*10^7 | - | 23.9580
 3 | 8.57145*10^6 | 8.57145*10^6 | 9.02608*10^6 | 0 | 5.30400
 4 | 8.57145*10^6 | Error | 8.72072*10^6 | - | 1.74148
 5 | 8.57145*10^6 | 8.57145*10^6 | 8.63362*10^6 | 0 | 0.72531
10 | 8.57145*10^6 | Error | 8.57542*10^6 | - | 0.04632

For Gaussian, we can see that there was an error for every even n because of the indeterminate expression 0^0. The Gaussian points are distributed symmetrically about the midpoint (a + b)/2, so when n is even, the midpoint is itself a Gaussian point. For a = -10 and b = 10, the midpoint is 0. When the program computes x[j]^i for x[j] = 0 and i = 0, it is trying to compute 0^0, which Mathematica cannot evaluate. We could have modified the code to cover this special case, as sketched below. Gaussian was exact at n = 3 and Simpson at n = 40; the large interval length causes the delay for Simpson.
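
One possible fix (ours; the program above leaves this case unhandled) is to special-case the zero exponent when the Vandermonde matrix is built:

(* in Gaussian, replace TTab[i,j]=x[j]^i with: *)
TTab[i,j]=If[i==0, 1, x[j]^i];   (* takes x^0 to be 1 even when x is 0 *)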

The next function we experimented with was

(2) f(x) = (3x^2 + 4)/(2x^2 - 1),

which is defined on the three intervals (-∞, -Sqrt(1/2)), (-Sqrt(1/2), Sqrt(1/2)), and (Sqrt(1/2), ∞).

f(x) is undefined when 2x^2 - 1 = 0; this occurs when 2x^2 = 1, i.e., x^2 = 1/2, so x cannot equal ±Sqrt(1/2) ≈ ±0.707.

a. We begin with a = 2, b = 4, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 56.5398 | 56.5351 | 56.5472 | 0.008313 | 0.01309
 2 | 56.5398 | 56.5395 | 56.5405 | 0.000531 | 0.00124
 3 | 56.5398 | 56.5398 | 56.54 | 0 | 0.00036
 4 | 56.5398 | 56.5398 | 56.5398 | 0 | 0
 5 | 56.5398 | 56.5398 | 56.5398 | 0 | 0
10 | 56.5398 | 56.5398 | 56.5398 | 0 | 0

Here, Gaussian reached the exact value at n = 3 and Simpson at n = 4; both were under 1% at n = 1. This reflects the fact that the curve is fairly smooth on this interval.

b. We next tried the interval a = -0.7 and b = 0.7, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | -14.2714 | -7.63083 | -96.3807 | 46.53061 | 575.942
 2 | -14.2714 | Error | -51.8588 | - | 263.376
 3 | -14.2714 | -10.5608 | -37.4755 | 26.00025 | 162.592
 4 | -14.2714 | Error | -30.4984 | - | 113.703
 5 | -14.2714 | -12.1113 | -26.4349 | 15.13587 | 85.2299
10 | -14.2714 | Error | -18.8630 | - | 32.1734

For Gaussian, it took n = 19 to reach the exact value, and for every even n there was an error caused by the indeterminate expression 0^0 (see the comment in (1c) above). Simpson reached the exact value at n = 662. Both methods have difficulty with a function that bends sharply in the interval.

(3) f(x) = Sin(x)

a. We first tried the interval a = 0 and b = 4, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 1.65364 | 1.47012 | 1.92026 | 11.09794 | 16.1232
 2 | 1.65364 | 1.66018 | 1.66405 | 0.395490 | 0.62952
 3 | 1.65364 | 1.65352 | 1.65560 | 0.007257 | 0.11853
 4 | 1.65364 | 1.65365 | 1.65424 | 0.000600 | 0.03628
 5 | 1.65364 | 1.65364 | 1.65388 | 0 | 0.01451
10 | 1.65364 | 1.65364 | 1.65366 | 0 | 0.00121

Gaussian was exact at n = 5 and Simpson at n = 20; both reached under 1% at n = 2. Although there is bending, it is not as severe as in Example 2b, so convergence is moderate, neither fast nor very slow.

b. With this third function, we tried another interval, a = 5 and b = 9.

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 1.19479 | 1.06220 | 1.38743 | 11.09735 | 16.1233
 2 | 1.19479 | 1.19951 | 1.20231 | 0.395050 | 0.62940
 3 | 1.19479 | 1.19470 | 1.19618 | 0.007533 | 0.11634
 4 | 1.19479 | 1.19480 | 1.19522 | 0.000840 | 0.03599
 5 | 1.19479 | 1.19479 | 1.19497 | 0 | 0.01507
10 | 1.19479 | 1.19479 | 1.19480 | 0 | 0.00084

Gaussian was exact at n = 5 and Simpson at n = 15; both relative errors were under 1% at n = 2. The only difference from the previous example is that the interval is shifted, but since Sin(x) is periodic there was no major change in the performance of either method.

c. The last interval we tried for this function was a = -2 and b = 1, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | -0.956449 | -0.931801 | -0.992764 | 2.577032 | 3.79686
 2 | -0.956449 | -0.956935 | -0.958250 | 0.050810 | 0.18830
 3 | -0.956449 | -0.956444 | -0.956791 | 0.000523 | 0.03576
 4 | -0.956449 | -0.956449 | -0.956556 | 0 | 0.01119
 5 | -0.956449 | -0.956449 | -0.956493 | 0 | 0.00460
10 | -0.956449 | -0.956449 | -0.956452 | 0 | 0.00031

Gaussian was exact at n = 4 and Simpson at n = 17; both relative errors were under 1% at n = 2. Overall, for this function, the relatively slow bending and small changes in the function values allow both methods to work well even over large intervals.

(4) f(x) = Sin(5x)

a. The first interval we considered was a = -4 and b = 5, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | -0.116624 | 4.909280 | 2.022890 | 4309.494 | 1834.54
 2 | -0.116624 | 2.840070 | 1.017510 | 2535.236 | 972.4705
 3 | -0.116624 | 2.333700 | -0.729409 | 2101.046 | 525.4360
 4 | -0.116624 | 3.892490 | 0.997792 | 3437.641 | 955.5632
 5 | -0.116624 | -0.262303 | 0.320191 | 124.9130 | 374.5498
10 | -0.116624 | 2.157990 | -0.154216 | 1950.382 | 32.23350

Due to the rapid oscillation of the graph, we can expect both methods to be slow, so we kept trying larger and larger values of n. Gaussian was not exact until n = 20; Simpson did not reach the exact value until n = 150.

b. The second interval we tried for this function was a = 0 and b = 1, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 0.143268 | 0.0760515 | 0.239161 | 46.91662 | 66.9326
 2 | 0.143268 | 0.1470920 | 0.145643 | 2.669120 | 1.65773
 3 | 0.143268 | 0.1431550 | 0.143686 | 0.078873 | 0.29176
 4 | 0.143268 | 0.1432700 | 0.143395 | 0.001400 | 0.08865
 5 | 0.143268 | 0.1432680 | 0.143319 | 0 | 0.03560
10 | 0.143268 | 0.1432680 | 0.143271 | 0 | 0.00209

On this shorter interval, with fewer oscillations, we expect faster convergence. Gaussian reached the exact value at n = 5 and Simpson at n = 14. Both reached under 1% at n = 3.

c. The third interval we tried was a = -1 and b = -0.5, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 0.216961 | 0.214533 | 0.220558 | 1.119095 | 1.65790
 2 | 0.216961 | 0.216994 | 0.217154 | 0.01521 | 0.08896
 3 | 0.216961 | 0.216961 | 0.216998 | 0 | 0.01705
 4 | 0.216961 | 0.216961 | 0.216973 | 0 | 0.00553
 5 | 0.216961 | 0.216961 | 0.216966 | 0 | 0.00230
10 | 0.216961 | 0.216961 | 0.216962 | 0 | 0

Here the graph is even smoother than before because the interval is much smaller. Gaussian was exact at n = 3 and Simpson at n = 10. Both were under 1% at n = 2.

(5) f(x) = Sqrt(x)

a. We started with the interval a = 10 and b = 20, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 38.5466 | 38.5484 | 38.5439 | 0.00467 | 0.007005
 2 | 38.5466 | 38.5467 | 38.5464 | 0.00026 | 0.000519
 3 | 38.5466 | 38.5466 | 38.5466 | 0 | 0
 4 | 38.5466 | 38.5466 | 38.5466 | 0 | 0
 5 | 38.5466 | 38.5466 | 38.5466 | 0 | 0
10 | 38.5466 | 38.5466 | 38.5466 | 0 | 0

On this interval the graph grows slowly, since the derivative (1/2)x^(-1/2) is small; as x gets larger and larger, x^(-1/2) approaches zero. So we expect rapid convergence. Here both Gaussian and Simpson reached the exact value at n = 3, and both were under 1% at n = 1.

b. The next interval considered was a = 0 and b = 10, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 21.0819 | 21.3102 | 20.1776 | 1.08292 | 4.289462
 2 | 21.0819 | 21.1613 | 20.7612 | 0.37663 | 1.521210
 3 | 21.0819 | 21.1186 | 20.9072 | 0.17408 | 0.828673
 4 | 21.0819 | 21.1018 | 20.9684 | 0.09439 | 0.538377
 5 | 21.0819 | 21.0939 | 21.0007 | 0.05692 | 0.385165
10 | 21.0819 | 21.0840 | 21.0531 | 0.00996 | 0.136610

Here the interval includes x = 0, where the graph has a vertical tangent and the function values change very rapidly, so we expect slower convergence. Gaussian did not reach the exact value until n = 57; Simpson did not reach it until n = 10000.

c. The last interval considered was a = 0 and b = 1, producing the following table:

 n | NIntegrate | Gaussian | Simpson's | Gaussian rel. err. (%) | Simpson's rel. err. (%)
 1 | 0.666667 | 0.673887 | 0.638071 | 1.08300 | 4.289398
 2 | 0.666667 | 0.669180 | 0.656526 | 0.37695 | 1.521149
 3 | 0.666667 | 0.667828 | 0.661144 | 0.17415 | 0.828450
 4 | 0.666667 | 0.667297 | 0.663079 | 0.09450 | 0.538200
 5 | 0.666667 | 0.667046 | 0.664100 | 0.05685 | 0.385050
10 | 0.666667 | 0.666750 | 0.665759 | 0.01245 | 0.136200

Although the interval is now much smaller, it still contains the region of rapid change near x = 0, so we cannot expect rapid convergence. Gaussian did not reach the exact value until n = 49, while Simpson reached it at n = 850.

Conclusion for the experiments comparing Gaussian Quadrature and Simpson's Rule.

In these examples, Gaussian Quadrature always produced the exact value more quickly than Simpson's Rule. We also observed that both methods converge more slowly on intervals where the function changes rapidly (sharp bending, fast oscillation, or the vertical tangent of Sqrt(x) at the origin), and that enlarging the interval forces Simpson's Rule, in particular, to use a much larger n for the same accuracy.
2-Dimensional Iterated Gaussian Quadrature Over A Rectangular Region.

In this section we consider two-dimensional iterated Gaussian Quadrature. Recall that the one-dimensional Gaussian Quadrature rule, with n + 1 points x_i in [a,b] and weights w_i > 0, is given by Q(f) = w_0 f(x_0) + w_1 f(x_1) + ... + w_n f(x_n), and is exact if f is any polynomial with deg f ≤ 2n + 1, i.e., Q(f) = $\int_a^b f(x)\,dx$. Now consider a function f(x,y) defined on a rectangular region R = [a,b] x [c,d].

Example: If f(x,y) = 3x^3y^2 + 4xy - 7x^3y + 2, then the total degree of f is 5. Also, the highest power of x in f is 3 (deg_x f = 3), and the highest power of y in f is 2 (deg_y f = 2).

From calculus, we know that

$$\iint_R f(x,y)\,dx\,dy = \int_c^d\left(\int_a^b f(x,y)\,dx\right)dy = \int_c^d F(y)\,dy, \quad \text{where } F(y) = \int_a^b f(x,y)\,dx.$$

In order to approximate this integral with 2-dimensional iterated Gaussian Quadrature, we apply Gaussian Quadrature first to the interval [a,b], and get n + 1 points x_0, x_1, ..., x_n in [a,b] and positive weights w_0, w_1, ..., w_n so that if g(x) is a polynomial of degree ≤ 2n + 1, then $\int_a^b g(x)\,dx = \sum_{i=0}^{n} w_i g(x_i)$. We then apply Gaussian Quadrature to the interval [c,d], and get n + 1 points y_0, ..., y_n in [c,d] and positive weights v_0, ..., v_n so that if h(y) is any polynomial of degree ≤ 2n + 1, then $\int_c^d h(y)\,dy = \sum_{j=0}^{n} v_j h(y_j)$. We now consider the (n + 1)^2 grid points

(x_0, y_0), (x_0, y_1), ..., (x_0, y_n),
(x_1, y_0), (x_1, y_1), ..., (x_1, y_n),
...
(x_n, y_0), (x_n, y_1), ..., (x_n, y_n).

For 0 ≤ i, j ≤ n, we let u_ij = w_i v_j > 0. We now define iterated Gaussian Quadrature Q(·) as follows: for a function f(x,y) defined on the rectangle [a,b] x [c,d],

$$Q(f) = \sum_{j=0}^{n}\sum_{i=0}^{n} u_{ij}\,f(x_i, y_j).$$

We will show that if f(x,y) is a polynomial with deg_x f ≤ 2n + 1 and deg_y f ≤ 2n + 1, then Q(f) = ∬_R f(x,y) dx dy. Indeed, writing g_j(x) ≡ f(x, y_j) and h(y) ≡ $\int_a^b f(x,y)\,dx$, we have

$$\begin{aligned}
Q(f) &= \sum_{j=0}^{n}\sum_{i=0}^{n} w_i v_j\,f(x_i, y_j)\\
&= \sum_{j=0}^{n} v_j\left(\sum_{i=0}^{n} w_i\,g_j(x_i)\right)\\
&= \sum_{j=0}^{n} v_j \int_a^b g_j(x)\,dx \quad\text{(by Gaussian Quadrature on } [a,b] \text{ for } g_j(x), \text{ since } \deg g_j \le 2n + 1\text{)}\\
&= \sum_{j=0}^{n} v_j\,h(y_j)\\
&= \int_c^d h(y)\,dy \quad\text{(by Gaussian Quadrature on } [c,d] \text{ for } h(y), \text{ since } \deg_y h \le 2n + 1\text{)}\\
&= \int_c^d\!\int_a^b f(x,y)\,dx\,dy.
\end{aligned}$$

Therefore Q(f) = ∬_R f(x,y) dx dy.

For an arbitrary continuous function on R, we can approximate ∬_R f(x,y) dx dy ≈ Q(f).

For n = 2, the picture shows the (2 + 1)^2 = 9 grid points (x_i, y_j) in the rectangle.

We used the program for one-dimensional Gaussian Quadrature together with the program below to implement 2-dimensional iterated Gaussian Quadrature over a rectangular region. The addition to the program is as follows:

(* Iterated Gaussian quadrature estimate of
   degree 2n+1 for the integral of a predefined
   function f[x,y] over the rectangle [a,b] x [c,d];
   exact if f is a polynomial whose degrees in x and y
   are each <= 2n+1; the value is stored in II *)

G2dim[n_,a_,b_,c_,d_]:=(
  Gaussian[n,a,b];                             (* Gaussian quadrature in the x-direction, for [a,b] *)
  For[i=0, i<=n, i++,
    xx[i]=x[i];
    xweight[i]=weights[[i+1]]];
  Gaussian[n,c,d];                             (* Gaussian quadrature in the y-direction, for [c,d] *)
  For[i=0, i<=n, i++,
    yy[i]=x[i];
    yweight[i]=weights[[i+1]]];
  II=0;
  For[j=0, j<=n, j++,
    For[i=0, i<=n, i++,
      II=II+xweight[j] yweight[i] f[xx[j],yy[i]]]]);  (* the 2-D iterated Gaussian estimate *)
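
A typical call looks like this (our usage example, matching Experiment (1) below; since deg_x f = 7 ≤ 2(3) + 1 and deg_y f = 7 ≤ 2(3) + 1, the n = 3 result should be exact):

f[x_,y_]:=6 x^7 y^4-2 x y^7+3 x^3 y^3;
G2dim[3,1,2,0,3];     (* n = 3 over the rectangle [1,2] x [0,3] *)
II                    (* the iterated Gaussian estimate *)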

Below are the various experiments we performed with iterated 2-dimensional Gaussian Quadrature.
Experiments With 2-D Gaussian Quadrature Over A Rectangular Region:

(1) f(x,y) := 6x^7y^4 - 2xy^7 + 3x^3y^3

a. For polynomials, we will verify that the program does give exact results if deg_x f ≤ 2n + 1 and deg_y f ≤ 2n + 1. The first rectangle we tried for this function is a = 1, b = 2, c = 0, and d = 3, producing the following table: