7.1.2013
Numerical Analysis
Module XVIII
Approximation by Least Squares Approach
The objectives of this module are to:
(a) learn to fit a polynomial to a set of data points by the least squares method;
(b) learn to distinguish between fitting to discrete data and fitting to a continuous function;
(c) study the two approaches to fitting a polynomial to a continuous function by the least squares method: one approach requires expanding the integrand [f(x) - P(x)]^2 before integrating, while the other proceeds by way of the normal equations.
18.1. Introduction:
Sometimes we are confronted with finding a function to represent a set of data points given for both the argument x and the value y. Often it is difficult to find such a function except by special techniques. One of the best-known methods of fitting a polynomial function to such a set of data is the least squares approach. The least squares approach is a technique developed to minimize the sum of squares of the errors made in fitting the unknown function.
Least squares approximation methods can be classified into two types, namely discrete least squares approximation and continuous least squares approximation. The first involves fitting a polynomial function to a set of data points by the least squares approach, while the latter involves fitting a polynomial function to a given continuous function. We shall see that there are two approaches to fitting a polynomial to a continuous function by the least squares method: one approach requires expanding the integrand [f(x) - P(x)]^2 before integrating, while the other proceeds by way of the normal equations.
18.2 Discrete Least Squares Approximation
The basic idea of least squares approximation is to fit a polynomial to a set of data points that are assumed to follow a theoretical relation
y = f(x).    … (1)
The aim is to minimize the sum of the squares of the errors. Suppose the set of data satisfying the theoretical relation (1) is (x_j, y_j), j = 1, 2, …, n. Our attempt is to fit a polynomial through these data points to approximate the theoretical solution (1). The polynomial of degree m fitted to these points will be denoted by P_m(x). If m = n - 1, the polynomial is identical with the Lagrangian interpolation polynomial; in this session we discuss the case m < n - 1. The curve or line P_m(x) fitted to the observations will be regarded as the best fit to f(x) if the differences between the observed values y_j and the fitted values P_m(x_j) are least. That is, the sum of the differences
Σ_j (y_j - P_m(x_j))
should be the minimum. The differences y_j - P_m(x_j) can be negative or positive, and when all of them are summed the total may add up to zero, which would not reflect the true error of the approximating polynomial. Thus, to estimate the error properly, the squares of these differences are more appropriate. In other words, we consider the sum of the squares of the deviations in order to obtain the best fitted curve. The required expression for the sum of squared errors is then written as
S = Σ_{j=1}^{n} [y_j - P_m(x_j)]^2    … (2)
which will be minimized, where P_m(x) is given by
P_m(x) = a_0 + a_1 x + a_2 x^2 + … + a_m x^m.    … (3)
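As a small computational illustration (not part of the original module), the error sum S in (2) can be evaluated for any trial set of coefficients. The sketch below is in Python and assumes NumPy is available; the arrays x, y and the coefficient list are placeholders to be supplied by the user.

    import numpy as np

    def sum_of_squares(coeffs, x, y):
        # coeffs = [a0, a1, ..., am]; evaluates P_m(x) = a0 + a1*x + ... + am*x^m
        p = np.polynomial.polynomial.polyval(x, coeffs)
        return np.sum((y - p) ** 2)          # the quantity S of equation (2)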
Fitting of Polynomials (Discrete Case)
We shall now derive, in discrete form, the formula that fits a set of data points by the least squares technique. The aim of the least squares method is to minimize the sum of squares of the errors.
To do this we begin by substituting equations (1) and (3) in equation (2); since y_j = f(x_j), this gives
S = Σ_{j=1}^{n} [y_j - (a_0 + a_1 x_j + a_2 x_j^2 + … + a_m x_j^m)]^2.    … (4)
To minimize S, we must differentiate S with respect to each of the unknowns a_0, a_1, …, a_m and equate the derivatives to zero. Hence, if we differentiate equation (4) partially with respect to a_i and equate each derivative to zero, we obtain
∂S/∂a_i = -2 Σ_j x_j^i [y_j - (a_0 + a_1 x_j + … + a_m x_j^m)] = 0,  i = 0, 1, …, m,
or
Σ_j x_j^i (a_0 + a_1 x_j + … + a_m x_j^m) = Σ_j x_j^i y_j,  i = 0, 1, …, m.
With the notations s_k = Σ_j x_j^k and v_k = Σ_j x_j^k y_j, we get the following system of equations:
s_0 a_0 + s_1 a_1 + … + s_m a_m = v_0
s_1 a_0 + s_2 a_1 + … + s_{m+1} a_m = v_1
…
s_m a_0 + s_{m+1} a_1 + … + s_{2m} a_m = v_m    … (5)
where s_0 = n. These equations are usually called the normal equations of the least squares method.
Solving the system (5) to determine a_0, a_1, …, a_m and substituting these values into equation (3) gives the best fitted curve to (1).
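For readers who wish to experiment, a minimal Python sketch (assuming NumPy; the function name is our own choice, not part of the module) that assembles and solves the normal equations (5) for a polynomial of degree m is given below.

    import numpy as np

    def least_squares_poly(x, y, m):
        # Fit P_m(x) = a0 + a1*x + ... + am*x^m by the normal equations (5).
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        # A[i][k] = s_(i+k) = sum of x_j^(i+k);  b[i] = v_i = sum of x_j^i * y_j
        A = np.array([[np.sum(x ** (i + k)) for k in range(m + 1)] for i in range(m + 1)])
        b = np.array([np.sum(x ** i * y) for i in range(m + 1)])
        return np.linalg.solve(A, b)          # coefficients a0, a1, ..., am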
Problem: By using the least squares approximation, fit
(i) a straight line y = a + bx, and (ii) a parabola y = a + bx + cx^2,
to the following data:
x / 1 / 2 / 3 / 4 / 5 / 6
y / 120 / 90 / 60 / 70 / 35 / 11
Which of these two approximations has the smaller sum of squared errors?
Solution:
(a) In order to fit a straight line to the set of data above, we assume an equation of the form y = a + bx.
Since tabulated values are given for n = 6 points, the normal equations necessary to determine the unknowns a and b can be obtained from equation (5) (with m = 1) as:
na + (Σ x_j) b = Σ y_j
(Σ x_j) a + (Σ x_j^2) b = Σ x_j y_j
where the summation over j runs from 0 to 5. Hence we shall need to construct columns for the values of x_j^2 and x_j y_j in addition to the x and y values already given. The following table shows the necessary columns:
j / x_j / y_j / x_j^2 / x_j y_j
0 / 1 / 120 / 1 / 120
1 / 2 / 90 / 4 / 180
2 / 3 / 60 / 9 / 180
3 / 4 / 70 / 16 / 280
4 / 5 / 35 / 25 / 175
5 / 6 / 11 / 36 / 66
Sum / 21 / 386 / 91 / 1001
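These column sums are easy to check by machine; a one-off Python check (assuming NumPy) is shown below.

    import numpy as np
    x = np.array([1, 2, 3, 4, 5, 6]); y = np.array([120, 90, 60, 70, 35, 11])
    print(x.sum(), y.sum(), (x**2).sum(), (x*y).sum())   # 21 386 91 1001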
Hence the system of equations becomes
6a + 21b = 386
21a + 91b = 1001
The solution of the above system of equations is found to be a = 134.33, b = -20.
Therefore the straight line fitted to the given data is y = 134.33 - 20x.
(b) We assume the equation of the parabola to be y = a + bx + cx^2.
Hence the required normal equations to determine the unknowns a, b and c are:
na + (Σ x_j) b + (Σ x_j^2) c = Σ y_j
(Σ x_j) a + (Σ x_j^2) b + (Σ x_j^3) c = Σ x_j y_j
(Σ x_j^2) a + (Σ x_j^3) b + (Σ x_j^4) c = Σ x_j^2 y_j
In addition to the above table, we construct the necessary columns for x_j^3, x_j^4 and x_j^2 y_j in the following table:
j / x_j^3 / x_j^4 / x_j^2 y_j
0 / 1 / 1 / 120
1 / 8 / 16 / 360
2 / 27 / 81 / 540
3 / 64 / 256 / 1120
4 / 125 / 625 / 875
5 / 216 / 1296 / 396
Sum / 441 / 2275 / 3411
Substituting into the normal equations, we have
6a + 21b + 91c = 386
21a + 91b + 441c = 1001
91a + 441b + 2275c = 3411
The solution of the above system of equations is found to be a = 136, b = -21.25, c = 0.1786 (approximately).
Hence the fitted curve is y = 136 - 21.25x + 0.1786x^2.
We can now consider the accuracy of the two fitted functions and decide which of the two approximations is better. To do this we substitute the tabulated values of x (j = 0, 1, …, 5) into each fitted function and evaluate the corresponding y values.
For example, when x = 1, the linear approximation gives y = 134.33 - 20 = 114.33, whereas for the same x = 1 the quadratic approximation gives y = 136 - 21.25 + 0.18 = 114.93 ≈ 114.9.
For the tabulated values of x we obtain the corresponding y values from both approximations. The squares of the errors are taken as (y - y(L))^2 for the linear fit and (y - y(Q))^2 for the quadratic fit. The table below describes the errors.
x / y / y(L) / y(Q) / (y - y(L))^2 / (y - y(Q))^2
1 / 120 / 114.33 / 114.9 / 32.1489 / 26.01
2 / 90 / 94.33 / 94.2 / 18.7489 / 17.64
3 / 60 / 74.3 / 73.9 / 204.49 / 193.21
4 / 70 / 54.3 / 53.9 / 246.49 / 259.21
5 / 35 / 34.3 / 34.2 / 0.49 / 0.64
6 / 11 / 14.3 / 14.9 / 10.89 / 15.21
Sum / / / / 513.2578 / 511.92
The sums of squares of the errors of the fits given by the linear function y(L) and the quadratic function y(Q) are shown in the table above. The sum of squares of errors of the quadratic fit (511.92) is slightly lower than that of the linear fit (513.26). Hence in this example the quadratic function is a marginally better approximation to the data, as is to be expected, since the family of quadratics contains every straight line as a special case.
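The two fits and their error sums can be checked with a few lines of Python (assuming NumPy; np.polyfit performs a polynomial least squares fit). The exact sums, 513.33 and 512.14, differ slightly from the table values because the table rounds the fitted y values before squaring.

    import numpy as np

    x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    y = np.array([120, 90, 60, 70, 35, 11], dtype=float)

    for deg in (1, 2):                           # straight line, then parabola
        c = np.polyfit(x, y, deg)                # coefficients, highest power first
        sse = np.sum((y - np.polyval(c, x)) ** 2)
        print(deg, np.round(c[::-1], 4), round(sse, 2))
    # degree 1: coefficients ≈ [134.3333, -20.0],        SSE ≈ 513.33
    # degree 2: coefficients ≈ [136.0, -21.25, 0.1786],  SSE ≈ 512.14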
Problem: Fit a function of the form to the following data by the method of least squares.
Solution:
The system of normal equations is
where
For j=0 to 8 the values are as in the following Table:
The calculation on these values gives;
Hence the system of normal equations becomes
Solving the above system gives the values of the three unknowns as -1.919, 0.2783 and 0.001739 respectively. Hence, by the least squares method, the function that fits the given data is obtained by substituting these values into the assumed form.
18.3 Least Squares Approximation (Continuous Case)
Now we shall consider the situation in which a continuous function f(x) is to be approximated, in the least squares sense, by a polynomial. In this case, since discrete data are no longer given, there is no need to build tables of columns; the required sums are replaced by integrals.
Fitting an Approximate Polynomial (Continuous Case)
If we wish to find a least squares approximation to a continuous function f(x), our approach in the discrete case must be modified, since the number of points at which the approximation is to be measured is now infinite (indeed uncountable). Therefore we cannot use a finite summation of squared differences; instead we integrate the squared difference between f(x), the continuous function, and P_k(x), the approximating polynomial function.
The Continuous Least Squares Problem
Let f(x) be a continuous function which, on the interval [a, b], is to be approximated by a polynomial
P_k(x) = a_0 + a_1 x + a_2 x^2 + … + a_k x^k.
The problem is to minimize
S = ∫_a^b [f(x) - P_k(x)]^2 dx.    … (6)
To do this, one of the following two methods is used to obtain the coefficients a_0, a_1, …, a_k of P_k(x).
First Method
The first approach to minimizing equation (6) is to expand the integrand [f(x) - P_k(x)]^2, carry out the integration, and then, by means of calculus, obtain the minimum by setting
∂S/∂a_i = 0,  i = 0, 1, …, k.
We illustrate this in the next example.
Problem: Find the least square straight line that provides the best fit to the curve over the interval
Solution:
Let the line be y = a + bx. We must minimize S = ∫ [f(x) - a - bx]^2 dx over the given interval.
Expanding the integrand, we obtain
Integrating, we obtain
Evaluating we get
For a minimum error, we must set
∂S/∂a = 0 and ∂S/∂b = 0.
Hence, we obtain
Solving, we get
Hence the least squares approximation is
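Because the particular curve and interval of this example are not reproduced above, here is a small symbolic sketch of the first method applied to a hypothetical example, f(x) = e^x on [0, 1] (our own choice, purely for illustration), using Python's SymPy library.

    import sympy as sp

    x, a, b = sp.symbols('x a b')
    f = sp.exp(x)                                     # hypothetical curve on [0, 1]
    S = sp.integrate((f - (a + b*x))**2, (x, 0, 1))   # expand-and-integrate form of (6)
    sol = sp.solve([sp.diff(S, a), sp.diff(S, b)], [a, b])
    print(sol)   # a = 4*E - 10 ≈ 0.8731, b = 18 - 6*E ≈ 1.6903, i.e. y ≈ 0.8731 + 1.6903 x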
This straight line is only a linear approximation to the given curve. We could equally have found other polynomial approximations using the same least squares technique. It will be observed that if P_k(x) is a polynomial of higher degree, say k = 3 or more, the expression [f(x) - P_k(x)]^2 may not be easy to expand before integrating, so we must seek another approach to the minimization; this leads to the following second method.
Second Method
Now, suppose we wish to find the least squares approximation to a continuous function y(x) over [a, b] using a polynomial of degree k. In such a case, we must minimize the integral
S = ∫_a^b [y(x) - (a_0 + a_1 x + a_2 x^2 + … + a_k x^k)]^2 dx.
If we do not want to expand the expression in the square brackets, then we must first find the normal equations. In other words, we derive the normal equations by obtaining ∂S/∂a_i for each i before integrating the resulting functions and evaluating the results.
Doing this we obtain
∂S/∂a_i = -2 ∫_a^b x^i [y(x) - (a_0 + a_1 x + … + a_k x^k)] dx = 0,  i = 0, 1, …, k.
The factor (-2) that appears in front of the integral can be ignored, since the right-hand side of each equation is zero. Hence the normal equations can be written as
∫_a^b x^i (a_0 + a_1 x + … + a_k x^k) dx = ∫_a^b x^i y(x) dx,  i = 0, 1, …, k.
This gives (k + 1) linear equations in the (k + 1) unknowns a_0, a_1, …, a_k, which can be solved simultaneously by any algebraic method.
This approach is often simpler than the first one, and we therefore recommend this second method; however, either of the two techniques may be used, and both are valid.
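A numerical sketch of the second method is given below in Python (assuming NumPy and SciPy; the function name continuous_least_squares is our own). The left-hand sides of the normal equations involve only the moments ∫_a^b x^(i+j) dx, which have a closed form, so only the right-hand sides need numerical quadrature.

    import numpy as np
    from scipy.integrate import quad

    def continuous_least_squares(f, a, b, k):
        # Coefficients a0, ..., ak of the degree-k least squares polynomial to f on [a, b].
        # G[i][j] = integral of x^(i+j) dx;  rhs[i] = integral of x^i * f(x) dx
        G = np.array([[(b**(i + j + 1) - a**(i + j + 1)) / (i + j + 1)
                       for j in range(k + 1)] for i in range(k + 1)])
        rhs = np.array([quad(lambda x, i=i: x**i * f(x), a, b)[0] for i in range(k + 1)])
        return np.linalg.solve(G, rhs)

    # e.g. the straight-line fit to the hypothetical curve e^x on [0, 1] used earlier:
    print(continuous_least_squares(np.exp, 0.0, 1.0, 1))   # ≈ [0.8731, 1.6903]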
Problem: Find the least squares quadratic which best fits the curve over the interval
Solution:
We need to minimize
By the alternative method, we shall first of all obtain the normal equations. Thus we have:
Integrating and evaluating within the limits, we obtain the following three equations:
Solving these equations simultaneously we obtain
Thus the least squares quadratic function is
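Since the curve and interval of this problem are likewise not reproduced above, the following SymPy sketch applies the second method to the same hypothetical example, fitting a quadratic to f(x) = e^x on [0, 1]; the curve is our own choice, for illustration only.

    import sympy as sp

    x, a0, a1, a2 = sp.symbols('x a0 a1 a2')
    f = sp.exp(x)                                  # hypothetical curve on [0, 1]
    P = a0 + a1*x + a2*x**2
    # normal equations: integral of x^i * P dx = integral of x^i * f dx, for i = 0, 1, 2
    eqs = [sp.integrate(x**i * (P - f), (x, 0, 1)) for i in range(3)]
    sol = sp.solve(eqs, [a0, a1, a2])
    print({s: float(v) for s, v in sol.items()})   # a0 ≈ 1.0130, a1 ≈ 0.8511, a2 ≈ 0.8392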
18.4. Assignments
1. The table below gives the readings from a laboratory experiment.
Time t / 2 / 3 / 5 / 6 / 9
Reading y / 8 / 16 / 51 / 82 / 153
Fit (a) a linear function and (b) a quadratic polynomial to the above data by the method of least squares, and determine which of the two is the better approximation.
2. How many normal equations will be needed to fit a cubic polynomial? Hence list all the column sums needed for such a fitting.
3. Find the least squares linear polynomial which best fits the curve over the interval
4. Find the least squares quadratic which best fits the curve over the interval
18.5. Quiz
1. The least squares approach is a technique which is developed to reduce the ------ in fitting the unknown function.
(i) sum of errors (ii) sum of squares of errors
(iii) sum of absolute values of errors (iv) none of these
2. The curve fitted to the observations will be regarded as the best fit if
(i) the sum of the errors is minimum
(ii) the sum of squares of the errors is maximum
(iii) the sum of squares of the errors is minimum
(iv) none of these
3. The number of normal equations needed to fit a curve with m parameters is
(i) m
(ii) 2m
(iii) m + 1
(iv) none of these
18.6. Glossary
1. Normal equations: To fit a curve to a given set of points with minimum sum of squares of errors, we solve a set of equations; the equations solved are called the normal equations. If E represents the sum of squares of errors while fitting a curve f(x) with parameters a and b to a given set of points, then ∂E/∂a = 0 and ∂E/∂b = 0 are the normal equations.
2. Least squares method: It is a method of fitting a curve to a given set of points so that the sum of squares of the errors is minimum.
18.7. FAQs
1. What is the method of least squares and why do we use it?
2. How do we perform the fitting of a curve using the method of least squares?
18.8. References:
- C. E. Froberg, Introduction to Numerical Analysis (Second Edn.), Addison-Wesley, 1979.
- James B. Scarborough, Numerical Mathematical Analysis, Oxford and IBH Publishing Co. Pvt. Ltd., 1966.
- M. K. Jain, S. R. K. Iyengar, R. K. Jain, Numerical Methods: Problems and Solutions, New Age International (P) Ltd., 1996.
- M. K. Jain, S. R. K. Iyengar, R. K. Jain, Numerical Methods for Scientific and Engineering Computation, New Age International (P) Ltd., 1999.
- S. S. Sastry, Introductory Methods of Numerical Analysis (Third Edn.), PHI India Pvt. Ltd., 1998.
- Kandasamy, Thilakavathy, Gunavathi, Numerical Methods (Third Edn.), S. Chand and Company Ltd., 2010.
- Arumugam, Thangapandy Issac, Somasundaram, Numerical Methods (Second Edn.), Scitech Publications (India) Pvt. Ltd., 2001.
18.9. Summary
Fitting a polynomial to discrete data by least squares methods is easily handled by creating tables of values and generating all necessary columns that will enable one to obtain the normal equations. The normal equations are then solved simultaneously to determine the unknowns which are then substituted into the required approximate polynomial equation. In this section we have also observed that there are two ways for fitting a polynomial to a continuous function f(x).
*************************
ANSWERS TO QUIZ AND FAQs
Answers to Quiz:
(1) (ii) sum of squares of errors
(2) (iii) the sum of squares of the errors is minimum
(3) (i) m
Answers to FAQs:
1. What is the method of least squares and why do we use it?
Ans: The least squares approach is a technique developed to minimize the sum of squares of the errors in fitting an unknown function. In some cases it may be difficult to find a function representing a given set of points except by special techniques; in such a situation one of the well-known methods of fitting a polynomial function to the data is the least squares approach.
2. How do we perform the fitting of a curve using the method of least squares?
Ans: To fit a curve of a chosen form to a given set of points, the problem is to estimate the parameters involved in that form. The method of least squares finds the parameter values which minimize the total sum of squares of the errors arising in the fitting. An expression for the total sum of squares of the errors is derived; using differential calculus, the normal equations are formed and solved, giving the best estimates of the parameters and hence the curve best fitted to the given data points.
*******************************
Prepared by: Dr. Aneesh Kumar K.
Content edited by: Dr. Bijumon R.