Finding All Zeros of a Polynomial of Degree n Using Evolutionary Algorithms

Peyman Zarrineh2, Caro Lucas1,3, and Babak N. Araabi1,3

1 Control and Intelligent Processing Center of Excellence,
Electrical and Computer Eng. Dept., University of Tehran, Tehran, Iran
2 Mathematics and Computer Science Dept., University of Tehran, Tehran, Iran
3 School of Cognitive Sciences, Institute for Studies on Theoretical Physics and Mathematics, Tehran, Iran

Abstract:- Evolutionary algorithms are known as general optimization problem solvers. They can be used to solve numerical problems directly, or they can be combined with other numerical methods. Finding the zeros of functions, especially polynomial functions, is an important problem. In this paper polynomial functions are introduced and some intelligent approaches that have been used to find polynomial roots are reviewed. Evolutionary algorithms are then used to find the zeros of a set of polynomials, and the method is combined with Newton's method; the resulting hybrid inherits its speed from Newton's method but does not suffer from Newton's method's problems.

Key-Words:- Evolutionary algorithms, Polynomials, Newton's method, Finding zeros of polynomials, Numerical methods, Intelligent approaches.

1 Introduction

Finding a solution for many social, economic, and scientific problems leads to equations of the form f(x) = 0.

In particular, determining the zeros of a polynomial

p(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 = 0,    (1)

where each a_i is a real or complex coefficient, has attracted great attention in mathematics. It can be shown that there is no general closed-form solution for the roots of polynomials of degree five or higher (Galois' result [5]).

So the use of numerical methods such as fixed-point iteration or Newton's method is inevitable.

Although all of the derivatives of a polynomial are continuous and can be evaluated easily, approximating the zeros of a polynomial, even one of degree at most 20, can be very difficult, especially when there are multiple roots [8][10].

2 Problem Statement

Some well-known methods for approximating the zeros of a polynomial of the form (1), where each a_i is a real number, are fixed-point iteration, Newton's method, and bisection [2][8][10].

2.1 Fixed-Point Iteration

These methods rewrite f(x) = 0 as x = g(x). Starting from an initial guess x_0, the sequence is defined by x_(k+1) = g(x_k). In this method the convergence of the sequence should be checked [2][10].
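For illustration only, a minimal sketch of such an iteration in Python is given below; the rewritten function g, the starting point, and the tolerance are illustrative choices rather than values used in our experiments.

def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until two successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example: f(x) = x^2 - 2 = 0 rewritten as x = g(x) = (x + 2/x) / 2
print(fixed_point(lambda x: (x + 2.0 / x) / 2.0, x0=1.0))  # approximately 1.41421356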

2.2 Newton's Method

The fixed-point method with g(x) = x - f(x)/f'(x) is called Newton's method. There are also some special-purpose methods for polynomials. The first problem with numerical methods is their convergence. Another problem is that they cannot find exact solutions, so finding one of the roots does not lead directly to the others: the degree of a polynomial can be reduced by factoring out one of its zeros, but the number used as the root must be exact, and because numerical methods cannot find exact solutions they cannot be used reliably in this deflation process [2][10].
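Similarly, a minimal sketch of Newton's method for a polynomial given by its coefficient list is shown below; the example polynomial and starting point are again only illustrative.

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] by Horner's rule."""
    value = 0.0
    for a in coeffs:
        value = value * x + a
    return value

def poly_derivative(coeffs):
    """Coefficients of the derivative of [a_n, ..., a_1, a_0]."""
    n = len(coeffs) - 1
    return [a * (n - i) for i, a in enumerate(coeffs[:-1])]

def newton(coeffs, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    deriv = poly_derivative(coeffs)
    x = x0
    for _ in range(max_iter):
        dfx = poly_eval(deriv, x)
        if dfx == 0.0:
            break                      # Newton step undefined at this point
        x_next = x - poly_eval(coeffs, x) / dfx
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3; starting from 3.5 it converges to 3
print(newton([1.0, -6.0, 11.0, -6.0], x0=3.5))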

3 Related Work

Optimization is one of the main applications of evolutionary algorithms. Any optimization problem can be converted into the problem of finding the roots of a function, and conversely a root-finding problem can be posed as an optimization problem.

Genetic algorithms have been successfully used for solving optimization problems. As an example, the main purpose of [1] is wire-antenna design, in which a design equation is minimized using genetic algorithms.

In [6] a two-layer feed-forward neural network finds the complex roots of a polynomial, and in [7] this network is extended to find complex roots that lie close together.

4 Methodology

Genetic algorithms are known as general-purpose search methods [3][4][9], so they can be used to find the zeros of a polynomial of degree n by searching over the real numbers. In our simulation each gene has n chromosomes, where n is the polynomial degree, and each chromosome represents one root of the polynomial. In each generation there are m genes, where m is an arbitrary constant that is fixed throughout the simulation. The data type used for the genotype is a binary code and for the phenotype is a real number [4][9].
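The details of the binary encoding are not spelled out above, so the following sketch only illustrates one common choice: each chromosome is a fixed-length bit string mapped to a real number in a search interval [lo, hi]; the bit length and the interval bounds are assumptions made for this example.

import random

def decode_chromosome(bits, lo=-50.0, hi=50.0):
    """Map a binary chromosome (a list of 0/1 bits) to a real number in [lo, hi]."""
    as_int = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * as_int / (2 ** len(bits) - 1)

def decode_gene(gene, lo=-50.0, hi=50.0):
    """A gene is a list of n chromosomes; its phenotype is a list of n candidate roots."""
    return [decode_chromosome(c, lo, hi) for c in gene]

# Example: a random gene for a cubic (n = 3) made of three 16-bit chromosomes
gene = [[random.randint(0, 1) for _ in range(16)] for _ in range(3)]
print(decode_gene(gene))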

For finding all zeros of the function f, two functions, e1 and e2, should be minimized:

(2)

(3)

Selection among the genes is done by the tournament method [4][9], and the fitness of each gene is determined from e1 and e2; a gene with a lower fitness value is the better one. This fitness function has been found by trial and error.
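Since the exact definitions of e1, e2, and their combination were obtained by trial and error, the sketch below uses assumed forms chosen purely for illustration: e1 as the sum of the residuals |f(x_i)| over the candidate roots, and e2 as a pairwise closeness penalty whose denominators also explain the division-by-zero issue discussed in Section 5. These formulas, the equal weighting, and the guard term eps are illustrative assumptions, not the exact functions used in the experiments.

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] by Horner's rule."""
    value = 0.0
    for a in coeffs:
        value = value * x + a
    return value

def e1(coeffs, roots):
    """Assumed form: total residual of the candidate roots."""
    return sum(abs(poly_eval(coeffs, x)) for x in roots)

def e2(roots, eps=1e-12):
    """Assumed form: penalty that blows up when two candidate roots coincide."""
    return sum(1.0 / (abs(roots[i] - roots[j]) + eps)
               for i in range(len(roots))
               for j in range(i + 1, len(roots)))

def fitness(coeffs, roots, w=1.0):
    """Assumed combination of e1 and e2; a lower value means a better gene."""
    return e1(coeffs, roots) + w * e2(roots)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3): the exact roots make e1 vanish
print(fitness([1.0, -6.0, 11.0, -6.0], [1.0, 2.0, 3.0]))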

Two kinds of simulation were done. In the first, in each generation each gene is mutated once to make a new gene, and then selection is performed over the new genes and the previous genes. This process is run for 10000 generations. In the first generation, mutation occurs with probability 35% on each bit; as new generations are created, the mutation probability is decreased, reaching 5% in the last generation. The whole process is repeated 30 times to see whether better answers can be achieved.
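A compact, self-contained sketch of this first simulation is given below. The chromosome length, search interval, population size, and the assumed fitness are illustrative placeholders; the mutation probability is decreased linearly from 35% to 5% over the generations, and plain truncation selection is used here instead of tournament selection for brevity.

import random

BITS, LO, HI = 16, -50.0, 50.0          # assumed chromosome length and search interval

def decode(bits):
    """Binary genotype -> real phenotype in [LO, HI]."""
    return LO + (HI - LO) * int("".join(map(str, bits)), 2) / (2 ** BITS - 1)

def poly_eval(coeffs, x):
    value = 0.0
    for a in coeffs:
        value = value * x + a
    return value

def fitness(coeffs, gene):
    """Assumed fitness: residuals plus a penalty for coincident roots (lower is better)."""
    roots = [decode(c) for c in gene]
    res = sum(abs(poly_eval(coeffs, x)) for x in roots)
    pen = sum(1.0 / (abs(a - b) + 1e-12)
              for i, a in enumerate(roots) for b in roots[i + 1:])
    return res + pen

def mutate(gene, p):
    """Flip every bit of every chromosome independently with probability p."""
    return [[1 - b if random.random() < p else b for b in chrom] for chrom in gene]

def ga_only(coeffs, n_roots, m=40, generations=10000, p_start=0.35, p_end=0.05):
    pop = [[[random.randint(0, 1) for _ in range(BITS)] for _ in range(n_roots)]
           for _ in range(m)]
    for g in range(generations):
        p = p_start + (p_end - p_start) * g / (generations - 1)   # 35% down to 5%
        offspring = [mutate(gene, p) for gene in pop]             # one mutant per gene
        # Selection over previous and new genes; truncation to the m best replaces
        # the tournament selection described above, for brevity.
        pop = sorted(pop + offspring, key=lambda gene: fitness(coeffs, gene))[:m]
    return sorted(decode(c) for c in pop[0])

# (x - 1)(x - 2)(x - 3): the result should approach the roots 1, 2, 3
print(ga_only([1.0, -6.0, 11.0, -6.0], n_roots=3, generations=2000))  # shortened demo run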

The second simulation combines mutation with Newton's method. To make new genes, in each generation each gene is mutated once, as before, and then Newton's method is applied to each chromosome of each gene. Each chromosome of a gene is a candidate root of the function, and Newton's method helps it converge. Newton's method is run for 20 iterations, and then selection is performed over the new genes and the previous genes. This process is run for 1000 generations.
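The second simulation changes only how offspring are produced: after the usual bit-flip mutation, each chromosome is polished with up to 20 Newton steps before selection. The sketch below shows this offspring step under the same illustrative encoding as before; writing the refined real value back into the binary genotype by quantization is an assumed detail, chosen here only to keep the sketch complete.

import random

BITS, LO, HI = 16, -50.0, 50.0          # same illustrative encoding as before

def poly_eval(coeffs, x):
    value = 0.0
    for a in coeffs:
        value = value * x + a
    return value

def poly_derivative(coeffs):
    n = len(coeffs) - 1
    return [a * (n - i) for i, a in enumerate(coeffs[:-1])]

def decode(bits):
    return LO + (HI - LO) * int("".join(map(str, bits)), 2) / (2 ** BITS - 1)

def encode(x):
    """Quantize a real value back onto the binary genotype (an assumed detail)."""
    x = min(max(x, LO), HI)
    k = round((x - LO) / (HI - LO) * (2 ** BITS - 1))
    return [int(b) for b in format(k, "0{}b".format(BITS))]

def newton_polish(coeffs, x, steps=20):
    """Apply up to 20 Newton steps to one candidate root."""
    deriv = poly_derivative(coeffs)
    for _ in range(steps):
        dfx = poly_eval(deriv, x)
        if dfx == 0.0:
            break                      # leave this chromosome for mutation to rescue
        x = x - poly_eval(coeffs, x) / dfx
    return x

def hybrid_offspring(coeffs, gene, p):
    """Bit-flip mutation followed by Newton polishing of every chromosome."""
    mutated = [[1 - b if random.random() < p else b for b in chrom] for chrom in gene]
    return [encode(newton_polish(coeffs, decode(chrom))) for chrom in mutated]

# Polish one random gene for (x - 1)(x - 2)(x - 3); the decoded children should
# each sit near one of the roots 1, 2, 3
gene = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(3)]
child = hybrid_offspring([1.0, -6.0, 11.0, -6.0], gene, p=0.35)
print([round(decode(c), 3) for c in child])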

5 Simulation Results

Table 1 shows the polynomials that have been used as benchmarks. The first experiment was the simulation with the genetic method alone. The results were quite satisfactory: after 10000 generations the answers became very close to the exact answers. When the degree of the polynomial was lower than six, the chromosomes converged to n-1 of the n possible answers, because two chromosomes converged to one common answer. When the degree of the polynomial was between six and nine, generally n-2 of the n possible answers were found. Repeating the process for 300000 generations (Figure 1) gave answers only slightly better than those obtained with 10000 generations (Figure 3).

In the second experiment, combining the genetic method with Newton's method produced excellent answers. In practice, the mutations of the genetic method solved the convergence problem of Newton's method. When the degree of the polynomial was lower than seven, all of the answers were found, and when it was between seven and nine, at least n-1 answers were found.

Comparing Figure 2 with Figure 1, we observe that combining Newton's method with the genetic method solves the problem better than the genetic method alone.

Although repeated zeros may cause a division-by-zero problem in the expression e2, this did not cause many problems in the simulations, because in those cases two different but very close numbers were obtained, which together represented the repeated root.

6 Conclusion

In this paper we used genetic algorithms to solve polynomial equations. The genetic algorithms found all zeros of a polynomial by searching over the space R^n. This kind of search can be used to solve various equations. Moreover, combining genetic algorithms with Newton's method solves the convergence problem of Newton's method; this shows the ability of genetic algorithms to form hybrid methods in combination with other techniques.

7 Future Work

We used genetic methods to find all real zeros of polynomials. They can also be used to find all complex zeros of polynomials. Furthermore, by combining Newton's method with genetic methods we can find the zeros of other functions as well. To achieve this goal we can use the fact that if the actual number of zeros of a function is less than the number of solutions we are searching for, the error grows rapidly and the fitness becomes bad. So we can assume several candidate values for the number of zeros, test each one, and select the largest value that still gives a good fitness.

References:

[1]  Altshuler E.E., Linden D.S., Wire-Antenna Designs Using Genetic Algorithms, IEEE Antennas and Propagation Magazine, Vol. 39, No. 2, April 1997.

[2]  Brent R.P., Quasi-Newton Methods and Their Application to Function Minimization, Math. Comput., Vol. 19, pp. 577-593, 1967.

[3]  Davis L., Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.

[4]  Goldberg D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.

[5]  Herstein I.N., Topics in Algebra, Macmillan Publishing Company, 1990.

[6]  Huang D.S., Finding Roots of Polynomials Based on Root Moments, ICONIP 2000 Proceedings, Vol. II, Taejon, Korea, Nov. 14-17, 2000, p. 118.

[7]  Huang D.S., Ip H.S., Chi Z., Wong H.S., Dilation Method for Finding Close Roots of Polynomials Based on Constrained Learning Neural Network, Physics Letters A, Vol. 309, 2003, pp. 443-451.

[8]  Marden M., Geometry of Polynomials, American Mathematical Society, Providence, RI, 1966.

[9]  Mitchell M., An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, 1996.

[10] Stoer J., Bulirsch R., Introduction to Numerical Analysis, 3rd ed., Springer, 2002.

Figure 1 - Fitness function after 300000 generations when the genetic method is used alone for the polynomials of Table 1.

Figure 2 - Fitness function after 1000 generations when the genetic method is combined with Newton's method for the polynomials of Table 1.

Figure 3 - Comparison of the fitness functions after 1000 and 300000 generations when the genetic method is used alone for the polynomials of Table 1.

Table 1 - Polynomials used as benchmarks

(x - 1) (x – 2) (x - 3) = x3-6*x2+11*x-6
(x - 3) (x - 9) (x - 10) = x3-22*x2+147*x-270
(x - 6) (x - 2) (x - 11) = x3-38*x2+423*x-1386
(x - 1) (x - 2) (x – 6) = x3-9*x2+20*x-12
(x - 1) (x - 5) (x - 8) = x3-14*x2+53*x-40
(x + 1) (x - 1) (x - 5) = x3-5*x2-x+5
(x + 1) (x + 2) (x – 9) = x3-6*x2-25*x-18
(x + 2) (x - 5) (x – 13) = x3-16*x2+29*x-130
(x - 1) (x + 6) (x – 8) = x3-3*x2-46*x+48
(x + 4) (x + 7) (x + 11) = x3+22*x2+149*x+308
(x + 1) (x -.5) (x -.25) = x3+.25*x2-.625*x+.125
(x + 4) (x + 7) (x + 11) (x - 2) = x4+24*x3+193*x2+606*x+616
(x + 4) (x + 2) (x - 5) (x - 1) = x4-23*x3-18*x+40
(x + 1) (x + 12) (x - 10) (x - 45) = x4-42*x3-253*x2+5190*x+540
(x + 1) (x + 9) (x - 1) (x – 4) = x4+5*x3-37*x2-5*x+36
(x + 1) (x + 6) (x - 4) (x - 7) = x4-4*x3-43*x2+130*x+168
(x + 1) (x + 2) (x + 7) (x – 9) = x4+x3-67*x2-193*x-126
(x + 2) (x + 11) (x + 13) (x - 7) = x4+19*x3+9*x2-1051*x-2002
(x + 1) (x + 3) (x + 8) (x – 5) = x4+7*x3-25*x2-151*x-120
(x + 5) (x + 12) (x - 3) (x - 9) = x4+5*x3-117*x2-261*x+1620
(x + 1) (x - 2) (x – 3) (x - 5) = x4-9*x3+21*x2+x-30
(x - .1) (x - .2) (x - .3) (x - .9) = x4-x3+.350*x2-.050*x+.0024
(x – 1) (x – 2) (x – 3) (x – 4) (x – 5) = x5-15*x4+85*x3-225*x2+274*x-120
(x + 4) (x + 5) (x – 1) (x – 2) (x – 8) = x5-2*x4-53*x3-2*x2+376*x-320
(x + 3) (x + 6) (x + 7) (x – 2) (x – 5) = x5+9*x4-21*x3-281*x2-72*x+1260
(x + 1) (x + 2) (x + 7) (x + 9) (x – 1) = x5+18*x4+94*x3+108*x2-95*x-126
(x + 2) (x + 2) (x – 1) (x – 4) (x - 11) = x5-12*x4-x3+128*x2+60*x-176
(x + 1) (x + 5) (x + 9) (x – 1) (x – 4) = x5+10*x4-12*x3-190*x2+11*x+180
(x + 4) (x + 5) (x + 9) (x – 6) (x – 9) = x5+3*x4-115*x3-363*x2+2754x+9720
(x + 1) (x + 5) (x + 8) (x – 2) (x – 6) = x5+6*x4-47*x3-216*x2+316*x+480
(x + 1) (x + 5) (x + 6) (x – 1) (x – 2) = x5+9*x4+7*x3-69*x2-8*x+60
(x + 1) (x + 2) (x + 8) (x – 1) (x – 2) = x5+8*x4-5*x3-40*x2+4*x+32
(x - .1) (x - .2) (x - .3) (x - .31) (x -.9) = x5-1.31*x4+.66*x3-.1585*x2+.01790*x-.000744
(x – 1) (x – 2) (x + 3) (x – 8) (x + 10) (x + 22) = x6+24*x5-43*x4+1992*x3+396*x2+12104*x-10560
(x – 1) (x - 2) (x - 3) (x – 4) (x – 5) (x – 6) = x6-21*x5+175*x4-735*x3+1624*x2-1769*x+720
(x + 1) (x + 2) (x - 13) (x + 4) (x – 5) (x – 6) = x6-17*x5+19*x4+493*x3-500*x2-4076*x-3120
(x + 5) (x + 9) (x - 29) (x - 31) (x – 16) (x – 1) = x6-68*x5+1262*x4-368*x3-93103*x2-19590*x-287680
(x + 2) (x + 9) (x – 1) (x – 30) (x + 17) (x + 8) = x6+5*x5-657*x4-10273*x3-45008*x2-17508*x+73440
(x + 3) (x + 2) (x + 5) (x – 2) (x – 3) (x – 5) = x6-38*x4+361*x2-900
(x - 1) (x – 2) (x + 1) (x + 7) (x – 9) (x + 9) = x6+10*x5-26*x4-260*x3+529*x2+250*x-504
(x + 1) (x + 2) (x + 3) (x + 4) (x + 5) (x + 6) = x6+21*x5+175*x4+735*x3+1629*x2+1769*x+720
(x + 1) (x + 2) (x - 3) (x – 4) (x + 7) (x – 60) = x6-47*x5-855*x4+4403*x3+6218*x2-23472*x-24480
(x + 4) (x + 2) (x - 2) (x – 1) (x – 2) (x – 3) = x6-75*x5+823*x4+2127*x3-14288*x2-7308*x+43920
(x - .1) (x - .2) (x - .3) (x - .31) (x -.9) (x – 1) = x6-75*x5+823*x4+2127*x3-14288*x2-7308*x+43920
(x – 1) (x – 2) (x – 3) (x – 4) (x – 5) (x – 6) (x – 7) = x7-28*x6+322*x5-1960*x4+6769*x3-13132*x2+13068*x-5040
(x + 1) (x + 2) (x – 13) (x + 4) (x – 5) (x – 6) (x – 7) = x7-24*x6+138*x5+360*x4-3951*x3-576*x2+25412*x+21840
(x + 3) (x - 2) (x - 3) (x + 2) (x – 5) (x – 6) (x + 5) = x7-6* x6-38*x5+228*x4+361*x3-2166*x2-900*x+5400
(x - 1) (x - 2) (x + 1) (x - 4) (x + 7) (x + 9) (x + 3) = x7+13*x6+9*x5-338*x4-251*x3+1837*x2+246*x-1512
(x + 2) (x + 1) (x + 3) (x + 4) (x + 5) (x + 6) (x+7) = x7+28*x6+322*x5+1960*x4+6769*x3+13132*x2+13068*x+5040
(x + 1) (x +2) (x -3) (x – 4) (x +17) (x –60) (x+5) = x7-42*x6-1090*x5+128*x4+28233*x3+7618*x2-141890*x-122900
(x + 1) (x +2) (x -3) (x - 4) (x +17) (x -6) (x+5) = x7+12*x6-118*x5-412*x4+2745*x3+2920*x2-12348*x-12240
(x - 7) (x + 2) (x – 3) (x -2) (x + 4) (x -5) (x– 46) = x7-57*x6+513*x5-99*x4-10722*x3+20628*x2+34616*x-77280
(x - 2) (x + 2) (x - 3) (x + 4) (x - 5) (x – 6) (x – 7) = x7-17*x6+73*x5+181*x4-1802*x3+2068*x2+5976*x-10080
(x + 1) (x + 2) (x +3) (x +9) (x +0) (x +1) (x +9) = x7+40*x6+634*x5+508-*x4+21889*x3+50320*x2+56676*x+23760
(x -1)(x-2)(x-3)(x-4)(x-5)(x-6)(x-7)(x-8) = x8-36*x7+546*x6-4536*x5+22449*x4-67284*x3+118124*x2-109584*x+40320
(x+1)(x+2)(x-13)(x+4)(x-5)(x-6)(x-7)(x-1) = x8-25*x7+162*x6+222*x5-4311*x4+3375*x3+25988*x2-3572*x-21840
(x+3)(x-2)(x-3)(x+2)(x-5)(x+5)(x-6)(x+1) = x8-5*x7-44*x6+190*x5+589*x4-1805*x3-3066*x2+4500*x+5400
(x-1)(x-2)(x+1)(x-9)(x+7)(x+9)(x+3)(x+5) = x8+18*x7+69*x6-318*x5-1941*x4+582*x3+9431*x2-282*x-7560
(x+1)(x+2)(x+3)(x+4)(x+5)(x+6)(x+7)(x+8) = x8+36*x7+546*x6+4536*x5+22449*x4+67284*x3+118124*x2+109584*x+40320
(x+1)(x+2)(x-3)(x-4)(x+17)(x-6)(x+5)(x+8) = x8+20*x7-22*x6-1356*x5-551*x4+24880*x3+11012*x2-111024*x-97920