Immunity Based Genetic Algorithm for Solving Quadratic Assignment Problem (QAP)

Chou-Yuan Lee1, Zne-Jung Lee2 and Shun-Feng Su3

1 Dept. of Electrical Engineering,

National Taiwan University of Science and Technology

2Dept. of Information Management

Kang-Ning Junior College of Nursing

3Dept. of Electrical Engineering,

National Taiwan University of Science and Technology

Abstract:

In this paper, an immunity-based genetic algorithm is proposed to solve the quadratic assignment problem (QAP). The QAP, a well-known NP-hard problem, is a combinatorial problem arising in the optimal assignment of facilities to locations. The proposed algorithm enhances the performance of genetic algorithms by embedding an immune system so as to produce locally optimal offspring, and it is successfully applied to the QAP. In our simulations on the tested problems, the proposed algorithm shows the best performance when compared to other existing search algorithms.

Keywords: Genetic Algorithm, Immune Systems, Quadratic Assignment Problem, Optimization, Local Search.

1. Introduction

The QAP is a combinatorial optimization problem found in many fields such as VLSI module placement, machine scheduling, optimal assignment of resources to tasks, and other areas of application. Various methods such as separable convex objective functions and graph theory have been employed to solve this class of problems, but the computational complexity of these methods grows exponentially as the problem size increases. As a result, these methods are only practical for small-sized problems [1].

Genetic algorithms (GAs), or more generally evolutionary algorithms [2], have been touted as a class of general-purpose search strategies for optimization problems. GAs use a population of solutions from which, using recombination and selection strategies, better and better solutions can be produced. GAs can handle any kind of objective function and any kind of constraints with few mathematical requirements on the optimization problem, and have been widely used as search algorithms in various applications. Various GAs have been proposed in the literature [2,13] and have shown superior performance over other methods. As a consequence, GAs appear to be a promising approach for solving the QAP. However, GAs may suffer from degraded search performance if their operators are not carefully designed [4,15].

Recently, GAs with local search have been considered as good alternatives for solving optimization problems [3,5~7]. Local search explores the neighborhood of a solution in an attempt to improve its cost in a local manner and find a better solution. The natural immune system is a very complex system with several mechanisms to defend against pathogenic organisms, and it is also a source of inspiration for solving optimization problems [4,8,9]. From an information processing perspective, artificial immune systems are remarkable adaptive systems and can provide several important capabilities in the field of computation [8]. When incorporated with evolutionary algorithms, artificial immune systems can improve the search ability during the evolutionary process. In our research, a specifically designed artificial immune system is implemented for the QAP to improve the local search efficiency of GAs. The general idea of the proposed algorithm is to combine the advantages of GAs, namely the ability to explore the search space, and of artificial immune systems, namely the ability to quickly find good solutions within a small region of the search space. The proposed algorithm demonstrates excellent performance in the simulations reported later.

This paper is organized as follows. In section 2, the QAP is introduced. In section 3, the proposed algorithm, an immunity-based genetic algorithm for solving the QAP, is described. In section 4, the algorithm is employed to solve QAP examples, and the results are reported; the performance shows the superiority of our algorithm. Finally, section 5 concludes this paper.

2. The Problem Formulation

Generally, the quadratic assignment problem (QAP) is the problem of how to economically assign N facilities to N locations subject to constraints. This problem is one of the classical NP-hard problems. Let two N×N matrices D = (d_ik) and F = (f_jl) be given, where N is the total number of facilities or locations, d_ik is the distance between location i and location k, and f_jl is the cost of material flow from facility j to facility l. Usually, the matrix D is called the distance matrix and F the flow matrix. Then, the QAP can be formulated as follows:

Min Z = \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k=1}^{N} \sum_{l=1}^{N} d_{ik} f_{jl} X_{ji} X_{lk}    (1)

with the constraints:

\sum_{j=1}^{N} X_{ij} = 1,   i = 1,2,…,N

\sum_{i=1}^{N} X_{ij} = 1,   j = 1,2,…,N    (2)

X_{ij} \in \{0, 1\},   i, j = 1,2,…,N

where X_{ij} = 1 indicates that facility i is assigned to location j; otherwise X_{ij} = 0.
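To make the formulation concrete, the following minimal Python sketch (ours, not from the paper) evaluates the objective Z of Eq. (1) for a permutation encoding, where perm[j] is the location assigned to facility j; with this encoding the assignment constraints (2) are satisfied automatically. The function name and the data layout (D and F as nested lists) are assumptions for illustration.

def qap_cost(perm, D, F):
    # Z = sum over facility pairs (j, l) of f_jl * d_ik, where facility j sits
    # at location i = perm[j] and facility l sits at location k = perm[l].
    n = len(perm)
    return sum(F[j][l] * D[perm[j]][perm[l]]
               for j in range(n) for l in range(n))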

The QAP is a combinatorial optimization problem found in the optimal assignment of facilities to locations. Various methods such as separable convex objective functions and graph theory have been employed to solve this class of problems [1], but their computational complexity grows exponentially as the problem size increases. Recently, genetic algorithms have been employed to solve the QAP. In the literature [3,5], local search mechanisms have also been found to be good fine-tuning methods for improving performance on the QAP. Based on these two optimization ideas, an immunity-based genetic algorithm is applied to the QAP in this study, and the results shown later indeed demonstrate superior performance.

3. Genetic Algorithms and the Proposed Algorithm

Genetic algorithms (GAs), or more generally evolutionary algorithms [12~14], have been touted as a class of general-purpose search strategies for optimization problems. GAs use a population of solutions from which, using recombination and selection strategies, better and better solutions can be produced [5]. GAs can handle any kind of objective function and any kind of constraints with few mathematical requirements on the optimization problem. When applied to optimization problems, genetic algorithms offer the advantages of performing a global search and of hybridizing with domain-dependent heuristics for a specific problem [10,11]. Genetic algorithms start with a set of randomly selected chromosomes as the initial population that encodes a set of possible solutions. In GAs, variables of a problem are represented as genes in a chromosome, and the chromosomes are evaluated according to their cost values using some measure of profit or utility that we want to optimize. Recombination typically involves two genetic operators: crossover and mutation. The genetic operators alter the composition of genes to create new chromosomes called offspring. The selection operator is an artificial version of natural selection, a Darwinian survival of the fittest among populations, used to create the population from generation to generation; chromosomes with better cost values have higher probabilities of being selected into the next generation. After several generations, the GA can converge to the best solution. Let P(t) and C(t) be the parents and offspring in generation t. A usual form of a general GA is shown in the following [13]:

Procedure: General GA

Begin

t ← 0;

Initialize P(t);

Evaluate P(t);

While (the termination conditions are not met) do

Recombine P(t) to yield C(t);

Evaluate C(t);

Select P(t+1) from P(t) and C(t);

t ← t+1;

End;

End;

Recently, genetic algorithms with local search have also been considered as good alternatives for solving optimization problems. The flow chart of the GA with local search is shown in Fig. 1, and its general structure is shown in the following.

Procedure: GA with local search

Begin

t ← 0;

Initialize P(t);

Evaluate P(t);

While (the termination conditions are not met) do

Apply crossover on P(t) to generate C1(t);

Apply local search on C1(t) to yield C2(t);

Apply mutation on C2(t) to yield C3(t);

Apply local search on C3(t) to yield C4(t);

Evaluate C(t) = {C1(t), C2(t), C3(t), C4(t)};

Select P(t+1) from P(t) and C(t);

t ← t+1;

End;

End;
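As a rough illustration only, the loop above can be sketched in Python as follows; the operator functions (crossover, mutation, local search, selection) are placeholders to be supplied, and the interface is our assumption rather than the paper's implementation.

def ga_with_local_search(init_population, evaluate, crossover, mutate,
                         local_search, select, max_generations):
    # Skeleton of the GA-with-local-search loop; all operators are injected.
    population = init_population()                 # P(0)
    for t in range(max_generations):
        c1 = crossover(population)                 # offspring from crossover
        c2 = [local_search(c) for c in c1]         # local search after crossover
        c3 = [mutate(c) for c in c2]               # offspring from mutation
        c4 = [local_search(c) for c in c3]         # local search after mutation
        offspring = c1 + c2 + c3 + c4              # C(t) keeps the pre-local-search chromosomes too
        population = select(population, offspring, evaluate)   # P(t+1)
    return min(population, key=evaluate)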

It is noted that the GA with local search becomes a general GA if the local search is omitted. The representation uses N numerical genes, each holding an integer from 1 to N, to represent facility-location pairs. For the crossover operator, the partially mapped crossover (PMX) is traditionally implemented. The idea of the PMX operator is to generate mapping relations so that the offspring can be repaired accordingly and become feasible. PMX has been shown to be effective in many applications, and its algorithm is stated as:

Step 1: Select substrings from parents at random.

Step 2: Exchange substrings between two parents to produce proto-offspring.

Step 3: Determine the mapping relationship from the exchanged substrings.

Step 4: Repair proto-offspring with the mapping relationship.

To see the procedure, an example is illustrated. Consider two chromosomes:

A= 1 2 3 4 5 6 7 8 9

B= 4 5 6 9 1 2 7 3 8

First, two positions, e.g., the 3rd and the 6th positions, are selected at random to define a substring in each chromosome, and the defined substrings in the two chromosomes are then exchanged to generate the proto-offspring:

A= 1 2 6 9 1 2 7 8 9

B= 4 5 3 4 5 6 7 3 8

Then the mapping relationship between the two substrings can be established as:

6 ↔ 3, 9 ↔ 4, 1 ↔ 5, 2 ↔ 6 (A’s genes ↔ B’s genes)

Fig. 1. The flowchart of the GA with local search.

It is noted that the mapping relations 6 ↔ 3 and 2 ↔ 6 have been merged into 2 ↔ 6 ↔ 3. Finally, the proto-offspring are repaired according to the above mapping list. The resultant feasible offspring are:

A’= 5 3 6 9 1 2 7 8 4

B’= 9 1 3 4 5 6 7 2 8
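For reference, a minimal Python sketch of the PMX steps above is given below; when the 3rd to 6th positions are chosen for the two chromosomes of the example, it reproduces A' and B'. The function name and the repair helper are ours.

import random

def pmx(parent_a, parent_b, rng=random):
    n = len(parent_a)
    # Step 1: select two cut positions at random (inclusive substring lo..hi).
    lo, hi = sorted(rng.sample(range(n), 2))
    child_a, child_b = list(parent_a), list(parent_b)
    # Step 2: exchange the substrings between the two parents.
    child_a[lo:hi + 1], child_b[lo:hi + 1] = parent_b[lo:hi + 1], parent_a[lo:hi + 1]
    # Step 3: mapping relations between the exchanged substrings.
    map_for_a = dict(zip(parent_b[lo:hi + 1], parent_a[lo:hi + 1]))
    map_for_b = dict(zip(parent_a[lo:hi + 1], parent_b[lo:hi + 1]))
    # Step 4: repair genes outside the substring by chasing the mapping,
    # which merges chained relations such as 2<->6 and 6<->3 into 2<->6<->3.
    def repair(child, mapping):
        for i in list(range(lo)) + list(range(hi + 1, n)):
            gene = child[i]
            while gene in mapping:
                gene = mapping[gene]
            child[i] = gene
        return child
    return repair(child_a, map_for_a), repair(child_b, map_for_b)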

Another recombination operator proposed for the QAP preserves the information contained in both parents in the sense that all alleles of the offspring are taken either from the first or from the second parent. This recombination operator is called information-contained crossover (ICX) [6]. The ICX works as follows:

Step 1: All facilities found at the same locations in the two parents are assigned to the corresponding locations in the offspring.

Step 2: Starting with a randomly chosen location that has no facility assigned yet, a facility is randomly chosen from the two parents. After that, additional assignments are made to ensure that no implicit mutation occurs. Then, the next unassigned location to the right is processed in the same way until all locations have been considered.
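The description of Step 2 leaves some implementation details open. One cycle-crossover-style reading, which is our interpretation and not necessarily the original ICX of [6], guarantees that every allele of the offspring is taken at the same location from one of the two parents without duplication, and is sketched below.

import random

def icx_style_crossover(parent_a, parent_b, rng=random):
    n = len(parent_a)
    child = [None] * n
    # Step 1: facilities at the same location in both parents are copied.
    for i in range(n):
        if parent_a[i] == parent_b[i]:
            child[i] = parent_a[i]
    pos_a = {f: i for i, f in enumerate(parent_a)}
    pos_b = {f: i for i, f in enumerate(parent_b)}
    # Step 2: fill the remaining locations cycle by cycle; each cycle is taken
    # entirely from one randomly chosen parent, so no facility is duplicated
    # (i.e. no implicit mutation occurs).
    for start in range(n):
        if child[start] is not None:
            continue
        if rng.random() < 0.5:
            src, pos_other = parent_a, pos_b
        else:
            src, pos_other = parent_b, pos_a
        i = start
        while child[i] is None:
            child[i] = src[i]
            i = pos_other[src[i]]
    return child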

A good crossover operator is the keep good-gene crossover (KGGX). The KGGX operator is designed to keep good genes so that the offspring obtain a better representation. Its algorithm is stated as:

Step 1: Select a gene a from parent A at random.

Step 2: Select a gene b from parent B at random.

Step 3: In parent A, the gene with value b is replaced by a, and the original gene a is replaced by b.

Step 4: In parent B, the gene with value a is replaced by b, and the original gene b is replaced by a.

We use an example to illustrate the procedure.

Consider two chromosomes:

A= 1 2 3 4 5 6 7 8 9

B= 1 3 8 9 2 4 5 6 7

First, gene a = 2 at the 2nd position of A and gene b = 9 at the 4th position of B are selected at random.

The gene 9 at the 9th position of A is replaced by 2, and the original 2nd position is replaced by 9. Similarly, the gene 2 at the 5th position of B is replaced by 9, and the original 4th position is replaced by 2.

The resultant offspring are:

A= 1 9 3 4 5 6 7 8 2

B= 1 3 8 2 9 4 5 6 7

Then the KGGX operator terminates since the selected genes have been exchanged. In our simulations, KGGX provided a better representation than ICX and PMX.
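A minimal Python sketch of KGGX, consistent with the worked example (it reproduces A' and B' when a = 2 and b = 9 are selected), is shown below; the function name and interface are ours.

import random

def kggx(parent_a, parent_b, rng=random):
    child_a, child_b = list(parent_a), list(parent_b)
    a = rng.choice(parent_a)              # Step 1: random gene from parent A
    b = rng.choice(parent_b)              # Step 2: random gene from parent B
    for child in (child_a, child_b):      # Steps 3-4: swap the values a and b in each parent
        i, j = child.index(a), child.index(b)
        child[i], child[j] = child[j], child[i]
    return child_a, child_b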

The mutation operator can be implemented as inversion mutation. The inversion mutation operation is stated as:

Step 1: Select two positions within a chromosome at random.

Step 2: Invert the substring between these two positions.

Consider a chromosome:

A=1 2 3 4 5 6 7 8 9.

In a mutation process, the 3rd and the 6th positions are randomly selected. If the mutation operation is performed, then the offspring becomes:

A’=1 2 6 5 4 3 7 8 9.
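A corresponding Python sketch of the inversion mutation (reproducing A' above when the 3rd and 6th positions are drawn) is given below; the function name is ours.

import random

def inversion_mutation(chromosome, rng=random):
    n = len(chromosome)
    lo, hi = sorted(rng.sample(range(n), 2))            # Step 1: two random positions
    mutated = list(chromosome)
    mutated[lo:hi + 1] = reversed(mutated[lo:hi + 1])   # Step 2: invert the substring
    return mutated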

The selection strategy used is referred to as (μ+λ)-ES (evolution strategy) survival [6], where μ is the population size and λ is the number of offspring created. The process simply deletes redundant chromosomes and then retains the best μ chromosomes in P(t+1). From the above algorithm, it can be seen that local search is performed on the chromosomes obtained by recombination and by mutation, respectively. The offspring set C(t) also includes the original chromosomes before local search in order to retain the genetic information obtained in the evolutionary process. Local search can explore the neighborhood in an attempt to improve the cost of the solution in a local manner.
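The survival step can be sketched as follows; this is a plain reading of the description (pool parents and offspring, drop duplicates, keep the best μ), with the interface assumed by us.

def mu_plus_lambda_select(parents, offspring, evaluate, mu):
    pool = parents + offspring
    # Delete redundant (duplicate) chromosomes; assumes list-encoded chromosomes.
    unique = list({tuple(c): c for c in pool}.values())
    unique.sort(key=evaluate)      # smaller QAP cost is better
    return unique[:mu]             # retain the best mu chromosomes as P(t+1)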

General local search starts from the current solution and repeatedly tries to improve it by local changes. If a better solution is found, it replaces the current solution and the algorithm searches from the new solution again. These steps are repeated until a stopping criterion is satisfied. In this way, general local search explores the neighborhood of a solution to find a better one. The procedure of the general local search process is [13,14]:

Procedure: General Local Search

Begin

While (general local search has not been stopped) do

Generate a neighborhood solution π′;

If C(π′) < C(π) then π = π′;

End;

End;
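A concrete Python sketch of this procedure for the QAP is given below, using a pairwise exchange of two facilities' locations as the neighborhood move (our choice) and the qap_cost function sketched earlier; the iteration limit is an assumed stopping criterion.

import random

def local_search(perm, D, F, max_iters=1000):
    current, current_cost = list(perm), qap_cost(perm, D, F)
    for _ in range(max_iters):
        # Generate a neighborhood solution by swapping two facilities' locations.
        i, j = random.sample(range(len(current)), 2)
        neighbor = list(current)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        neighbor_cost = qap_cost(neighbor, D, F)
        if neighbor_cost < current_cost:          # accept only improving moves
            current, current_cost = neighbor, neighbor_cost
    return current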

Simulated annealing (SA) is also employed as a local search to take advantage of search strategies in which cost-deteriorating neighborhood solutions may be accepted while searching for the optimal solution [19]. In other words, better-cost neighbors are always accepted, and worse-cost neighbors may also be accepted according to a probability that is gradually decreased during the cooling process. The SA algorithm is implemented as follows [19].
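Before that formal statement, the acceptance rule can be illustrated with a rough Python sketch; the pairwise-exchange neighborhood move, the geometric cooling schedule, and the parameter values are assumptions for illustration, and qap_cost is the evaluation function sketched earlier.

import math
import random

def simulated_annealing(perm, D, F, T0=100.0, alpha=0.95, steps_per_T=50, T_min=1e-3):
    current, current_cost = list(perm), qap_cost(perm, D, F)
    best, best_cost = list(current), current_cost
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            i, j = random.sample(range(len(current)), 2)
            neighbor = list(current)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            delta = qap_cost(neighbor, D, F) - current_cost
            # Better neighbors are always accepted; worse ones with probability exp(-delta/T).
            if delta < 0 or random.random() < math.exp(-delta / T):
                current, current_cost = neighbor, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = list(current), current_cost
        T *= alpha      # gradually decrease the temperature (cooling)
    return best, best_cost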