Chapter 15 - Introduction to Problem Solving Methods

15.1 Greedy Method

15.2 Divide-and-Conquer

15.3 Backtracking

15.4 Dynamic Programming

15.5 Branch-and-Bound

In this chapter we review the major problem solving methods used in the design of programs. These methods are more general than any particular algorithm and are the basis of most applied numerical and semi-numerical algorithms.

15.1 Greedy Method

The greedy method is a problem solving approach that makes the best immediate (i.e. locally optimal) decision at each step toward the overall solution. When this approach results in the best possible answer to the problem, we say that the problem is amenable to the greedy method. An example of a problem for which the greedy method can give the best answer (i.e. optimal solution) is the coin changing problem.

Coin Changing

The coin changing problem is to determine the minimum number of coins required to equal a specified amount of change. There is a greedy optimal solution for the coin denominations currently minted by the U.S. government, namely 50, 25, 10, 5 and 1 cents.

The greedy coin changing algorithm states: always deduct the largest coin denomination that is no greater than the remaining change value, until the remaining value is zero. A step algorithm for this technique is,

Step 1: Set remaining_value = input change value.

Step 2: Find the largest coin denomination that is no greater than remaining_value.

Step 3: Deduct the coin value found in Step 2 from remaining_value and increment the coin count for the corresponding coin denomination. If remaining_value > 0 then return to Step 2, else continue.

Step 4: Display coin counts.

We can implement a more efficient algorithm for coin changing by using integer division and the mod function, as shown in the code segment below.

coin_set  : array(1..5) of integer := (50, 25, 10, 5, 1);
num_coins : array(1..5) of integer := (0, 0, 0, 0, 0);
remval    : integer;
...
get(remval);
for i in 1..5 loop
   num_coins(i) := remval / coin_set(i);  -- integer division gives the coin count
   remval := remval mod coin_set(i);      -- mod gives the value still to be changed
end loop;

So, for a change amount of 62 cents we would obtain a coin count of (1,0,1,0,2) for coin denominations of (50,25,10,5,1) for a total of 4 coins. This is the minimum number of coins that will total 62 cents.

It is interesting to note that this coin changing algorithm does not result in the minimum number of coins for all coin sets. For example, consider the same change amount of 62 cents with the coin denominations (50,26,10,5,1). Here, we have replaced the quarter with a 26-cent piece. The greedy algorithm gives us a coin count of (1,0,1,0,2) or 4 coins, but we can make the same change value with only three coins (0,2,1,0,0). As shown by this simple example, we have to be careful about our claims of optimality with respect to our computer algorithms.
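The behavior described above can be reproduced with a short sketch (Python here rather than the Ada-style code segment used earlier; the function name greedy_change is ours):

```python
def greedy_change(amount, coin_set):
    """Greedy coin changing: for each denomination (largest first),
    take as many coins as fit, then change the remainder."""
    counts = []
    for coin in coin_set:               # coin_set assumed sorted largest-first
        counts.append(amount // coin)   # integer division gives the coin count
        amount %= coin                  # mod gives the value still to be changed
    return counts

# U.S. denominations: 4 coins for 62 cents, which is optimal
print(greedy_change(62, (50, 25, 10, 5, 1)))   # [1, 0, 1, 0, 2]

# Replace the quarter with a 26-cent piece: greedy still answers 4 coins,
# even though (0, 2, 1, 0, 0) -- two 26s and a dime -- needs only 3
print(greedy_change(62, (50, 26, 10, 5, 1)))   # [1, 0, 1, 0, 2]
```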

When the greedy method works it is the preferred problem solving approach because of its efficiency: since it looks only at the immediately available data, it never spends time computing alternatives. To get a better appreciation for the advantage of the greedy method, consider an alternative algorithm for coin changing that would give the minimum number of coins for any coin set.
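One such alternative, which previews the dynamic programming method of Section 15.4, computes the fewest coins for every value up to the target; a Python sketch (the function name min_coins is ours, and a 1-cent coin is assumed so every amount is reachable):

```python
def min_coins(amount, coin_set):
    """Minimum number of coins totaling `amount`, for any coin set
    that includes a 1-cent coin."""
    best = [0] + [None] * amount        # best[v] = fewest coins totaling v
    for v in range(1, amount + 1):
        candidates = [best[v - c] for c in coin_set
                      if c <= v and best[v - c] is not None]
        best[v] = 1 + min(candidates) if candidates else None
    return best[amount]

print(min_coins(62, (50, 25, 10, 5, 1)))   # 4
print(min_coins(62, (50, 26, 10, 5, 1)))   # 3 (two 26s and a dime)
```

Note the cost of generality: this examines every value from 1 to the change amount, where the greedy loop touches each denomination exactly once.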

Minimal Spanning Tree

Another example problem amenable to a greedy solution is the minimal spanning tree problem. In this problem you are to find a minimum-weight tree embedded in a weighted graph that includes (spans) all of the vertices.

Figure 15-1: Sample Weighted Graph and Associated Adjacency Matrix

The answer to the minimal spanning tree problem is a subset of edges from the weighted graph. There are a number of algorithms that use the greedy method to find the subset of edges that span the nodes in the graph and whose total weight is a minimum. This minimal spanning subset of edges will always be a tree. Why?

To implement an algorithm for the minimal spanning tree problem we need a data representation for weighted graphs. For human analysis the graphical representation is sufficient, but for computer analysis an edge list or adjacency matrix is preferred.

Prim's Algorithm

Given a weighted graph G consisting of a set of vertices V and a set of edges E with weights, where w_k is the weight associated with the edge connecting vertices v_i and v_j, prepare a vertex set and an edge set to hold the elements selected by Prim's Algorithm.

Step 1: Choose an arbitrary starting vertex and place it in the vertex set.

Step 2: Find the smallest-weight edge e incident with a vertex in the vertex set whose inclusion in the edge set does not create a cycle.

Step 3: Include this edge in the edge set and its vertices in the vertex set.

Step 4: Repeat Steps 2 and 3 until all vertices are in the vertex set.

The edge set is the set of edges that have been chosen as members of the minimal spanning tree. The vertex set is the set of vertices that are incident with a selected edge in the edge set. In Prim's Algorithm the test for cycles is simple, since we only need to ensure that the two vertices incident with the new edge are not both already in the vertex set. As shown in Figure 15-2, we may be forced to choose from two or more equal-weight edges.

Figure 15-2: Sample Execution of Prim's Algorithm for Minimal Spanning Tree

In the example above, the partial solution shows that we may select either edge E-F or edge H-F to be added to the edge set. In either case the added weight will be 3. This is why we refer to the solution as the minimal spanning tree rather than the minimum spanning tree. Figure 15-2 also shows a complete solution whose total weight is 14. A bit of visual analysis of this graph (staring at it for a while) should convince you that the embedded tree is a minimal spanning tree.
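The four steps above can be sketched using an adjacency-matrix representation (a Python sketch; the small example graph is illustrative, not the graph of Figure 15-1):

```python
INF = float("inf")   # INF marks "no edge" in the adjacency matrix

def prim(adj):
    """Prim's Algorithm on an adjacency matrix adj, where adj[i][j]
    is the edge weight between vertices i and j, or INF if no edge."""
    n = len(adj)
    vertex_set = {0}                 # Step 1: arbitrary starting vertex
    edge_set = []
    while len(vertex_set) < n:       # Step 4: repeat until all vertices included
        # Step 2: smallest-weight edge with one endpoint in the vertex set;
        # requiring the other endpoint to be OUTSIDE the set is the cycle test
        w, u, v = min((adj[i][j], i, j)
                      for i in vertex_set
                      for j in range(n) if j not in vertex_set)
        edge_set.append((u, v, w))   # Step 3: add edge and its new vertex
        vertex_set.add(v)
    return edge_set

# A square 0-1-2-3 with one diagonal; the minimal spanning tree has weight 6
adj = [[INF, 1, INF, 4],
       [1, INF, 2, 6],
       [INF, 2, INF, 3],
       [4, 6, 3, INF]]
print(prim(adj))   # [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
```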

15.2 Divide-and-Conquer

The divide-and-conquer method involves dividing a problem into smaller problems that can be more easily solved. While the specifics vary from one application to another, building a divide-and-conquer solution involves the implementation of the following three steps in some form:

Divide - Typically this step involves splitting one problem into two problems, each approximately half the size of the original problem.

Conquer - The divide step is repeated (usually recursively) until individual problem sizes are small enough to be solved (conquered) directly.

Recombine - The solution to the original problem is obtained by combining all the solutions to the sub-problems.
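Merge sort, a standard example, shows all three steps in one short function (a Python sketch):

```python
def merge_sort(a):
    """Sort a list by divide-and-conquer."""
    if len(a) <= 1:                 # small enough to be solved (conquered) directly
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # Divide: split into two halves...
    right = merge_sort(a[mid:])     # ...each solved recursively (Conquer)
    merged = []                     # Recombine: merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))   # [1, 2, 3, 5, 8, 9]
```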

Divide-and-conquer works best when the size of the sub-problems is reduced by an amount proportional to the number of sub-problems being generated. When the reduction in problem size is small compared to the number of sub-problems, divide-and-conquer should be avoided. Therefore, we should avoid divide-and-conquer in the following cases:

An instance of size n is divided into two or more instances, each of size nearly n.

or

An instance of size n is divided into nearly n instances of size n/c (c is some constant).

To see why this is so, let's look at a graphical representation of the problem spaces for two cases of divide-and-conquer.

Case 1: Starting with a problem of size n, divide-and-conquer divides each instance of size k into two instances of size k/2 until k=1.

Figure 15-3: Problem Space for an Efficient Implementation of Divide-and-Conquer

The first instance (size n) gets divided into two instances of size n/2 each. Then each of these gets divided into two instances of size n/4 (a total of 4 of these). Then D&C generates 8 instances of size n/8. At the kth stage D&C will have generated 2^k instances of size n/2^k. The binary tree below represents the problem space for this D&C example.

As we have seen before, the depth of this tree is log2 n. The complexity of an algorithm exhibiting this behavior is proportional to the number of nodes in the problem-space tree. How many nodes are there in a complete binary tree of depth k? A binary tree of depth k has 2^k - 1 nodes. In this case we know that k = log2 n, so the number of nodes in the problem-space tree is 2^(log2 n) - 1. But 2^(log2 n) = n, so the order of run time of this D&C algorithm is O(n).

Case 2: D&C divides each instance of size n into two instances of size n-1 until n=1.

Now the first instance (size n) gets divided into two instances of size n-1. The number of instances doubles at each stage just as in Case 1, but now the depth k of the problem-space tree is determined by n - k = 1, or k = n - 1. The total number of nodes in this tree is 2^(n-1) - 1. Therefore the order of run time of this D&C example is O(2^n).

Figure 15-4: Problem Space for an Inefficient Implementation of Divide-and-Conquer

In Case 1 D&C is shown to be of linear order and therefore is an effective problem solving method. Applying D&C in Case 2 results in exponential order so it would be better to choose another method for this problem class.

Recursive Binary Search

The searching problems referenced in Chapter 13 were for both unordered and ordered lists. We determined that, at most, n comparisons are needed to search for a particular value in an unordered list of size n. For an ordered list, we presented an algorithm, called binary search, that did not have to examine every value in the list to determine if a particular value was present. We showed that the order of run time for binary search was O(log2 n).

Now we will apply the divide-and-conquer problem solving method to the problem of searching an ordered list. The function location( ) shown below is recursive. If the index lo is greater than hi, the function returns 0 (i.e. this particular copy of location( ) did not find the value x). If the array value S(mid) = x, then the index mid is returned. If x is less than the value S(mid) the function calls itself with the index hi = mid-1. Finally if the value of x is greater than S(mid) then location( ) is called with the index lo = mid + 1.
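The function itself is not reproduced here; a Python sketch consistent with the description (array S, search value x, indices lo, hi and mid, with 1-based indexing so that 0 can mean "not found") might look like:

```python
def location(S, x, lo, hi):
    """Recursive binary search of the sorted array S (1-based indexing).
    Returns the index of x in S[lo..hi], or 0 if x is not present."""
    if lo > hi:
        return 0                             # this copy did not find x
    mid = (lo + hi) // 2
    if S[mid] == x:
        return mid                           # found: return the index
    elif x < S[mid]:
        return location(S, x, lo, mid - 1)   # search the lower half
    else:
        return location(S, x, mid + 1, hi)   # search the upper half

S = [None, 3, 7, 11, 15, 19, 23]   # S(1)..S(6); index 0 unused for 1-based indexing
print(location(S, 15, 1, 6))       # 4
print(location(S, 8, 1, 6))        # 0 (not found)
```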

Analysis of Recursive Binary Search

We will now introduce recurrence relations as a method of analyzing recursive functions. The recurrence relation T(n) below defines the amount of work done by the function location( ) in terms of n (the input problem size). If location( ) is called with n=1 (the termination condition), then the amount of work performed (returning a 0) is constant, which we label C. Alternatively, if n>1 then the function location( ) will perform some fixed number of operations plus it will call itself with a problem size of n/2. We will label the fixed amount of work done as D (another constant) and we will label the amount of work done by the additional call of location as T(n/2). So when n>1 we express the amount of work as T(n) = T(n/2) + D. In order to determine the complexity we have to find a representation for T(n) as an explicit function of n. This is accomplished as shown below.
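Repeatedly replacing the recursive term (assuming, for simplicity, that n is a power of 2):

T(n) = T(n/2) + D
     = [T(n/4) + D] + D = T(n/4) + 2D
     = [T(n/8) + D] + 2D = T(n/8) + 3D
     ...
     = T(n/2^k) + kD          (the kth replacement)

The replacement stops when n/2^k = 1, that is, when k = log2 n. Substituting,

T(n) = T(1) + D log2 n = C + D log2 n

so T(n) is O(log2 n), in agreement with the binary search analysis of Chapter 13.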

There are a number of formal methods for solving recurrence relations which we will not emphasize in this course. It is important that you become familiar with the replacement method shown above and, in particular, understand how to find the kth term of the recurrence. Notice that since we define T(1) = C, we will be trying to replace T(f(n)) with T(1), which means we are looking for the conditions under which f(n) = 1. We then apply those conditions to the other terms in the recurrence relation.

15.3 Backtracking

Backtracking is a form of exhaustive search that is used to find any feasible solution. For example, we can use backtracking to solve a maze or to determine the arrangement of the squares in a 15-puzzle. In order to implement backtracking we need a way to represent all possible states of the problem, or a way to generate all permutations of the potential solutions. The possible (candidate) solutions or states of the problem are referred to as the problem's state space.

Backtracking is a modified depth-first search of the problem state-space. In the case of the maze, the start location is analogous to the root of a search tree embedded in the state-space. This search tree has a node at each point in the maze where there is a choice of direction. The dead-ends and the finish position are leaf nodes of the search tree. The children of each node in the state-space tree represent the possible choices from that node, given the sequence of choices that lead to the node. In the maze-solving problem, the branches of the search tree are the allowed alternative paths in the state-space tree.

As shown in the example maze above, the candidate solutions are the branches of a search tree. The correct (feasible) solution is shown in green. This search tree represents the state space of the maze since it includes all reachable locations in the maze.

In backtracking we traverse the search tree of the state-space in a depth first order. Consider the state-space tree shown below. Moving down the left-hand branches first we can generate the sequence of nodes encountered in a depth-first order traversal of the tree. The nodes are numbered in the order of a depth-first traversal of the tree below.
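The numbering just described can be generated by a short recursive traversal (a Python sketch; the small tree below is illustrative, with labels assigned in depth-first order):

```python
def preorder(node, visit):
    """Depth-first (preorder) traversal: visit the node, then each
    of its children, left-hand branches first."""
    visit(node["label"])
    for child in node.get("children", []):
        preorder(child, visit)

# A small state-space tree; the labels 1..7 match a depth-first ordering
tree = {"label": 1, "children": [
    {"label": 2, "children": [{"label": 3}, {"label": 4}]},
    {"label": 5, "children": [{"label": 6}, {"label": 7}]}]}

order = []
preorder(tree, order.append)
print(order)   # [1, 2, 3, 4, 5, 6, 7]
```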

N-Queens

The N-Queens Problem is a classic problem that is solvable using backtracking. In this problem you are to place queens (chess pieces) on an NxN chessboard in such a way that no two queens are directly attacking one another. That is, no two queens share the same row, column or diagonal on the board. In the examples below, we give two versions of backtracking as applied to this problem.

Version 1 - We solve the 4-Queens Problem below as an example of brute-force backtracking. Place 4 queens on a 4x4 board, in such a way that no two are in the same row, column or diagonal. First we label the 16 squares of the board with letters A through P. Until all queens are placed, choose the first available location and put the next queen in this position.

If queens remain to be placed and no space is left, backtrack by removing the last queen placed and placing it in the next available position. Once all choices have been exhausted for the most recent decision point, we backtrack to the next earlier decision point and choose the next available alternative.

Starting with position A we find that the next position that is not under attack by the first queen is G. In fact there are only 6 positions (G H J L N O) not under attack from position A. We place the second queen in position G and see that there is only 1 remaining square not under attack by one or both of these queens (position N). Once we choose N for the third queen we see that the fourth queen cannot be placed. So we backtrack to the most recent decision point and choose the next position for queen 2 (i.e. position H).

Continuing to traverse the state-space tree in a depth-first mode, we find that queen 2 cannot be placed at position H, J or L either. In fact, we eventually exhaust all the alternatives for queen 2 and backtrack to the first decision. We have shown that the first queen cannot be in position A (i.e. a corner of the board).

So we choose the next available location for queen 1. Notice that all the constraints for placing a particular queen are based on the number and positions of the queens already placed. There is no restriction on where the first queen may be placed, so the next available position is B. From B the six remaining positions are H I K M O P. We choose the first available position H for queen 2. This leaves only positions I, M and O not under attack, so we choose I for queen 3. Since O is not under attack by any one of the first three queens, we place queen 4 there, and the 4-Queens Problem has been solved.

Notice that we did not have to traverse a large portion of the state-space tree before a solution was found. This problem becomes much more difficult as the size of the board and the number of queens increases.

Version 2 - Some analysis of this problem shows that, since N queens must be placed on an NxN board, every row and column will have exactly one queen. That is, no two queens can share a row or column, otherwise they would be attacking each other. Using this simple observation we can redefine the state space of our algorithm.

We will assign each queen to its own column, so that Qj is the queen of the jth column. Now we only have to find a row number i for each queen Qj. The state-space for this version of the 4-Queens Problem is much simpler.

We have 4 choices for the queen of the first column, three choices for the queen of the second column, and so on. There are 4! = 24 complete candidate placements (the leaves of the state-space tree) for this version of the 4-Queens Problem.
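Version 2 translates directly into a backtracking function: queen j lives in column j, and we backtrack over its row choice (a Python sketch; the function names are ours):

```python
def solve_n_queens(n):
    """Backtracking over rows: rows[j] holds the row of the queen in column j.
    Returns one solution, or None if no placement exists."""
    rows = []

    def safe(row):
        col = len(rows)   # column of the queen being placed
        for c, r in enumerate(rows):
            if r == row or abs(r - row) == abs(c - col):   # same row or diagonal
                return False
        return True       # columns never clash: one queen per column

    def place(col):
        if col == n:
            return True                 # all queens placed
        for row in range(n):
            if safe(row):
                rows.append(row)
                if place(col + 1):
                    return True
                rows.pop()              # backtrack: undo the last placement
        return False                    # all choices exhausted at this decision

    return rows if place(0) else None

print(solve_n_queens(4))   # [1, 3, 0, 2]
```

Note how the backtracking step is just popping the most recent row choice before trying the next alternative.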

Hamiltonian Circuit

A Hamiltonian circuit (also called a tour) of a graph is a path that starts at a given vertex, visits each vertex in the graph exactly once, and ends at the starting vertex. Some connected graphs do not contain Hamiltonian circuits and others do. We may use backtracking to discover if a particular connected graph has a Hamiltonian circuit.

Graph A Graph B

Embedding a Depth-First State-Space Tree in a Connected Graph

A method for searching the state space for this problem is defined as follows. Put the starting vertex at level 0 in the search tree and call it the zeroth vertex on the path (we may arbitrarily select any vertex as the starting vertex, since the circuit traverses all vertices). At level 1, create a child of the root for each remaining vertex that is adjacent to the first vertex. At each node in level 1, create a child node at level 2 for each adjacent vertex that is not already in the path from the root to this node, and so on.
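That level-by-level construction is exactly the recursion in a backtracking search; a Python sketch (the adjacency matrix below is a small illustrative graph, not Graph A or Graph B):

```python
def hamiltonian_circuit(adj, start=0):
    """Backtracking search for a Hamiltonian circuit.
    adj is a 0/1 adjacency matrix; returns the circuit as a vertex
    list ending back at `start`, or None if no circuit exists."""
    n = len(adj)
    path = [start]                       # the zeroth vertex on the path

    def extend():
        if len(path) == n:               # every vertex visited exactly once:
            return adj[path[-1]][start] == 1   # circuit iff edge back to start
        for v in range(n):
            # child nodes: adjacent vertices not already on the path
            if adj[path[-1]][v] == 1 and v not in path:
                path.append(v)
                if extend():
                    return True
                path.pop()               # backtrack: remove the failed choice
        return False

    return path + [start] if extend() else None

# A 4-cycle 0-1-2-3-0 with one chord (0-2)
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [1, 0, 1, 0]]
print(hamiltonian_circuit(adj))   # [0, 1, 2, 3, 0]
```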