Load balancing is how you get the most out of a parallelized task: the work is distributed across the available processors in proportions that keep each one busy, for maximum overall performance.

Static balancing can become more complex when one processor is faster than the others or one task takes less time (fewer processing steps, less work). In some routines, such as grid loops, this is fine since the work size is pretty much the same for each task. In other routines one might have fore-knowledge that one portion of the task will take a while, complemented by many little tasks. The master program then feeds the large task to one processor and the little tasks in sequence to the other processors. Knowing beforehand which tasks will take longer and preparing a ‘pre-planned’ route to handle the balance is a static balancing method because it does not shift the balance ‘on the fly’. This works fine when the processing load is known and the work size for each ‘thread’ is known.
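
As a rough sketch of that pre-planned split (the bigTask/smallTask bodies, the counts, and the use of std::thread here are stand-ins I made up purely for illustration):

#include <thread>
#include <vector>

constexpr int nSmall   = 30;  /* many quick jobs */
constexpr int nHelpers = 3;   /* processors assigned to the quick jobs */

/* hypothetical task bodies, just for illustration */
static void bigTask()        { /* the one job known in advance to take a while */ }
static void smallTask(int t) { /* one of the many little jobs */ (void)t; }

int main()
{
    std::vector<std::thread> procs;

    /* pre-planned: the known-heavy task gets a processor to itself */
    procs.emplace_back(bigTask);

    /* the little tasks are dealt out round-robin, fixed before execution */
    for (int p = 0; p < nHelpers; ++p)
        procs.emplace_back([p] {
            for (int t = p; t < nSmall; t += nHelpers)
                smallTask(t);
        });

    for (auto &t : procs) t.join();
    return 0;
}

The key point is that the assignment is decided before execution starts; nothing rebalances if one helper falls behind.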

Dynamic load balancing attempts to solve the inefficiencies of multiple processors and idle time differently. During program execution, a dynamic load program attempts to work out how to distribute the work for maximum processing performance. Dynamic load programs might estimate the likely processing times of routines, or simply wait for each task to complete and then send another one. This is not usually a preplanned load balance, yet it is often more adaptive.
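
A minimal sketch of the ‘wait for a task to finish, then send another’ style, again with a made-up doWork task and std::async standing in for the worker processors:

#include <chrono>
#include <future>
#include <queue>
#include <vector>

/* hypothetical task whose cost is unknown until it runs */
static int doWork(int job) { return job * job; }

int main()
{
    std::queue<int> jobs;
    for (int j = 0; j < 100; ++j) jobs.push(j);

    std::vector<std::future<int>> slot;

    /* prime one job per worker */
    for (int w = 0; w < 4 && !jobs.empty(); ++w) {
        slot.push_back(std::async(std::launch::async, doWork, jobs.front()));
        jobs.pop();
    }

    /* whenever a worker finishes, immediately hand it the next job */
    while (!slot.empty()) {
        for (std::size_t i = 0; i < slot.size(); ) {
            if (slot[i].wait_for(std::chrono::milliseconds(1)) ==
                std::future_status::ready) {
                slot[i].get();                        /* collect the result */
                if (!jobs.empty()) {                  /* refill this slot */
                    slot[i] = std::async(std::launch::async, doWork, jobs.front());
                    jobs.pop();
                    ++i;
                } else {
                    slot.erase(slot.begin() + i);     /* no work left */
                }
            } else {
                ++i;
            }
        }
    }
    return 0;
}

Here nothing is planned ahead; whichever slot frees up first simply receives the next job.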

Frequently these sorts of programs use designs like processor-driven activity, where each worker requests a new task when it completes its current one. This is a highly useful form, yet it runs into new problems not experienced by static load routines. For instance, the master program can only answer one task request at a time, causing a bottleneck. Solutions include splitting the master task handler or creating a distributed work pool. Where a static load typically knows the processing load of each task and formulates accordingly, a dynamic load routine usually does not, and it will typically be designed around the idea that it will not know. This is useful for routines that work on external data sources with unknown content, where the work to be done depends on what is found.
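
A sketch of the work-pool idea, again assuming a hypothetical doWork task: every worker takes its own next job under a lock, so no single master has to answer the requests.

#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

std::queue<int> pool;      /* the shared pool of pending jobs */
std::mutex poolLock;       /* guards the pool */

/* hypothetical task body for illustration */
static void doWork(int job) { volatile int sink = job * job; (void)sink; }

/* take the next job, or nothing if the pool is drained */
static std::optional<int> nextJob()
{
    std::lock_guard<std::mutex> guard(poolLock);
    if (pool.empty()) return std::nullopt;
    int job = pool.front();
    pool.pop();
    return job;
}

int main()
{
    for (int j = 0; j < 100; ++j) pool.push(j);

    std::vector<std::thread> workers;
    for (int w = 0; w < 4; ++w)
        workers.emplace_back([] {
            /* processor-driven: each worker requests its own next task */
            while (auto job = nextJob())
                doWork(*job);
        });

    for (auto &t : workers) t.join();
    return 0;
}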

Personally, I feel that a bit of both works best. I always try to anticipate what the load/work size of the routines I will be tasking will be, and I pre-plan whatever fore-hand optimizations I can when applicable. That said, combining such planning with dynamic execution seems like it would make for much quicker and stronger adaptivity when performing the task. In some cases there is not much need to pre-consider the tasks, and simply letting each worker grab a new task when it is done works well (a small job pool with fair-sized task lengths, for one). When I see a routine that does not factor anything beforehand, I do not think of it as smarter, more like ‘scrappy’. On the other hand, when I see something that finds clever ways to deal with what it is about to do, I think that is ‘smart’.

In previous coding examples I provided both forms:

Static because it simply spreads the load in a pre-planned way:

/* pre-planned: each of the 10 processors always receives the same fixed block */
for ( p = 0; p < 10; p++ )
    send( *gfxContext, *blockSize, p * 48, p );

Dynamic because each worker grabs the next job on its own without much pre-planning, which in the case of the quick sort routine worked well:

while ( true ) /* wait for a job */
{
    if ( !stackLock )
    {
        /* note: this flag test-and-set is not atomic; a real build would
           need a mutex or atomic flag to make the shared stacks safe */
        stackLock = true;

        if ( !pStart.empty() ) {
            ssrt = (int)pStart.pop();
            esrt = (int)pEnd.pop();
            processFlag++;
            stackLock = false;
            break; /* got a job, now go do it */
        }

        stackLock = false;
    }
}

Reference

Wilkinson, B., & Allen, M. (2005). Parallel Programming. Pearson Education, Inc.