High-performance algorithms for compile-time scheduling of parallel processors

Authors Kwok, Yu-Kwong
Issue Date 1997
Summary Scheduling and mapping of precedence-constrained task graphs onto processors is one of the most crucial problems in parallel and distributed computing, and it continues to attract a great deal of research interest. Due to the NP-completeness of the problem, a large portion of related work is based on heuristic approaches, with the objective of finding good solutions within a reasonable amount of time. The major drawback of most existing heuristics, however, is that they are evaluated only on small problem sizes, so their scalability is unknown. As a result, they are not applicable in a real environment when presented with moderately large problems. Furthermore, most heuristics ignore important details of the application and the target system, or do not perform well when these details are taken into account. In this research, we propose three scheduling algorithms that achieve the conflicting goals of high performance and low time complexity. In addition, we aim to make our algorithms scalable and applicable in a real environment. The first novelty of our algorithms is their efficient scheduling strategies, which yield better solutions without incurring high complexity. The second novelty is that our algorithms are parallelized, which further reduces their running times. The first algorithm, called the Parallel Fast Assignment using Search Technique (PFAST) algorithm, is a linear-time algorithm that uses an effective neighborhood search strategy to find a good solution in a short time. The PFAST algorithm constructs an initial solution using a fast heuristic and then refines the solution using a parallel random search. The second algorithm, called the Parallel Genetic Search (PGS) algorithm, employs a parallel genetic technique based on the premise that the recombinative nature of a genetic algorithm can potentially determine an optimal schedule. 
Using well-defined crossover and mutation operators, the PGS algorithm judiciously combines good building blocks of potential solutions to move towards a better solution. The third algorithm, called the Parallel Bubble Scheduling and Allocation (PBSA) algorithm, is applicable in a general distributed computing environment in that it takes into account practical considerations such as link contention, heterogeneity of processors, and the network topology. The PBSA algorithm uses an efficient strategy for simultaneous scheduling of tasks and messages. The algorithm is parallelized by partitioning the task graph into smaller subgraphs, finding their sub-schedules concurrently, and then combining these sub-schedules. The proposed algorithms have been evaluated through extensive experiments and yield consistently better performance than numerous existing algorithms.
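The PFAST-style refinement described above (a fast initial assignment followed by a neighborhood search that keeps only improving moves) can be sketched as follows. This is a minimal serial toy, not the thesis's algorithm: the task graph, costs, uniform communication cost, and all function names are illustrative assumptions, and the thesis's parallel random search is replaced here by a single sequential loop.

```python
import random

# Hypothetical task graph: per-task compute costs and successor lists.
# These values are illustrative, not taken from the thesis.
COSTS = {"a": 2, "b": 3, "c": 3, "d": 4}
SUCCS = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
COMM = 1          # assumed uniform cost for a cross-processor edge
NUM_PROCS = 2

def topo_order(succs):
    """Return the tasks in a topological order (Kahn's algorithm)."""
    indeg = {t: 0 for t in succs}
    for t in succs:
        for s in succs[t]:
            indeg[s] += 1
    ready = [t for t in succs if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

def makespan(assign):
    """Schedule length when each task runs on processor assign[task],
    honoring precedence and charging COMM for cross-processor edges."""
    preds = {t: [] for t in SUCCS}
    for t, ss in SUCCS.items():
        for s in ss:
            preds[s].append(t)
    finish = {}
    proc_free = [0] * NUM_PROCS
    for t in topo_order(SUCCS):
        p = assign[t]
        ready = max([finish[u] + (COMM if assign[u] != p else 0)
                     for u in preds[t]] or [0])
        start = max(ready, proc_free[p])
        finish[t] = start + COSTS[t]
        proc_free[p] = finish[t]
    return max(finish.values())

def neighborhood_search(steps=200, seed=0):
    """Fast round-robin initial assignment, then random point moves
    that are kept only when they shorten the schedule."""
    rng = random.Random(seed)
    assign = {t: i % NUM_PROCS for i, t in enumerate(topo_order(SUCCS))}
    best = makespan(assign)
    for _ in range(steps):
        t = rng.choice(list(assign))
        old = assign[t]
        assign[t] = rng.randrange(NUM_PROCS)
        cand = makespan(assign)
        if cand < best:
            best = cand
        else:
            assign[t] = old      # revert a non-improving move
    return assign, best
```

Because every accepted move strictly shortens the schedule, the loop never worsens the initial solution; the PGS and PBSA algorithms would replace this point-move neighborhood with genetic recombination and combined task/message scheduling, respectively.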
Note Thesis (Ph.D.)--Hong Kong University of Science and Technology, 1997
Language English
Format Thesis
Access View full-text via DOI
Files in this item:
File Description Size Format
th_redirect.html 341 B HTML