Efficiently scheduling parallel tasks onto the processors of a multiprocessor system is critical to achieving high performance. Given perfect information at compile time, a static scheduling strategy can produce an assignment of tasks to processors that ideally balances the load among the processors while minimizing run-time scheduling overhead and the average memory-referencing delay. Since perfect information is seldom available, however, dynamic scheduling strategies defer part or all of the scheduling work to run time, when a large amount of useful information about the tasks becomes available. In this thesis, we introduce a class of algorithms, Self-Adjusting Dynamic Scheduling (SADS), that use an on-line optimization strategy to dynamically compute partial schedules based on the loads of the processors and the memory locality (affinity) between tasks and processors. The SADS algorithms dedicate one processor to scheduling; they mask the overhead of the scheduling process by performing it while the working processors execute their previously assigned tasks. SADS algorithms explicitly control and allocate the duration of the scheduling process: they incrementally build a schedule of the available tasks, so the scheduling process can be interrupted at any time to assign the set of tasks scheduled so far to their corresponding processors. This thesis provides analytical and empirical analyses of the SADS family of scheduling algorithms. Our results show the strengths and weaknesses of the SADS algorithms relative to other dynamic scheduling algorithms.
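The core idea of an incrementally built, interruptible, affinity-aware schedule can be sketched as follows. This is an illustrative toy, not the thesis's actual SADS algorithm: the function name, the unit-cost load model, and the `budget` parameter modeling interruption are all assumptions made for the example.

```python
def sads_partial_schedule(tasks, loads, affinity, budget):
    """Toy sketch of incremental, interruptible scheduling (hypothetical API,
    not the thesis's SADS algorithm).

    tasks    -- list of task ids awaiting assignment
    loads    -- current load per processor (mutated as tasks are assigned)
    affinity -- affinity[t][p]; higher means task t's data is closer to p
    budget   -- max tasks to schedule before yielding, modeling the point
                at which the scheduling process is interrupted
    """
    assignment = {}
    pending = list(tasks)
    for _ in range(min(budget, len(pending))):
        t = pending.pop(0)
        # Choose the processor minimizing (load - affinity bonus), trading
        # load balance against memory locality.
        best = min(range(len(loads)), key=lambda p: loads[p] - affinity[t][p])
        assignment[t] = best
        loads[best] += 1  # assume one unit of work per task, for simplicity
    # The partial schedule can be dispatched now; unscheduled tasks wait
    # for the next scheduling pass.
    return assignment, pending
```

In a SADS-style system this loop would run on the dedicated scheduler processor, overlapping with the workers' execution of previously assigned tasks; the `budget` cutoff stands in for the explicit control over the duration of the scheduling process.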