
PARADIGM Compiler for Distributed Memory Message-Passing Multicomputers (P. Banerjee)

Distributed-memory message-passing machines such as the IBM SP-2, the Intel Paragon, and the Thinking Machines CM-5 offer significant advantages over shared-memory multiprocessors in terms of cost and scalability. Unfortunately, extracting the full computational power of these machines requires users to write efficient software for them, which is an extremely laborious process. One major reason for this difficulty is the absence of a single global shared address space: the programmer must distribute code and data across the processors manually and manage communication among tasks explicitly. There is therefore a clear need for efficient parallel programming support on these machines. The PARADIGM compiler project addresses this problem by developing an automated means of parallelizing sequential programs through compiler dependence analysis and compiling them for efficient execution on distributed-memory machines.

The PARADIGM compiler is targeted at both structured and unstructured parallel numerical applications written in FORTRAN 77 and High Performance Fortran (HPF). Sequential FORTRAN programs for regular applications are first parallelized automatically using the Parafrase-2 parallelizing compiler; PARADIGM then performs several compiler transformations on the program and generates efficient message-passing FORTRAN code.
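
To make the compilation process concrete, the following is a minimal hand-written sketch of the kind of SPMD message-passing code such a compiler might generate for a simple one-dimensional stencil loop over a block-distributed array. It is illustrative only, not actual PARADIGM output; MPI is used here in place of any machine-specific message-passing library, and the block size NLOC is assumed to divide the problem evenly.

      PROGRAM STENCIL
C     Hand-written sketch (not PARADIGM output) of generated SPMD
C     code for the sequential loop
C        DO I = 2, N-1
C           A(I) = 0.5 * (B(I-1) + B(I+1))
C     with A and B block-distributed over P processors.
      INCLUDE 'mpif.h'
      INTEGER NLOC
      PARAMETER (NLOC = 256)
C     Each processor holds one block of B, plus two ghost cells.
      DOUBLE PRECISION A(NLOC), B(0:NLOC+1)
      INTEGER RANK, P, LEFT, RIGHT, I, IERR
      INTEGER STATUS(MPI_STATUS_SIZE)

      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, P, IERR)
      LEFT  = RANK - 1
      RIGHT = RANK + 1

      DO 10 I = 1, NLOC
         B(I) = DBLE(RANK*NLOC + I)
 10   CONTINUE

C     Compiler-generated boundary exchange with the neighbors.
      IF (RIGHT .LT. P) THEN
         CALL MPI_SEND(B(NLOC), 1, MPI_DOUBLE_PRECISION, RIGHT,
     +                 0, MPI_COMM_WORLD, IERR)
         CALL MPI_RECV(B(NLOC+1), 1, MPI_DOUBLE_PRECISION, RIGHT,
     +                 1, MPI_COMM_WORLD, STATUS, IERR)
      END IF
      IF (LEFT .GE. 0) THEN
         CALL MPI_RECV(B(0), 1, MPI_DOUBLE_PRECISION, LEFT,
     +                 0, MPI_COMM_WORLD, STATUS, IERR)
         CALL MPI_SEND(B(1), 1, MPI_DOUBLE_PRECISION, LEFT,
     +                 1, MPI_COMM_WORLD, IERR)
      END IF

C     Reduced loop bounds: each processor sweeps only its own
C     block; the guard masks off the two global boundary points.
      DO 20 I = 1, NLOC
         IF (RANK*NLOC + I .GT. 1 .AND.
     +       RANK*NLOC + I .LT. P*NLOC) THEN
            A(I) = 0.5D0 * (B(I-1) + B(I+1))
         END IF
 20   CONTINUE

      CALL MPI_FINALIZE(IERR)
      END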

Numerous compiler optimizations, such as loop bound reduction, mask extraction and elimination, message vectorization, message chaining, and message aggregation, are performed automatically by the PARADIGM compiler. In addition, the PARADIGM compiler is unique in its ability (1) to perform automated data distribution for regular data structures, which conventionally must be specified by compiler directives in High Performance Fortran; (2) to exploit task and data parallelism simultaneously; and (3) to exploit regularity within irregularity in iterative applications.
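
As an illustration of one of these optimizations, the following hand-written sketch (again not actual PARADIGM output) shows message vectorization: the single-element sends that a naive translation would issue inside a loop are hoisted out of the loop and combined into one message covering the whole array section. The sketch assumes at least two processes.

      PROGRAM MSGVEC
C     Hand-written illustration of message vectorization.
      INCLUDE 'mpif.h'
      INTEGER N
      PARAMETER (N = 100)
      DOUBLE PRECISION B(N)
      INTEGER RANK, I, IERR
      INTEGER STATUS(MPI_STATUS_SIZE)

      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)

      DO 10 I = 1, N
         B(I) = DBLE(I)
 10   CONTINUE

      IF (RANK .EQ. 0) THEN
C        Naive translation: N one-element messages.
C           DO I = 1, N
C              CALL MPI_SEND(B(I), 1, MPI_DOUBLE_PRECISION, 1,
C                            0, MPI_COMM_WORLD, IERR)
C           END DO
C        After message vectorization: a single N-element message.
         CALL MPI_SEND(B, N, MPI_DOUBLE_PRECISION, 1,
     +                 0, MPI_COMM_WORLD, IERR)
      ELSE IF (RANK .EQ. 1) THEN
         CALL MPI_RECV(B, N, MPI_DOUBLE_PRECISION, 0,
     +                 0, MPI_COMM_WORLD, STATUS, IERR)
      END IF

      CALL MPI_FINALIZE(IERR)
      END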

The PARADIGM compiler is being evaluated on the IBM SP-2, the Intel Paragon, the Thinking Machines CM-5, and networks of workstations for a variety of regular and irregular applications written in Fortran. Ongoing work targets distributed shared-memory machines such as the SGI Origin 2000 and the HP/Convex Exemplar SPP-2000. Another future direction of the PARADIGM project is to develop techniques for automatically parallelizing MATLAB programs.

More information about the PARADIGM project can be found at the following URL on the World Wide Web: http://www.ece.nwu.edu/cpdc/Paradigm/Paradigm.html.

The following significant results were obtained in this project during the past year.

Accomplishment 1

We have developed a strategy for compiling general block, cyclic, and block-cyclic data distributions for general DO loops. The method is based on Fourier-Motzkin elimination. The compiler has been evaluated on several benchmark programs and compared with a commercial HPF compiler from PGI.
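
As a hedged illustration of the underlying idea (the notation here is ours, not necessarily that of the published method): for a one-dimensional array distributed CYCLIC(b) over P processors, a global index i owned by processor p in cycle c at block offset l satisfies

\[ i = (cP + p)\,b + l, \qquad 0 \le l \le b - 1 . \]

For a loop iterating over $L \le i \le U$, Fourier-Motzkin elimination of i and l from this system projects out the local cycle bounds

\[ \left\lceil \frac{L - pb - b + 1}{Pb} \right\rceil \;\le\; c \;\le\; \left\lfloor \frac{U - pb}{Pb} \right\rfloor , \]

and, for each cycle c, the offset bounds $\max(0,\, L - (cP+p)b) \le l \le \min(b-1,\, U - (cP+p)b)$. These two sets of bounds translate directly into the nested local DO loops that the generated node program executes.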

Accomplishment 2

For programs whose data must be laid out differently in different phases of execution, it is widely known that the most efficient approach is to allow dynamic data distributions. In this research, we have developed a compilation method that performs automatic data distribution for programs requiring dynamic data redistribution. We have evaluated the compilation strategy on several real benchmark programs on two parallel machines.
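
The following hand-written sketch, using standard HPF directives rather than anything specific to PARADIGM, shows the kind of phase behavior that motivates this work: a recurrence runs along rows in the first phase and along columns in the second, so no single static distribution serves both phases well.

      PROGRAM PHASES
C     Illustrative sketch of a two-phase computation whose best
C     data distribution changes between phases.
      INTEGER N
      PARAMETER (N = 512)
      REAL X(N,N)
      INTEGER I, J
!HPF$ PROCESSORS PR(8)
!HPF$ DYNAMIC X
!HPF$ DISTRIBUTE X(BLOCK,*) ONTO PR

      X = 1.0
C     Phase 1: recurrence along each row; (BLOCK,*) keeps every
C     row on a single processor, so this phase needs no messages.
      DO 10 I = 1, N
         DO 10 J = 2, N
            X(I,J) = X(I,J) + X(I,J-1)
 10   CONTINUE

C     Phase 2: the recurrence runs along columns, so the array is
C     redistributed (an all-to-all transpose) to make each column
C     local before the sweep.
!HPF$ REDISTRIBUTE X(*,BLOCK)
      DO 20 J = 1, N
         DO 20 I = 2, N
            X(I,J) = X(I,J) + X(I-1,J)
 20   CONTINUE

      PRINT *, X(N,N)
      END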

Accomplishment 3

We have performed a detailed performance evaluation of message-driven parallel applications on both shared-memory and distributed-memory parallel machines. We have also developed a compilation strategy for generating message-driven programs automatically from regular single-program multiple-data (SPMD) programs, and we have obtained detailed characterizations of computation grain sizes, message sizes, load imbalance, and message locality in such parallel applications.
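
The following hand-written sketch (illustrative only; it uses plain MPI rather than the project's compilation infrastructure) conveys the essence of the message-driven style: receives are posted up front, and the arrival of messages, rather than a fixed program order, drives the computation.

      PROGRAM MSGDRV
C     Illustrative sketch of message-driven execution.  Rank 0
C     posts one nonblocking receive per peer and then handles
C     messages in whatever order they arrive.
      INCLUDE 'mpif.h'
      INTEGER MAXP
      PARAMETER (MAXP = 64)
      DOUBLE PRECISION BUF(MAXP), WORK, SUM
      INTEGER REQ(MAXP), STATUS(MPI_STATUS_SIZE)
      INTEGER RANK, P, K, IDX, IERR

      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, P, IERR)

      IF (RANK .EQ. 0) THEN
         SUM = 0.0D0
C        Post all receives up front (assumes P - 1 <= MAXP).
         DO 10 K = 1, P-1
            CALL MPI_IRECV(BUF(K), 1, MPI_DOUBLE_PRECISION, K,
     +                     0, MPI_COMM_WORLD, REQ(K), IERR)
 10      CONTINUE
C        Message arrival, not program order, drives the handler.
         DO 20 K = 1, P-1
            CALL MPI_WAITANY(P-1, REQ, IDX, STATUS, IERR)
            SUM = SUM + BUF(IDX)
 20      CONTINUE
         PRINT *, 'SUM =', SUM
      ELSE
         WORK = DBLE(RANK)
         CALL MPI_SEND(WORK, 1, MPI_DOUBLE_PRECISION, 0,
     +                 0, MPI_COMM_WORLD, IERR)
      END IF

      CALL MPI_FINALIZE(IERR)
      END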



