PARADIGM: A Parallelizing Compiler for Distributed Memory Message-Passing Multicomputers

Researcher: Prof. Prithviraj Banerjee

Problem Description

Distributed memory message-passing machines such as the IBM SP-2, the Intel Paragon, and the Thinking Machines CM-5 offer significant advantages over shared-memory multiprocessors in terms of cost and scalability. Unfortunately, extracting that computational power requires users to write efficient software for these machines, which is an extremely laborious process. One major reason for this difficulty is the absence of a single global shared address space: the programmer must manually distribute code and data across processors and manage communication among tasks explicitly. There is a clear need for efficient parallel programming support on these machines. The PARADIGM compiler project addresses this problem by developing an automated means of parallelizing sequential programs through compiler dependence analysis and compiling them for efficient execution on distributed memory machines.
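
The bookkeeping that the programmer would otherwise do by hand can be illustrated with a small sketch (Python is used here purely for illustration; the function names are hypothetical and this is not PARADIGM code): under a block distribution of a 1-D array, each processor must work out which elements it owns and which neighbor values it must receive before computing a simple stencil.

```python
# Illustrative sketch (not PARADIGM code): the manual bookkeeping needed
# on a distributed-memory machine for a 1-D stencil a[i] = b[i-1] + b[i+1]
# when n elements are block-distributed over p processors.

def block_bounds(pid, n, p):
    """Global index range [lo, hi) owned by processor `pid`."""
    size = (n + p - 1) // p          # block size; the last block may be smaller
    lo = pid * size
    hi = min(lo + size, n)
    return lo, hi

def needed_ghosts(pid, n, p):
    """Off-processor elements `pid` must receive before computing its block."""
    lo, hi = block_bounds(pid, n, p)
    ghosts = []
    if lo - 1 >= 0:                  # boundary element from the left neighbor
        ghosts.append(lo - 1)
    if hi < n:                       # boundary element from the right neighbor
        ghosts.append(hi)
    return ghosts

# 16 elements on 4 processors: processor 1 owns [4, 8) and must
# receive elements 3 and 8 from its neighbors.
print(block_bounds(1, 16, 4))   # (4, 8)
print(needed_ghosts(1, 16, 4))  # [3, 8]
```

On a real machine each of these ghost elements becomes an explicit send/receive pair; automating exactly this kind of analysis and communication generation is what the compiler provides.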

Project Overview:

What sets the PARADIGM project apart from other compiler efforts for distributed-memory multicomputers is the broad range of research topics being addressed. Current research in the PARADIGM project focuses on the following areas:
Automatic data partitioning
Compile-time estimation of communication costs
Compilation and communication generation
Data redistribution & storage representations
Synthesis of high-level communication
Support for irregular computations
Exploitation of task and data parallelism
Automatic support for multithreaded execution
Compiler-assisted algorithm-based fault tolerance
Compiler support for distributed shared memory

Research Results

The PARADIGM compiler targets both structured and unstructured parallel numerical applications written in FORTRAN 77 and High Performance Fortran. Sequential FORTRAN programs for regular applications are first parallelized automatically using the Parafrase-2 parallelizing compiler; the PARADIGM compiler then applies a series of transformations to the program and generates efficient message-passing FORTRAN code.
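
As a rough illustration of one such transformation (a hypothetical example, not actual PARADIGM output): a sequential loop over a block-distributed array becomes an SPMD loop whose bounds are reduced so that each processor iterates only over the indices it owns.

```python
# Hypothetical sketch of loop bound reduction: a sequential loop
#     do i = 1, n:  a(i) = 2*b(i)
# over a block-distributed array becomes an SPMD loop that each
# processor executes only over its own block.

def local_iterations(pid, n, p):
    """Iteration set executed by processor `pid` after bound reduction."""
    size = (n + p - 1) // p
    lo = pid * size
    hi = min(lo + size, n)
    return range(lo, hi)

def spmd_compute(pid, n, p, b):
    # Each processor updates only its own block of `a`; this loop has no
    # loop-carried dependence, so no communication is generated for it.
    return {i: 2 * b[i] for i in local_iterations(pid, n, p)}

# The union of all processors' results reproduces the sequential result.
n, p = 10, 3
b = list(range(n))
a = {}
for pid in range(p):
    a.update(spmd_compute(pid, n, p, b))
print(a == {i: 2 * b[i] for i in range(n)})  # True
```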

Numerous compiler optimizations, such as loop bound reduction, mask extraction and elimination, message vectorization, message chaining, and message aggregation, are performed automatically by the PARADIGM compiler. In addition, the PARADIGM compiler is unique in its ability
(1) to perform automated data distribution for regular data structures;
(2) to support simultaneous exploitation of task and data parallelism;
(3) to exploit regularity within irregularity in iterative applications; and
(4) to handle arbitrary block-cyclic data distributions and arbitrary data alignments.
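
The block-cyclic distributions mentioned in (4) generalize both block and cyclic layouts. A small sketch may help (an explanatory model in Python, not the compiler's implementation): blocks of b consecutive elements are dealt round-robin to p processors, and the compiler must be able to map any global index to its owner and local position.

```python
# Illustrative model of a block-cyclic(b) distribution of a 1-D array
# over p processors. block-cyclic(1) is a cyclic distribution;
# block-cyclic(ceil(n/p)) degenerates to a plain block distribution.

def owner(i, b, p):
    """Processor owning global index i under block-cyclic(b) on p processors."""
    return (i // b) % p

def global_to_local(i, b, p):
    """Map global index i to (owning processor, local index on that processor)."""
    cycle = i // (b * p)         # which full round of blocks i falls in
    offset = i % b               # position within its block
    return owner(i, b, p), cycle * b + offset

# block-cyclic(2) on 3 processors: blocks of 2 dealt round-robin, so
# global indices 0..11 are owned by 0,0,1,1,2,2,0,0,1,1,2,2.
print([owner(i, 2, 3) for i in range(12)])
print(global_to_local(7, 2, 3))  # (0, 3): proc 0's second block, position 1
```

Generating correct communication for arbitrary b, p, and alignments requires the compiler to reason about exactly these index mappings at compile time.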

The PARADIGM compiler is being evaluated on the IBM SP-2 and on networks of workstations for a variety of regular and irregular applications written in Fortran. Ongoing work extends the compiler to distributed shared memory machines such as the SGI Origin.

The work has been funded by the National Science Foundation.


Team Members:

The PARADIGM team currently includes:

Group Alumni:

Other Related Work:

High Performance Fortran Forum (HPFF)
Fortran D - Rice University
Fortran 90D - Syracuse University
Fortran M - Argonne National Lab
Fx - Carnegie Mellon University
SUIF - Stanford University
Vienna Fortran - University of Vienna
CHAOS - University of Maryland

Send any questions to Professor Banerjee.