TASK DESCRIPTIONS

Specific tasks in the MATCH project include: development of a hardware testbed consisting of COTS FPGAs, embedded processors, and DSP processors; development of a basic compiler for mapping a given MATLAB application onto this heterogeneous target; investigation of automated parallelization and mapping techniques; design and support of compiler directives; development of library functions and applications of interest to DOD; and development of faster algorithms for compilation. The MATCH project consists of 8 research tasks. A brief description of each task, an estimate of the percentage of effort it contributes to the overall project, and the progress made on it at the end of the current quarter are given below.

Task 1: Development of a Testbed (10% of total project)

- Development of a hardware testbed consisting of a VME chassis, Motorola embedded boards, Transtech DSP boards, an Annapolis Wildchild FPGA board, and a FORCE 5V board

Task 2: Implementation of Basic MATLAB Compiler (30% of total project)

- Implementation of a compiler that takes in MATLAB programs, generates C code for the embedded and DSP processors and RTL VHDL code for the FPGA board, and uses commercial C and VHDL compilers to generate the object code

Task 3: Automatic Parallelism and Mapping (15% of total project)

- Developing heuristics based on mixed integer linear programming to map a given dataflow graph of a MATLAB program onto heterogeneous resources, either optimizing resources under performance constraints or optimizing performance under resource constraints

Task 4: MATLAB Compiler Directives (10% of total effort)

- Development of directives to specify type, shape, size, precision, data distribution and alignment, task mapping, and resource and time constraints

Task 5: Evaluation of Adaptive Applications (10% of total effort)

- Evaluating applications such as Space-Time Adaptive Processing and Hyperspectral Image Processing on the MATCH testbed by writing them in MATLAB, manually converting them to C and VHDL for the various components of the testbed, and measuring performance

Task 6: Library Development (15% of total effort)

- Identification and implementation of basic primitives: FFT, FIR/IIR filtering, and matrix addition and multiplication operations
- Implementing functions on embedded, DSP and FPGA boards
- Characterizing performance for varying problem sizes and varying numbers of processors and FPGAs; this characterization will be used by the compiler

Task 7: Interactive Tools (5% of total effort)

- Develop tools for function composition and analysis
- Develop GUI tools for monitoring distributed adaptive applications

Task 8: Faster Algorithms for Compilation (5% of total effort)

- Develop parallel/distributed algorithms for logic synthesis
- Develop parallel algorithms for placement and routing based on the VPR tools

The detailed descriptions in each task are given below.

We will develop a prototype hardware testbed for adaptive computing systems, which will consist of COTS FPGAs, embedded processors, and DSP processors. This will allow us to demonstrate our compiler on a real system with real applications.

We will develop a compiler which will parse MATLAB programs, build a control and data flow graph (CDFG), perform static and dynamic type and size inferencing, and partition the CDFG among the various components of the target hardware automatically. We will subsequently generate C code for the embedded and DSP targets and RTL VHDL code for the FPGA targets, using native C compilers to generate the object code for the processors, and logic synthesis and place-and-route tools to generate the FPGA binaries. We will transfer this technology to Integrated Sensors Incorporated (ISI).
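To make the type and size inferencing step concrete, the following is a minimal sketch, not part of the MATCH compiler itself, of how static shape inference over an expression tree might work. The node encoding, variable names, and shapes are all invented for illustration.

```python
# Hypothetical sketch of static shape inferencing over a tiny expression
# tree, of the kind a MATLAB compiler must perform before partitioning a
# CDFG and emitting C or VHDL. All names and shapes are illustrative only.

def infer_shape(node, env):
    """Return the (rows, cols) shape of an expression tree node.

    `node` is a tuple: ('var', name), ('matmul', a, b), or ('add', a, b).
    `env` maps variable names to their declared or inferred shapes.
    """
    kind = node[0]
    if kind == 'var':
        return env[node[1]]
    if kind == 'add':
        sa, sb = infer_shape(node[1], env), infer_shape(node[2], env)
        if sa != sb:
            raise TypeError(f"shape mismatch in addition: {sa} vs {sb}")
        return sa
    if kind == 'matmul':
        (ra, ca), (rb, cb) = infer_shape(node[1], env), infer_shape(node[2], env)
        if ca != rb:
            raise TypeError(f"inner dimensions disagree: {ca} vs {rb}")
        return (ra, cb)
    raise ValueError(f"unknown node kind: {kind}")

# MATLAB source:  D = A*B + C   with A: 4x8, B: 8x2, C: 4x2
env = {'A': (4, 8), 'B': (8, 2), 'C': (4, 2)}
expr = ('add', ('matmul', ('var', 'A'), ('var', 'B')), ('var', 'C'))
print(infer_shape(expr, env))  # prints (4, 2)
```

Once every node's shape and type are known statically, the back end can declare fixed-size C arrays or size VHDL entities without run-time checks.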

Since manual mapping of MATLAB applications onto the heterogeneous adaptive computing target is tedious, we will develop automated compiler techniques for identifying and extracting various levels of parallelism from MATLAB programs, for mapping the tasks to the different targets, and for determining a cost-effective solution that optimizes performance under resource constraints or optimizes resources under performance constraints. We will also transfer this technology to ISI.
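The mapping problem described above can be illustrated with a small sketch. The project formulates it as mixed integer linear programming (Task 3); the greedy version below only conveys the trade-off being optimized, and every task name, cost, and area figure is invented.

```python
# Hedged sketch of the mapping problem: assign each node of a dataflow
# graph to a resource (embedded CPU or FPGA here, for brevity) so total
# execution time is minimized while FPGA area stays within a budget.
# The real formulation is an MILP; this greedy heuristic is illustrative
# only, and all numbers below are invented.

def map_tasks(tasks, fpga_area_budget):
    """tasks: list of dicts with per-resource time costs and FPGA area."""
    mapping, area_used, total_time = {}, 0, 0
    # Consider FPGA-friendly tasks first: largest speedup per unit of area.
    order = sorted(tasks,
                   key=lambda t: (t['cpu_time'] - t['fpga_time']) / t['area'],
                   reverse=True)
    for t in order:
        if t['fpga_time'] < t['cpu_time'] and area_used + t['area'] <= fpga_area_budget:
            mapping[t['name']] = 'fpga'
            area_used += t['area']
            total_time += t['fpga_time']
        else:
            mapping[t['name']] = 'cpu'
            total_time += t['cpu_time']
    return mapping, total_time

tasks = [
    {'name': 'fft',    'cpu_time': 90, 'fpga_time': 10, 'area': 60},
    {'name': 'fir',    'cpu_time': 40, 'fpga_time': 5,  'area': 50},
    {'name': 'matmul', 'cpu_time': 30, 'fpga_time': 8,  'area': 70},
]
best_map, best_time = map_tasks(tasks, fpga_area_budget=100)
print(best_map)   # fft goes to the FPGA; fir and matmul stay on the CPU
print(best_time)  # prints 80
```

An MILP formulation replaces this greedy pass with 0/1 assignment variables and lets the solver trade resources against performance exactly, which is what makes both optimization directions (performance under resource constraints, and vice versa) expressible.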

The system will provide mechanisms using MATLAB directives to allow the programmer to indicate resource and performance constraints (such as latency, throughput, code size, and the number of processors or FPGAs), assertions (such as arithmetic precision, data types, and data sizes), and hints for generating the code (such as data and task distributions, or the use of specific implementations of library functions). The directives will allow a sophisticated DOD user to optimize applications on the target adaptive computing system. We will also transfer this technology to ISI.
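The report does not give the concrete directive syntax, so the sketch below assumes a hypothetical comment-based form, `%!match key=value`, embedded in MATLAB source (MATLAB comments begin with `%`, so such lines are ignored by a standard interpreter). The parser simply collects the hints a compiler front end could consume.

```python
# Hypothetical directive syntax and parser; the actual MATCH directive
# format is not specified in this report. '%!match key=value' lines are
# ordinary MATLAB comments, invisible to a standard MATLAB interpreter.

import re

DIRECTIVE = re.compile(r'^\s*%!match\s+(\w+)\s*=\s*(.+?)\s*$')

def parse_directives(matlab_source):
    """Collect directive key/value hints from MATLAB source text."""
    hints = {}
    for line in matlab_source.splitlines():
        m = DIRECTIVE.match(line)
        if m:
            hints[m.group(1)] = m.group(2)
    return hints

src = """\
%!match type=int16
%!match shape=64x64
%!match target=fpga
y = fir(x, h);
"""
print(parse_directives(src))
# prints {'type': 'int16', 'shape': '64x64', 'target': 'fpga'}
```

Embedding directives in comments keeps the annotated program a valid MATLAB program, so the same source runs unmodified in MATLAB while guiding the MATCH compiler.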

We will use two innovative techniques to speed up the compilation process, which is known to take a long time for adaptive computing systems. The compiler will make use of precompiled packages that will contain implementations of standard MATLAB libraries as well as application specific functions. User MATLAB programs will be composed out of the precompiled packages. We will use parallel algorithms for compilation of user defined functions to speed up the compilation process. As part of this we will develop parallel algorithms for behavioral synthesis, logic synthesis, technology mapping, partitioning, placement and routing of FPGAs.

While the MATLAB language includes several hundred library functions, we will identify a subset to implement in the most efficient manner on the embedded processors, DSPs, and FPGAs. We will develop parameterized implementations of the functions, characterized by their performance/resource costs, in order to drive the automated mapping tasks of our compiler. We will use some of the library functions developed by ISI as part of RTExpress to help map some MATLAB functions to networks of SUN workstations.
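The following sketch shows how such performance characterizations could drive the compiler's choice among library implementations. The cost models and their coefficients are entirely invented; the point is only that each target's characterized cost is evaluated at the actual problem size and the cheapest variant wins.

```python
# Sketch of characterized library costs driving implementation selection.
# Each FFT variant has a simple cost model of per-call execution time;
# the coefficients below are invented for illustration, not measured.

FFT_VARIANTS = {
    'dsp':  lambda n: 0.5 * n * n.bit_length(),   # DSP board: n log n, fast
    'cpu':  lambda n: 2.0 * n * n.bit_length(),   # embedded CPU: slower
    'fpga': lambda n: 500 + 0.1 * n,              # FPGA: setup cost, then cheap
}

def pick_variant(variants, n):
    """Return the variant whose modeled cost is lowest for problem size n."""
    return min(variants, key=lambda name: variants[name](n))

print(pick_variant(FFT_VARIANTS, 64))    # prints dsp  (small FFT)
print(pick_variant(FFT_VARIANTS, 8192))  # prints fpga (large FFT)
```

Under these invented coefficients the fixed FPGA setup cost dominates for small transforms, so the DSP wins there, while the FPGA's low per-element cost wins for large ones; with measured characterizations for varying problem sizes (Task 6) the same selection rule applies.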

Our research on the compiler will be driven by real applications of interest to DOD. We will develop the best manual implementation of selected applications such as STAP (Space-Time Adaptive Processing) from Air Force Research Lab, and the Hyperspectral application from NASA. We will subsequently compare the best manual approach with the automated compiler approach for these applications in terms of quality of results and the code development times.

*Updated by Prith Banerjee June 15, 1999