

Applications

The big question at this time is how easy it is to recode typical Fortran applications in HPF. One of the first steps is to rewrite the program in ``data parallel'' style. A precompiler such as the APR or VAST tool can then propose data distribution directives and output a starting HPF program. This program can then be compiled with an HPF compiler, or run through a source-to-source analyzer that turns it into a Fortran 77 program with calls to a message passing library; that program is then compiled by a node compiler, producing an SPMD program.
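
As a concrete illustration, here is a minimal, hypothetical sketch of the ``data parallel'' style that these tools start from: whole-array operations plus HPF mapping directives. The program, array names, and sizes are invented for illustration and are not taken from any of the codes discussed below.

    PROGRAM RELAX
      INTEGER, PARAMETER :: N = 1024
      REAL :: U(N,N), UNEW(N,N)
    !HPF$ PROCESSORS P(4,4)
    !HPF$ DISTRIBUTE U(BLOCK,BLOCK) ONTO P
    !HPF$ ALIGN UNEW(I,J) WITH U(I,J)
      U = 0.0
      U(1,:) = 1.0                       ! boundary condition
      ! One Jacobi sweep written as a whole-array operation; an HPF
      ! compiler (or a precompiler such as xHPF) turns this into SPMD
      ! node code with the necessary boundary communication.
      UNEW(2:N-1,2:N-1) = 0.25 * (U(1:N-2,2:N-1) + U(3:N,2:N-1)  &
                                + U(2:N-1,1:N-2) + U(2:N-1,3:N))
      U(2:N-1,2:N-1) = UNEW(2:N-1,2:N-1)
    END PROGRAM RELAX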

At least one experiment concluded that while compilers are able to achieve quite decent speedups, ``supporting the conversion of programs into data-parallel programming style will be a challenge for language designers as well as compiler researchers'' [115].

Others [107] feel that layout decisions can also be complicated, although my own experience has so far been that these decisions are usually either arbitrary or obvious.

The rest of this section covers some issues that applications people face when dealing with HPF.

Irregular Problems

Irregular computations are computations that cannot be accurately characterized at compile time. Irregular data distributions are mappings between processors and data that cannot be known at compile time. Taken together, these comprise the category of ``irregular problems''.

This is an active area of research, dominated by Joel Saltz et al. in the early 1990's. Saltz et al. [157] describe two new ideas that would enable HPF compilers to deal with irregular computations: the programmer provides graph-theoretic information to the compiler via a mapping procedure, and the compiler recognizes when it is possible to reuse previously computed results from inspectors.[*]

This has since been generalized into a more automatic procedure. It has proved to be very effective in cases where the distribution, although unknown at compile time, does not change at run time, or where the same pattern of references occurs more than once in the same compilation unit.
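
For example, in a kernel of the following (hypothetical) form, the pattern of accesses to X through the indirection array IB is not known until run time, so the generated code must first run an inspector to build a communication schedule; as noted above, that schedule can be reused as long as IB does not change. All names here are invented for illustration.

    SUBROUTINE SWEEP(X, Y, W, IB, NNODE, NEDGE)
      INTEGER :: NNODE, NEDGE, I
      REAL    :: X(NNODE), Y(NEDGE), W(NEDGE)
      INTEGER :: IB(NEDGE)
    !HPF$ DISTRIBUTE X(BLOCK)
    !HPF$ DISTRIBUTE Y(BLOCK)
    !HPF$ ALIGN W(J) WITH Y(J)
    !HPF$ ALIGN IB(J) WITH Y(J)
      ! The gather X(IB(I)) cannot be analyzed at compile time; an
      ! inspector/executor scheme computes the communication schedule
      ! at run time and reuses it while IB is unchanged.
    !HPF$ INDEPENDENT
      DO I = 1, NEDGE
         Y(I) = Y(I) + W(I) * X(IB(I))
      END DO
    END SUBROUTINE SWEEP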

Hanxleden, Saltz, and Kennedy [183,182] describe an approach to compiling irregular problems in data parallel languages (Hanxleden's thesis). A value-based distribution in Fortran D means that the data values themselves are taken into account when load balancing; it can eliminate some redundant communications [181]. Fonlupt et al. [69] is a good survey of application load balancing techniques, and includes an example where local load balancing failed (so prefix sums are not always adequate). Perrin [154] describes another approach to irregular problems in Fortran D and HPF.

Sometimes irregularity arises from nested parallelism. Palmer, Prins, and Westfold [150] extend a technique of Blelloch's [27] called flattening to achieve efficient parallel apply operations on nested subsequences. In related work, Nyland, Chatterjee, and Prins demonstrate that irregular problems can be expressed as segmented operations, and that segmented operations can in turn be expressed efficiently in HPF.
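
The following is a hedged illustration (not taken from [150] or the Nyland, Chatterjee, and Prins work) of how a segmented operation can be written with the standard HPF library: the SEGMENT argument of SUM_PREFIX changes value at each segment boundary, and the function computes an independent running sum within each segment.

    PROGRAM SEGSCAN
      USE HPF_LIBRARY
      REAL    :: V(8), S(8)
      LOGICAL :: SEG(8)
    !HPF$ DISTRIBUTE V(BLOCK)
    !HPF$ ALIGN S(I) WITH V(I)
    !HPF$ ALIGN SEG(I) WITH V(I)
      V   = (/ 1., 2., 3., 4., 5., 6., 7., 8. /)
      ! Segments are elements 1:3, 4:5, and 6:8; a new segment starts
      ! wherever SEG changes value.
      SEG = (/ .TRUE., .TRUE., .TRUE., .FALSE., .FALSE.,  &
               .TRUE., .TRUE., .TRUE. /)
      ! Expected result: 1 3 6 | 4 9 | 6 13 21
      S = SUM_PREFIX(V, SEGMENT = SEG)
      PRINT *, S
    END PROGRAM SEGSCAN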

Some Applications Written in HPF

1.
MOM, a 60,000-line ocean model code at Princeton, was as of February 1997 probably the largest HPF application in existence. It uses a 3D explicit grid-point model and is implemented in F77 and xHPF. The developers plan to tackle the atmospheric model next.
2.
De Sturler and Strumpen [63] describe their experiences with an earlier version of xHPF on three codes: matrix multiplication, Gaussian elimination with partial pivoting, and a finite volume discretization on a structured grid with a nested iterative solver.

3.
Chris Hill at MIT is working with the MIT ocean model, which runs on a DEC 8400 and is implemented in HPF.
4.
Cosmology codes, for example that of Ostriker and Norman [148], have been and are being written in HPF. The team already has data parallel versions of their codes, because they were originally implemented on the Thinking Machines CM-5. While much of the code can be expressed in HPF, problems include efficient parallel scans, parallel sorts, and 3D FFTs.

5.
Glenn Luecke and James Coyle [129] attempted to code efficient implementations of LU, QR, and Cholesky factorizations in HPF, but found that calling ScaLAPACK routines directly from an MPI program gave far better performance.

6.
http://www.npac.syr.edu/hpfa is the entry point for HPF Applications in the CRPC and NPAC collection. It links to a survey of typical algorithms (a list of templates of all sorts) and to a set of kernels, which are short self-contained applications (40 of them by April 1995, although some were removed in 1997 because they no longer worked with newer HPF compilers). The NPAC site is an excellent gateway into a large number of HPF-related topics.

7.
David Presberg reported on several HPF applications at the Cornell Theory Center, http://www.tc.cornell.edu/Papers/presberg.jul96 .
8.
http://www.npac.syr.edu/users/haupt/bbh/PITT_CODES/intro/pitt.html describes a Characteristic Initial Value Problem template, which could be extended to handle ``Gravitational Wave Extraction''. Not only is this an HPF program, but it is a good example of program exposition.

9.
Gaussian elimination is one of the applications that has been coded in HPF [156]; a minimal sketch of such a code appears after this list.

10.
CM Fortran codes - As Meadows and Miles point out in [133], a very important source of HPF applications is the existing base of CM and MP Fortran codes, originally developed on the Connection Machine and the MasPar. This is a large body of existing data parallel applications that is easy to port to HPF. The paper references five CM Fortran applications. One CM Fortran application is the N-body problem, implemented by several people including Johnsson and Hu [98,99].

11.
Clark, Kennedy, and Scott at the University of Houston attempted to parallelize GROMOS, a molecular dynamics program. Starting in 1992, they used Fortran D and ``PFortran'', a parallel Fortran from the University of Houston. Their experiences are summarized in [52,53] as a list of rules for would-be data parallel programmers.

12.
Kremer and Ramé coded part of UTCOMP into Fortran D [115]. UTCOMP is a big oil reservoir simulation program. [115] is a fairly thorough discussion of coding one of the UTCOMP subroutines in HPF (the subroutine in question is DISPER). The precompiler used was Fortran D and the node compiler was iPSC Fortran. Amoco has an application called Falcon, derived from GCOMP, which was developed beginning in 1982.

13.
Nagurney's tariff modelling program was already in data parallel form, which is why we at Cornell were able to code it in HPF. We used the xHPF precompiler and the xlf node compiler [21]. In spring 1996, Cornell used this as a case study for introducing the PGHPF and XLHPF compilers to their SP2 users. See http://www.tc.cornell.edu/Edu/Talks/Tariff.Case.Study .
14.
The Data Parallel Fortran project at Harvard has a number of benchmarks, listed at http://www.deas.harvard.edu/csecse/research/dpf/root.html . These are written in CM Fortran for the most part, and would need porting to HPF.
15.
ECMWF is porting weather codes to HPF+. As of 1996, the European Centre for Medium-Range Weather Forecasts, an international organization supported by 18 European states, was analyzing the requirements for parallelization of IFS and other weather codes on an HPF-like basis, and planned to migrate kernels of increasing complexity to HPF+, evaluating this approach against the message passing approach. HPF+ is what the current Vienna HPF language is being called (see below).

16.
The NAS benchmarks. APR has coded some of the NAS benchmarks; they can be found at infomall and downloaded from http://ftp.infomall.org/home/tenants/apr.html . HPF-NPB 1.0 became available in 1996.

17.
David Klepacki used IS (Integer Sort) and FT (Fourier Transform) from this collection to compare HPF, MPI, and PAMS, using pghpf and (probably) IBM's own MPI. PAMS is virtual memory software from Myrias. The finding was that HPF could not perform well on all-to-all communications, but this may be due to the form of the program used [110].

18.
http://www.digital.com/info/hpc/f90 has sample HPF programs that were processed by Digital's F90 compiler. They include SHALOW, CG, and a 3D finite difference solver.

19.
The ``Workshop on HPF for Real Applications'' [44] included a number of reports on application experiences.

20.
The PARKBENCH collection is a compiler test suite that contains the original GENESIS[*] suite plus some new codes. For an overview and history of this project, see A. C. Marshall's writeup [132]. Tom Haupt of Syracuse University put together a number of kernels for subset HPF, useful for testing most HPF compilers at that time; this collection has 16 applications, including an N-body code. In 1995, Ken Hawick took over this effort, followed by Hon Yau in 1996. As of the end of 1996, it appeared that Dr. Mark Baker of Portsmouth was going to take over guiding this effort. At the 1997 PARKBENCH meeting, the ParkBench HPF group met, with Baker, Koelbel, and Saini leading the discussions.
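
As promised in item 9 above, here is a minimal sketch of what data-parallel Gaussian elimination looks like in HPF. It is not the code of [156]: it omits pivoting, the names are invented, and the CYCLIC distribution is simply the usual choice for keeping work balanced as the leading columns are eliminated.

    SUBROUTINE GAUSS(A, N)
      INTEGER :: N, K
      REAL    :: A(N,N+1)              ! augmented matrix [A | b]
    !HPF$ DISTRIBUTE A(CYCLIC, *)
      DO K = 1, N-1
         ! Every element of the trailing submatrix is updated
         ! independently; row K is broadcast to the processors that
         ! own rows K+1 through N.
         FORALL (I = K+1:N, J = K:N+1)  &
            A(I,J) = A(I,J) - (A(I,K) / A(K,K)) * A(K,J)
      END DO
    END SUBROUTINE GAUSS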

Ongoing Porting Projects

ESPRIT is supporting a long-term project to port a number of codes to HPF+. IFS (from ECMWF) was mentioned earlier. They are also doing PAM-CRASH (from ESI) and FIRE (from AVL). There is also an aerodynamics code called AEROLOG. PHAROS is the HPF equivalent of Europort, in Europe. Beginning around January 1996, it ported four industrial applications to HPF, whereas Europort ported industrial applications to PVM and PARMACS. They are using NA Software and Simulog tools. Their objective is to port industrial codes to HPF using existing tools and to show that HPF pays off according to several criteria.

A list of projects related to HPF in Europe is available at http://www.irisa.fr/EuroTools/SIG/HPF/HPF/HPF.html . Another ongoing porting project is Quetzal's port of an Air Force EPIC code for solid dynamics modelling.


Donna Bergmark
2/18/1998