NTRS - NASA Technical Reports Server

What Multilevel Parallel Programs Do When You Are Not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP

Abstract
With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application in order to employ a new programming paradigm is usually a time-consuming and error-prone task. Before embarking on such an endeavor, it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse-grained parallelization and OpenMP [9] for fine-grained loop-level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse-grained process-level parallelization and loop-level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
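As a rough illustration of the hybrid MPI/OpenMP pattern the abstract describes, and not code from the paper itself, the following C sketch uses MPI processes for the coarse-grained decomposition (each with a private address space, communicating by explicit messages) and an OpenMP parallel loop for the fine-grained work within each process. The array size and the toy nearest-neighbor exchange are assumptions chosen only for illustration.

/*
 * Minimal hybrid MPI/OpenMP sketch: MPI ranks own subdomains and
 * exchange data via explicit messages; OpenMP threads share the
 * loop-level work inside each rank.
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* illustrative subdomain size, not from the paper */

int main(int argc, char **argv) {
    int rank, size;
    static double local[N];
    double halo = 0.0;

    MPI_Init(&argc, &argv);               /* coarse grain: one process per subdomain */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Fine grain: OpenMP threads split the loop iterations within this process. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        local[i] = rank + i * 1e-6;

    /* Explicit message passing between private address spaces:
       each rank sends its last element to the next rank (a toy "halo"). */
    if (rank + 1 < size)
        MPI_Send(&local[N - 1], 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
    if (rank > 0)
        MPI_Recv(&halo, 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d: halo value %f, %d OpenMP threads\n",
           rank, size, halo, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

By contrast, in the MLP and nested OpenMP paradigms discussed in the paper the explicit MPI_Send/MPI_Recv calls above would be replaced by reads and writes to shared memory (a shared memory arena for MLP, shared variables across nested parallel regions for OpenMP); no sketch of those is attempted here since their APIs are not shown in this record.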
Document ID
20040084584
Acquisition Source
Ames Research Center
Document Type
Conference Paper
Authors
Jost, Gabriele
(NASA Ames Research Center Moffett Field, CA, United States)
Labarta, Jesus
(Commission of the European Communities, Abingdon)
Gimenez, Judit
(Commission of the European Communities, Abingdon)
Date Acquired
September 7, 2013
Publication Date
May 17, 2004
Subject Category
Computer Programming And Software
Meeting Information
Meeting: Workshop on OpenMP Applications and Tools
Location: Houston, TX
Country: United States
Start Date: May 17, 2004
End Date: May 18, 2004
Funding Number(s)
CONTRACT_GRANT: NASA Order A-61812-D
CONTRACT_GRANT: TIC2001-0995-C02-01
CONTRACT_GRANT: DTTS59-9-D-00437
Distribution Limits
Public
Copyright
Public Use Permitted.