NTRS - NASA Technical Reports Server

Algorithms for parallel flow solvers on message passing architectures

The purpose of this project has been to identify and test suitable technologies for implementing fluid flow solvers, possibly coupled with structures and heat equation solvers, on MIMD parallel computers. In the course of this investigation, much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. The multi-partitioning strategy derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer; a coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning, every processor receives responsibility for exactly one block of grid points instead of several, which necessitates fine-grain pipelined program execution to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for concentrating on improving the performance of pipeline methods is their applicability to other types of flow solver kernels with stronger implied data dependence.

Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines. From these, one can determine the optimal first-processor retardation that leads to the shortest total completion time for the pipeline process. Theoretical predictions of pipeline performance, with and without this optimization, match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms: if grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, then the first processor in the pipeline receives a computational load smaller than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.
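The pipelined uni-partition sweep described in the abstract can be sketched in a few lines of message-passing code. The sketch below is a minimal illustration assuming MPI and a one-dimensional decomposition along the sweep direction; the report itself predates MPI and targets the iPSC/860, and the names NLINES, CHUNK, DELAY_USEC, and eliminate_chunk are hypothetical stand-ins for the solver's real data structures, not identifiers from the report. Each processor waits for interface data from its upstream neighbor, eliminates one chunk of grid lines on its own block, and forwards the interface data downstream; the optional delay on processor 0 is a simplified stand-in for the first-processor retardation analyzed in the report.

/*
 * Minimal sketch of uni-partition pipelined execution of a line sweep
 * (e.g. the forward-elimination phase of an ADI solver), assuming MPI
 * and a 1-D decomposition along the sweep direction. All names below
 * are hypothetical illustrations, not taken from the report.
 */
#include <mpi.h>
#include <unistd.h>

#define NLINES     256   /* grid lines perpendicular to the sweep      */
#define CHUNK       16   /* lines processed per pipeline stage         */
#define DELAY_USEC   0   /* first-processor retardation per chunk      */

/* placeholder for the per-chunk forward elimination on this block */
static void eliminate_chunk(double *iface, int nlines)
{
    (void)iface; (void)nlines;
}

int main(int argc, char **argv)
{
    int rank, size;
    double iface[CHUNK];  /* interface data carried between blocks     */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int first = 0; first < NLINES; first += CHUNK) {
        /* receive partially eliminated interface data from upstream;
           rank 0 has no upstream neighbor and starts from the
           boundary conditions instead                                 */
        if (rank > 0)
            MPI_Recv(iface, CHUNK, MPI_DOUBLE, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        eliminate_chunk(iface, CHUNK);  /* work on this chunk of lines */

        /* optional retardation: deliberately slow the pipeline head,
           a crude stand-in for the optimization analyzed in the report */
        if (rank == 0 && DELAY_USEC > 0)
            usleep(DELAY_USEC);

        /* forward interface data to the downstream neighbor           */
        if (rank < size - 1)
            MPI_Send(iface, CHUNK, MPI_DOUBLE, rank + 1, 0,
                     MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

Under a textbook pipeline model with n chunks per sweep, per-chunk compute time t_c, and per-hop message time t_m, such a sweep completes in roughly (P - 1)(t_c + t_m) + n*t_c on P processors; the (P - 1)-stage fill and drain phases are the source of the pipeline slowdown, and the report's analytical expressions refine this picture to locate the retardation that minimizes total completion time. This model is a common simplification offered here for orientation, not the report's own derivation.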
Document ID: 19950020168
Acquisition Source: Legacy CDMS
Document Type: Contractor Report (CR)
Authors: Vanderwijngaart, Rob F. (MCAT Inst., San Jose, CA, United States)
Date Acquired: September 6, 2013
Publication Date: January 1, 1995
Subject Category: Fluid Mechanics and Heat Transfer
Report/Patent Number: NAS 1.26:197758; NASA-CR-197758; MCAT-95-15
Accession Number: 95N26588
Funding Number(s): CONTRACT_GRANT NCC2-752
Distribution Limits: Public
Copyright: Work of the US Gov. Public Use Permitted.