NTRS - NASA Technical Reports Server

The science of computing - Parallel computation
Abstract
Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic component technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing lagged because of hardware limitations. The speed of computing with solid-state chips is limited by gate-switching delays; this physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A recently introduced computer features a 'hypercube' architecture with 128 processors connected in networks of 5, 6, or 7 links per node, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware comes from parallel processing, which uses algorithms that break an equation into simpler statements whose parts can be processed simultaneously. Current, highly developed computer languages such as FORTRAN, PASCAL, and COBOL rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms that exploit the new architectures.
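The decomposition the abstract describes, breaking a computation into simpler pieces that run at the same time and then combining the partial results, can be made concrete with a small sketch. The example below is not from the article; it is a minimal modern illustration in Python, assuming a sum-of-squares as the "equation" and eight worker processes standing in for the parallel processors.

# Illustrative sketch only (not from the article): split a summation into
# independent chunks, evaluate the chunks simultaneously in worker
# processes, then combine the partial results sequentially.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker evaluates one "simpler statement" of the larger equation.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_chunks = 8  # stands in for the number of available processors
    chunks = [data[i::n_chunks] for i in range(n_chunks)]

    with Pool(processes=n_chunks) as pool:
        partials = pool.map(partial_sum, chunks)  # processed simultaneously

    total = sum(partials)  # sequential combining step
    print(total)

As in the hypercube machine the abstract mentions, the speedup comes from dividing the work among processors rather than from faster individual gates; the final combining step remains sequential.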
Document ID
19850060376
Acquisition Source
Legacy CDMS
Document Type
Reprint (Version printed in journal)
Authors
Denning, P. J.
(NASA Ames Research Center, Moffett Field, CA, United States)
Date Acquired
August 12, 2013
Publication Date
August 1, 1985
Publication Information
Publication: American Scientist
Volume: 73
ISSN: 0003-0996
Subject Category
Computer Systems
Accession Number
85A42527
Distribution Limits
Public
Copyright
Other

Available Downloads

There are no available downloads for this record.