Massively parallel (computing)

"Massively parallel" redirects here. For other uses, see Massively parallel (disambiguation).

In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously).
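
To make the idea concrete, the following is a minimal illustrative sketch, not drawn from any particular system described in this article, written in Python with the standard multiprocessing module: a large summation is split into chunks, each chunk is computed by a separate worker process, and the partial results are combined.

    from multiprocessing import Pool

    def partial_sum(bounds):
        # one coordinated computation: sum the half-open range [start, stop)
        start, stop = bounds
        return sum(range(start, stop))

    if __name__ == "__main__":
        n, workers = 10_000_000, 8      # problem size and process count are arbitrary
        step = n // workers
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with Pool(workers) as pool:     # each chunk runs in its own process
            total = sum(pool.map(partial_sum, chunks))
        print(total)                    # equals sum(range(n))

A massively parallel system applies the same decomposition across hundreds or thousands of processors rather than a handful of local processes.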

In one approach, e.g. grid computing, the processing power of many computers, distributed across diverse administrative domains, is used opportunistically whenever a computer is available.[1] An example is BOINC, a volunteer-based, opportunistic grid system in which the grid provides power only on a best-effort basis.[2]

In another approach, a large number of processors are used in close proximity to each other, e.g., in a computer cluster. In such a centralized system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[3]
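
On a cluster, this kind of coordinated computation is usually expressed with message passing between nodes over the interconnect. The sketch below is a hypothetical example using the mpi4py bindings for MPI; the library choice, variable names, and process count are illustrative assumptions, not part of the article. Each rank computes a partial sum and the results are reduced to rank 0.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's index within the job
    size = comm.Get_size()      # total number of processes in the job

    n = 10_000_000
    step = n // size
    start = rank * step
    stop = n if rank == size - 1 else start + step
    partial = sum(range(start, stop))           # local work on this rank

    # combine the partial sums over the interconnect; only rank 0 gets the result
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(total)

Launched with, for example, mpirun -n 64 python sum.py, the job spreads 64 processes across the cluster, and the quality of the interconnect determines how cheaply the final reduction and any other communication can be performed.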

The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
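
The channel-oriented style used on MPPAs can be sketched in ordinary Python, with operating-system processes and pipes standing in for the on-chip processors and their reconfigurable interconnect; the linear pipeline shape and the trivial doubling step are illustrative assumptions only.

    from multiprocessing import Process, Pipe

    def stage(recv_conn, send_conn):
        # one "processor": read work from an input channel, transform it,
        # and pass the result to the next processor over an output channel
        while True:
            item = recv_conn.recv()
            if item is None:              # sentinel: shut down and propagate
                send_conn.send(None)
                return
            send_conn.send(item * 2)      # placeholder for real per-stage work

    if __name__ == "__main__":
        n_stages = 4
        # a linear pipeline of channels: main -> stage 0 -> ... -> stage 3 -> main
        pipes = [Pipe() for _ in range(n_stages + 1)]
        procs = [Process(target=stage, args=(pipes[i][1], pipes[i + 1][0]))
                 for i in range(n_stages)]
        for p in procs:
            p.start()
        head, tail = pipes[0][0], pipes[n_stages][1]
        for x in range(5):
            head.send(x)                  # feed work into the array
        head.send(None)
        result = tail.recv()
        while result is not None:         # collect results from the far end
            print(result)
            result = tail.recv()
        for p in procs:
            p.join()

On a real MPPA the stages would be hundreds or thousands of on-chip cores and the channels would be configured in hardware, but the programming model, independent processes communicating over explicit channels, is the same.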

The Goodyear MPP was an early implementation of a massively parallel computer architecture. As of November 2013, MPP architectures are the second most common supercomputer implementation after clusters.[4] Examples of services and products with an MPP implementation include Microsoft's Azure SQL Data Warehouse and Microsoft's on-premises data warehousing product, Parallel Data Warehouse (PDW), which runs on the Analytics Platform System (APS).

References

  1. Prodan, Radu; Fahringer, Thomas (2007). Grid Computing: Experiment Management, Tool Integration, and Scientific Workflows. ISBN 3-540-69261-4, pp. 1–4.
  2. Fernández de Vega, Francisco (2010). Parallel and Distributed Computational Intelligence. ISBN 3-642-10674-9, pp. 65–68.
  3. Knight, Will (June 2007). "IBM creates world's most powerful computer". NewScientist.com news service.
  4. TOP500 list poster, November 2013: http://s.top500.org/static/lists/2013/11/TOP500_201311_Poster.png

