Parallel Programming for Modern High Performance Computing Systems - Pawel Czarnul



In view of the growing presence and popularity of multicore and manycore processors, accelerators, and coprocessors, as well as clusters built from such computing devices, developing efficient parallel applications has become a key challenge for exploiting the performance of such systems. This book covers the scope of parallel programming for modern high performance computing systems.


It first discusses selected popular state-of-the-art computing devices and systems available today. These include multicore CPUs, manycore (co)processors such as the Intel Xeon Phi, accelerators such as GPUs, and clusters, as well as the programming models supported on these platforms.


It next introduces parallelization through important programming paradigms such as master-slave, geometric Single Program Multiple Data (SPMD), and divide-and-conquer.


The practical and useful elements of the most popular and important APIs for programming parallel HPC systems are discussed, including MPI, OpenMP, Pthreads, CUDA, OpenCL, and OpenACC. The book also demonstrates, through selected code listings, how these APIs can be used to implement the important programming paradigms, and shows how the codes can be compiled and executed in a Linux environment.


The book also presents hybrid codes that integrate selected APIs for potentially multi-level parallelization and utilization of heterogeneous resources, and it shows how to use modern elements of these APIs. Selected optimization techniques are also included, such as overlapping communication and computation implemented using various APIs.


Features:

- Discusses the popular and currently available computing devices and cluster systems
- Includes typical paradigms used in parallel programs
- Explores popular APIs for programming parallel applications
- Provides code templates that can be used for implementation of paradigms
- Provides hybrid code examples allowing multi-level parallelization
- Covers the optimization of parallel programs
FACTS
Published:
Publisher: CRC Press
Binding: Hardcover
Language: English
Pages: 304
ISBN: 9781138305953
Format: 24 x 16 cm
1. Understanding the Need for Parallel Computing
1.1 Introduction
1.2 From Problem to Parallel Solution - Development Steps
1.3 Approaches to Parallelization
1.4 Selected Use Cases with Popular APIs
1.5 Outline of the Book

2. Overview of Selected Parallel and Distributed Systems for High Performance Computing
2.1 Generic Taxonomy of Parallel Computing Systems
2.2 Multicore CPUs
2.3 GPUs
2.4 Manycore CPUs/Coprocessors
2.5 Cluster Systems
2.6 Growth of High Performance Computing Systems and Relevant Metrics
2.7 Volunteer-based Systems
2.8 Grid Systems

3. Typical Paradigms for Parallel Applications
3.1 Aspects of Parallelization
3.2 Master-Slave
3.3 SPMD/Geometric Parallelism
3.4 Pipelining
3.5 Divide-and-Conquer

4. Selected APIs for Parallel Programming
4.1 Message Passing Interface (MPI)
4.2 OpenMP
4.3 Pthreads
4.4 CUDA
4.5 OpenCL
4.6 OpenACC
4.7 Selected Hybrid Approaches

5. Programming Parallel Paradigms Using Selected APIs
5.1 Master-Slave
5.2 Geometric SPMD
5.3 Divide-and-Conquer

6. Optimization Techniques and Best Practices for Parallel Codes
6.1 Data Prefetching, Communication and Computations Overlapping and Increasing Computation Efficiency
6.2 Data Granularity
6.3 Minimization of Overheads
6.4 Process/Thread Affinity
6.5 Data Types and Accuracy
6.6 Data Organization and Arrangement
6.7 Checkpointing
6.8 Simulation of Parallel Application Execution
6.9 Best Practices and Typical Optimizations

Appendix A. Resources
A.1 Software Packages

Appendix B. Further Reading
B.1 Context of this Book
B.2 Other Resources on Parallel Programming
Pawel Czarnul