High Performance Computing
Programming and Applications
John Levesque and Gene Wagenbreth
- Our price
- NOK 961
(Paperback)
Free shipping!
Delivery time: ships within 21 days
Drawing on their experience with chips from AMD and with systems, interconnects, and software from Cray Inc., the authors explore the problems that create bottlenecks in attaining good performance. They cover techniques that apply at each of the three levels of parallelism, which combine as in the sketch that follows this list:
- Message passing between the nodes
- Shared-memory parallelism on the nodes, or across the multiple instruction, multiple data (MIMD) units on the accelerator
- Vectorization at the innermost loop level
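As a rough illustration of how the three levels fit together (a minimal C sketch, not an example from the book or its companion site; the array names and sizes are invented), MPI passes messages between nodes, an OpenMP directive spreads the loop across a node's cores, and the stride-1 inner-loop arithmetic is left in a form the compiler can vectorize:

#include <mpi.h>
#include <stdio.h>

#define N 1024  /* illustrative problem size */

int main(int argc, char **argv) {
    double a[N], b[N], c[N];
    double local_sum = 0.0, global_sum = 0.0;
    int rank;

    /* Level 1: message passing between the nodes (MPI) */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Level 2: shared-memory parallelism across the cores of one node (OpenMP) */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++) {
        a[i] = (double)(rank + i);
        b[i] = 2.0 * (double)i;
        /* Level 3: independent, stride-1 work the compiler can vectorize */
        c[i] = a[i] * b[i];
        local_sum += c[i];
    }

    /* Combine the per-node partial sums across the interconnect */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}

Built with something like mpicc -fopenmp -O2 and launched with mpirun, each MPI rank executes one copy of the loop, OpenMP threads split its iterations within a node, and an optimizing compiler turns the element-wise multiply into SSE-style vector instructions.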
After discussing architectural and software challenges, the book outlines a strategy for porting and optimizing an existing application to a large massively parallel processor (MPP) system. With a look toward the future, it also introduces the use of general purpose graphics processing units (GPGPUs) for carrying out HPC computations. A companion website at www.hybridmulticoreoptimization.com contains all the examples from the book, along with updated timing results on the latest released processors.
- DETAILS
Published: 2018
Publisher: CRC Press
Binding: Paperback
Language: English
Pages: 244
ISBN: 9781138372689
Format: 23 x 16 cm
Multicore Architectures
MEMORY ARCHITECTURE
SSE INSTRUCTIONS
HARDWARE DESCRIBED IN THIS BOOK
The MPP: A Combination of Hardware and Software
TOPOLOGY OF THE INTERCONNECT
INTERCONNECT CHARACTERISTICS
THE NETWORK INTERFACE COMPUTER
MEMORY MANAGEMENT FOR MESSAGES
HOW MULTICORES IMPACT THE PERFORMANCE OF THE INTERCONNECT
How Compilers Optimize Programs
MEMORY ALLOCATION
MEMORY ALIGNMENT
VECTORIZATION
PREFETCHING OPERANDS
LOOP UNROLLING
INTERPROCEDURAL ANALYSIS
COMPILER SWITCHES
FORTRAN 2003 AND ITS INEFFICIENCIES
SCALAR OPTIMIZATIONS PERFORMED BY THE COMPILER
Parallel Programming Paradigms
HOW CORES COMMUNICATE WITH EACH OTHER
MESSAGE PASSING INTERFACE
USING OPENMP
POSIX THREADS
PARTITIONED GLOBAL ADDRESS SPACE LANGUAGES (PGAS)
COMPILERS FOR PGAS LANGUAGES
THE ROLE OF THE INTERCONNECT
A Strategy for Porting an Application to a Large MPP System
GATHERING STATISTICS FOR A LARGE PARALLEL PROGRAM
Single Core Optimization
MEMORY ACCESSING
VECTORIZATION
SUMMARY
Parallelism across the Nodes
APPLICATIONS INVESTIGATED
LESLIE3D
PARALLEL OCEAN MODEL (POP)
SWIM
S3D
LOAD IMBALANCE
COMMUNICATION BOTTLENECKS
OPTIMIZATION OF INPUT AND OUTPUT (I/O)
Node Performance
APPLICATIONS INVESTIGATED
WUPWISE
SWIM
MGRID
APPLU
GALGEL
APSI
EQUAKE
FMA-3D
ART
AMMP
SUMMARY
Accelerators and Conclusion
ACCELERATORS
CONCLUSION
Appendix A: Common Compiler Directives
Appendix B: Sample MPI Environment Variables
References
Index
Exercises appear at the end of each chapter.
For the past 40 years, John Levesque has optimized scientific application programs for successful HPC systems; he is an expert in application tuning and in compiler analysis of scientific applications.
Gene Wagenbreth is a senior system programmer in the Information Sciences Institute at the University of Southern California, where he is applying GPGPU technology in sparse matrix solvers, image tomography, and real-time computational fluid dynamics. He also presents courses on the use and programming of GPUs.
Since the 1970s, Mr. Wagenbreth has worked with most of the highest performance computers, including Cray models, other vector processors, hypercubes, and clusters. He has worked with shared and distributed memory computers using MPI, OpenMP, pthreads, and other techniques. He has also applied parallel processing in numerous fields, including seismic analysis, reservoir simulation, weather forecasting, and battlefield simulations.