
Compiler Optimizations for Scalable Parallel Systems [electronic resource] : Languages, Compilation Techniques, and Run Time Systems /

Contributor(s):
Material type: Text
Series: Lecture Notes in Computer Science ; 1808
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2001
Edition: 1st ed. 2001
Description: XXVIII, 784 p. online resource
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783540454038
Subject(s):
Additional physical formats: Printed edition: No title; Printed edition: No title
DDC classification:
  • 005.1 (23rd ed.)
LOC classification:
  • QA76.758
Online resources:
Contents:
  • Languages
    • High Performance Fortran 2.0
    • The Sisal Project: Real World Functional Programming
    • HPC++ and the HPC++Lib Toolkit
    • A Concurrency Abstraction Model for Avoiding Inheritance Anomaly in Object-Oriented Programs
  • Analysis
    • Loop Parallelization Algorithms
    • Array Dataflow Analysis
    • Interprocedural Analysis Based on Guarded Array Regions
    • Automatic Array Privatization
  • Communication Optimizations
    • Optimal Tiling for Minimizing Communication in Distributed Shared-Memory Multiprocessors
    • Communication-Free Partitioning of Nested Loops
    • Solving Alignment Using Elementary Linear Algebra
    • A Compilation Method for Communication-Efficient Partitioning of DOALL Loops
    • Compiler Optimization of Dynamic Data Distributions for Distributed-Memory Multicomputers
    • A Framework for Global Communication Analysis and Optimizations
    • Tolerating Communication Latency through Dynamic Thread Invocation in a Multithreaded Architecture
  • Code Generation
    • Advanced Code Generation for High Performance Fortran
    • Integer Lattice Based Methods for Local Address Generation for Block-Cyclic Distributions
  • Task Parallelism, Dynamic Data Structures and Run Time Systems
    • A Duplication Based Compile Time Scheduling Method for Task Parallelism
    • SPMD Execution in the Presence of Dynamic Data Structures
    • Supporting Dynamic Data Structures with Olden
    • Runtime and Compiler Support for Irregular Computations
In: Springer Nature eBook
Summary: Scalable parallel systems or, more generally, distributed memory systems offer a challenging model of computing and pose fascinating problems regarding compiler optimization, ranging from language design to run time systems. Research in this area is foundational to many challenges, from memory hierarchy optimizations to communication optimization. This unique, handbook-like monograph assesses the state of the art in the area in a systematic and comprehensive way. The 21 coherent chapters by leading researchers provide complete and competent coverage of all relevant aspects of compiler optimization for scalable parallel systems. The book is divided into five parts on languages, analysis, communication optimizations, code generation, and run time systems. It will serve as a landmark source of education, information, and reference for students, practitioners, professionals, and researchers interested in updating their knowledge of parallel computing or active in the field.
