Pande S., Agrawal D.P. (eds.) Compiler Optimizations for Scalable Parallel Systems. Languages, Compilation Techniques, and Run Time Systems

Springer, 2001. 783 p.
We are very pleased to publish this monograph on compiler optimizations for scalable distributed memory systems. Distributed memory systems offer a challenging model of computing and pose fascinating problems regarding compiler optimizations, ranging from language design to run time systems. The research done in this area is thus foundational to many challenges, from memory hierarchy optimizations to communication optimizations, encountered in both stand-alone and distributed systems. It is with this motivation that we present a compendium of research done in this area in the form of this monograph.
This monograph is divided into five sections: section one deals with languages, section two with analysis, section three with communication optimizations, section four with code generation, and section five with run time systems. In the editorial we present a detailed summary of each of the chapters in these sections.
Section I: Languages
High Performance Fortran 2.0
The Sisal Project: Real World Functional Programming
HPC++ and the HPC++Lib Toolkit
A Concurrency Abstraction Model for Avoiding Inheritance Anomaly in Object-Oriented Programs
Section II: Analysis
Loop Parallelization Algorithms
Array Dataflow Analysis
Interprocedural Analysis Based on Guarded Array Regions
Automatic Array Privatization
Section III: Communication Optimizations
Optimal Tiling for Minimizing Communication in Distributed Shared-Memory Multiprocessors
Communication-Free Partitioning of Nested Loops
Solving Alignment Using Elementary Linear Algebra
A Compilation Method for Communication-Efficient Partitioning of DOALL Loops
Compiler Optimization of Dynamic Data Distributions for Distributed-Memory Multicomputers
A Framework for Global Communication Analysis and Optimizations
Tolerating Communication Latency through Dynamic Thread Invocation in a Multithreaded Architecture
Section IV: Code Generation
Advanced Code Generation for High Performance Fortran
Integer Lattice Based Methods for Local Address Generation for Block-Cyclic Distributions
Section V: Task Parallelism, Dynamic Data Structures and Run Time Systems
A Duplication Based Compile Time Scheduling Method for Task Parallelism
SPMD Execution in the Presence of Dynamic Data Structures
Supporting Dynamic Data Structures with Olden
Runtime and Compiler Support for Irregular Computations