Event Details

  • Wednesday, May 24, 2017
  • 16:35 - 17:05

Algorithmic Adaptations to Extreme Scale

We analyze the contributions of the PCCFD workshop talks to the agenda of exascale algorithms, as articulated in, for instance, the G-8 country exascale software roadmap. Instead of squeezing out flops – the traditional goal of algorithmic optimality, which once served as a reasonable proxy for all associated costs – algorithms must now squeeze synchronizations, memory, and data transfers, while extra flops on locally cached data represent only small costs in time and energy. After decades of programming model stability with bulk synchronous processing, new programming models and new algorithmic capabilities (to make forays into, e.g., data assimilation, inverse problems, and uncertainty quantification) must be co-designed with the hardware. We briefly recap the architectural constraints and application opportunities. We then consider two types of tasks, each of which occupies a large portion of all scientific computing cycles: large dense symmetric/Hermitian systems (covariances, Hamiltonians, Hessians, Schur complements) and large sparse Poisson/Helmholtz systems (solids, fluids, electromagnetism, radiation diffusion, gravitation). We examine progress in porting solvers for these tasks to the hybrid distributed-shared programming environment, including the GPU and MIC architectures that make up the cores of the top scientific systems “on the floor” and “on the books.”
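
To make the flops-versus-data-movement trade-off concrete, a standard execution-time model (an illustrative sketch; the notation below is assumed here and is not part of the abstract) charges separately for arithmetic, for words moved through the memory and network hierarchy, and for synchronizing messages. With F flops, W words transferred, and M messages, and per-unit costs γ per flop, β per word, and α per message latency, execution time is roughly

    T ≈ γ·F + β·W + α·M,    where γ ≪ β ≪ α on emerging hardware,

so an algorithm that spends extra flops on locally cached data in order to shrink W and M can come out ahead in both time and energy.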