Abstract

Contributed Talk - Splinter Computation

Friday, 25 September 2020, 14:28   (virtual room B)

HPC performance optimisation as a tool for computational astrophysics: experiences and lessons learnt

Luigi Iapichino (1), Salvatore Cielo (1), Stephan Hachinger (1) and Fabio Baruffa (2)
(1): Leibniz Supercomputing Centre; (2): Intel Corporation

The complexity of modern and upcoming computing architectures poses severe challenges for code developers and forces them to expose the highest possible degree of parallelism in their applications, in order to make the best use of the available hardware. Moreover, the analysis and management of the huge amounts of data produced by large-scale simulations is a task just as demanding as the computation itself. In this talk, we report the experiences collected in recent years by the LRZ Applications Lab for Astrophysics and Plasma Physics (AstroLab) in supporting the computational astrophysics community in the efficient use of modern, high-end HPC resources. We also discuss the growing need to make simulation data FAIR (Findable, Accessible, Interoperable, Reusable) and the LRZ strategy and services addressing this point. We present the optimisation of codes such as FLASH, Gadget and ECHO, as well as approaches to massively parallel data visualisation (VisIt) and data analytics (yt). Although astrophysicists are among the biggest HPC users in science, our experience indicates that most codes still need modernisation to fully exploit novel architectures.
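As an illustration of the kind of parallel data analytics mentioned above, the following is a minimal sketch (not taken from the talk) of how yt can distribute an analysis task over MPI ranks; the dataset filename and the choice of derived quantity are placeholders for illustration only.

import yt

# Distribute subsequent yt operations across the MPI ranks
# the script is launched with (e.g. via mpirun -np N python script.py).
yt.enable_parallelism()

# Hypothetical simulation snapshot; yt auto-detects many common
# astrophysical output formats (Gadget, FLASH, Enzo, ...).
ds = yt.load("snapshot_000.hdf5")
ad = ds.all_data()

# Density-weighted average gas temperature, computed in parallel.
avg_temperature = ad.quantities.weighted_average_quantity(
    ("gas", "temperature"), ("gas", "density")
)

# Only the root rank prints the result.
if yt.is_root():
    print(avg_temperature)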