The modern scientific software stack includes thousands of packages, from C, C++, and Fortran libraries to packages written in interpreted languages like Python and R. HPC applications may depend on hundreds of packages spanning all of these ecosystems. To achieve high performance, they must also leverage low-level, difficult-to-build libraries such as MPI, BLAS, and LAPACK. Integrating this stack is extremely challenging, and the complexity can be an obstacle to deployment and deters developers from building on each other's work.
Spack can help! Spack is an open source package manager that simplifies building, installing, customizing, and sharing HPC software stacks. In recent years, its adoption has grown rapidly among end users, HPC developers, and the world's largest HPC centers. Spack is also used to build reproducible scientific workflows on AWS.
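As a minimal sketch of what day-to-day Spack usage looks like (the package chosen here, zlib, is just an example; any of Spack's packages could be substituted):

```shell
# Minimal sketch: bootstrap Spack from git and install a package.
# Assumes git and a C/C++ compiler are already on the system.
git clone --depth=1 https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh   # add spack to the shell environment

spack install zlib     # build and install zlib and its dependencies
spack find             # list everything Spack has installed
spack load zlib        # make the installation visible in this shell
```

The same `spack install` command works for large application stacks; Spack resolves and builds the full dependency graph automatically.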
More on Spack can be found at:
Spack provides a flexible dependency model, a simple Python syntax for writing package build recipes, and a repository of over 4,000 community-maintained packages. This tutorial provides a thorough introduction to Spack’s capabilities: installing and authoring packages, integrating Spack with development workflows, and using Spack for deployment in the cloud. Attendees will leave with foundational skills for using Spack to automate day-to-day tasks, along with deeper knowledge for applying Spack to advanced use cases.
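To give a feel for the package-authoring side, the following is a sketch of a Spack build recipe. The package name, homepage, URL, and checksum are hypothetical placeholders; the structure (versions, variants, dependencies, configure arguments) follows Spack's recipe conventions.

```python
# Hypothetical Spack package recipe; name, URL, and checksum are placeholders.
from spack.package import *


class Mylib(AutotoolsPackage):
    """An example library, packaged for Spack."""

    homepage = "https://example.com/mylib"
    url = "https://example.com/mylib-1.0.tar.gz"

    version("1.0", sha256="...")  # checksum elided in this sketch

    variant("mpi", default=True, description="Build with MPI support")
    depends_on("mpi", when="+mpi")

    def configure_args(self):
        # Translate the Spack variant into a configure flag.
        args = []
        if self.spec.satisfies("+mpi"):
            args.append("--with-mpi")
        return args
```

Because recipes are plain Python, a single recipe can express many build configurations, and users select among them on the command line (e.g., `spack install mylib ~mpi`).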
Amazon Web Services (AWS) provides scalable, on-demand compute capacity. In this tutorial we'll cover how to get a cluster up and running in 15 minutes using AWS ParallelCluster, and how to install applications with Spack so they are optimized for the cloud environment. No previous AWS experience is required, but experience with a command-line interface is assumed.
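As a rough sketch of the kind of cluster definition ParallelCluster consumes, the fragment below shows a minimal configuration in the ParallelCluster 3 YAML style; the region, instance types, subnet IDs, and key name are placeholder assumptions and would be replaced with values from your own AWS account.

```yaml
# Hypothetical ParallelCluster configuration sketch; all IDs are placeholders.
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-01234567          # placeholder subnet
  Ssh:
    KeyName: my-key                    # placeholder EC2 key pair
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5n
          InstanceType: c5n.18xlarge
          MaxCount: 8                  # scale out to at most 8 nodes
      Networking:
        SubnetIds:
          - subnet-01234567
```

A configuration like this is passed to the `pcluster create-cluster` CLI, which provisions the head node, the Slurm scheduler, and auto-scaling compute queues.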