This repository accompanies a Birds of a Feather (BoF) session at the Supercomputing Conference 2024 (SC24).
Can Python Do for HPC What It Did for Machine Learning?
Wednesday, 20 November 2024, 12:15pm - 1:15pm EST
Location: B212
Python is now one of the most popular programming languages. In HPC, it has predominantly been used to coordinate coarse-grain library components or workflows. However, it is increasingly being used to develop and coordinate applications with dynamic finer-grain components that are challenging to map efficiently onto heterogeneous resources. In this BoF, we discuss this challenge and efforts to design Python-based, production-quality codes for HPC leadership platforms. We will discuss issues such as multithreading, GPU kernel development, task-based coordination on heterogeneous systems with a mix of CPUs and GPUs, inter-node interoperability, scalability, portability, and reproducibility.
Agenda:
- Introduction
- Lightning talks
- Parla: HPC tasks for shared-memory heterogeneous nodes in Python (Mattan Erez)
- PyCOMPSs support to HPC + AI workflows (Rosa M. Badia)
- PyKokkos: A Performance Portability Framework for Python (Milos Gligoric)
- Distributed Tasking in Python with Legion (Elliott Slaughter)
  - cuPyNumeric: Zero-code-change scaling of NumPy code (Manolis Papadakis)
- Discussion (Q&A)
Below is an incomplete list of frameworks for developing HPC applications in Python, with brief descriptions.
- Arkouda - A numpy/pandas inspired Python library backed by Chapel
- Charm4py - Charm++ programming model in Python
- CuPy - NumPy/SciPy-compatible Array Library for GPU-accelerated Computing with Python
- cuPyNumeric - Write NumPy, run automatically on clusters of CPUs and GPUs
- Dask - Easy parallel Python that does what you need
- DaCe - Data Centric Parallel Programming
- FlexFlow - Drop-in PyTorch, Keras, ONNX interface
- lbmpy - Run fast fluid simulations based on the lattice Boltzmann method in Python on CPUs and GPUs
- loopy - A code generator for array-based code in the OpenCL/CUDA execution model
- mpi4py - MPI for Python (see the first sketch after this list)
- Numba - JIT compiler that translates a subset of Python and NumPy code into fast machine code (see the second sketch after this list)
- Pallas - An extension to JAX that enables writing custom kernels for GPU and TPU
- Parla - A task-parallel programming library for Python
- Parsl - Productive parallel programming in Python
- PyCOMPSs - Workflow orchestration in Python
- PyCUDA - Pythonic access to Nvidia's CUDA parallel computation API
- Pygion - A task-based framework for Python based on the Legion programming system
- PyKokkos - Framework for writing performance portable HPC kernels in Python
- PyOMP - OpenMP for Python in Numba for CPU/GPU parallel programming
- PyOpenCL - Lets you access GPUs and other massively parallel compute devices from Python
- pystencils - Run blazingly fast stencil codes on numpy arrays
- PyTorch - An open-source machine learning library based on the Torch library
- Ray - AI Compute Engine
- Taichi Lang - Imperative, parallel programming language for high-performance numerical computation
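
To give a flavor of what HPC code in Python can look like, here are two minimal, illustrative sketches (they are not part of this repository; file names and problem sizes are arbitrary). The first uses mpi4py to sum the rank ids across all MPI processes:

```python
# hello_mpi.py - minimal mpi4py sketch: every rank contributes its rank id,
# and allreduce computes the sum across all processes.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

total = comm.allreduce(rank, op=MPI.SUM)  # sum of 0 .. size-1
print(f"rank {rank} of {size}: sum of ranks = {total}")
```

Run it with, for example, `mpiexec -n 4 python hello_mpi.py`. The second sketch uses Numba to JIT-compile a simple AXPY loop and run it in parallel across CPU threads:

```python
# numba_axpy.py - minimal Numba sketch: JIT-compile an AXPY loop and
# parallelize it across CPU threads with prange.
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def axpy(a, x, y):
    out = np.empty_like(x)
    for i in prange(x.shape[0]):
        out[i] = a * x[i] + y[i]
    return out

x = np.random.rand(1_000_000)
y = np.random.rand(1_000_000)
z = axpy(2.0, x, y)  # first call compiles; later calls reuse the cached machine code
print(z[:3])
```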
If you would like to suggest an addition, feel free to create a PR with your changes or email Milos Gligoric ([email protected]).