Dagster with SLURM submissions on HPC #14168
Replies: 3 comments 1 reply
-
I'd also like to add that this would be a very important feature for my use case. We would like to use Dagster to orchestrate HPC workflows that are scheduled with SLURM. I appreciate that SLURM integration may not be a common use case in the open-source world, but I suspect there are quite a lot of industry users who would be very interested in it. Given that Dagster is VC-backed and ultimately needs to make a profit, perhaps it would be reasonable to consider prioritizing a feature like this?
-
I agree. A lot of people use tools like Snakemake and Nextflow, and I'm fairly confident that if Dagster provided plugins for common HPC schedulers like SLURM, PBS, Torque, etc., there would be a ton of support for it.
-
I'll preface this by saying that I'm familiar with workflow tools in general but not yet with Dagster.
Most of the tasks I want to run execute on one of a variety of HPC machines, each of which uses the SLURM job scheduler. Can Dagster be used to execute tasks this way? I searched the docs for "SLURM" and found little beyond the Dask executor. The Dask executor can indeed submit job scripts, but that all happens locally, right? In other words, you would have to be running Dagster on the same machine from which you want to submit the Dask-generated SLURM job. What makes this seem potentially messy is that if you have tasks to run on several different SLURM-hosted machines, you'd have to SSH into each one to execute jobs.
Is there a better/cleaner way to achieve this goal?
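For what it's worth, one workaround that gets around the "Dagster must run on the login node" constraint is to have a task shell out to `sbatch` on the remote login node over SSH. This is only an illustrative sketch, not a documented Dagster integration; `build_sbatch_script` and `submit_over_ssh` are hypothetical helper names, and you would wrap `submit_over_ssh` in whatever task/op abstraction you use:

```python
import subprocess

def build_sbatch_script(job_name, command, partition="compute", time="01:00:00"):
    """Render a minimal sbatch submission script as a string."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --time={time}",
        command,
    ])

def submit_over_ssh(host, script):
    """Pipe the script to sbatch on a remote SLURM login node via SSH.

    sbatch reads the job script from stdin when no file argument is given,
    so no files need to be staged on the remote machine for a simple job.
    """
    result = subprocess.run(
        ["ssh", host, "sbatch"],
        input=script.encode(),
        capture_output=True,
        check=True,
    )
    # sbatch prints e.g. "Submitted batch job 12345"; return that line
    # so the caller can parse the job ID and poll `squeue`/`sacct` later.
    return result.stdout.decode().strip()
```

The main gap in this approach is exactly what the question raises: Dagster has no visibility into the remote job after submission, so you'd still need to poll the scheduler yourself to know when the job finishes.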