[Core feature] Decouple submitterPod resources from ray task pod_template #5666
Thank you for opening your first issue here! 🛠
To decouple the submitterPod resources from the Ray task pod_template, you can configure the RayCluster head, workers, and submitter pod separately. This approach allows you to allocate minimal resources to the submitter while providing the necessary resources for the RayCluster head and workers. According to Flyte Issue #4377 and Flyte Issue #4674, you can specify different configurations for the head node and worker nodes using HeadNodeConfig and WorkerNodeConfig. Here's an example configuration that demonstrates how to set up different resources for the Ray head, workers, and submitter pod:

from flytekit import task, Resources
from flytekitplugins.ray import RayJobConfig, HeadNodeConfig, WorkerNodeConfig
ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(
        requests=Resources(mem="64Gi", cpu="4"),
        limits=Resources(mem="64Gi", cpu="4"),
        pod_template_name="ray_head_node",
    ),
    worker_node_config=[
        WorkerNodeConfig(
            group_name="cpu-group",
            replicas=4,
            requests=Resources(mem="256Gi", cpu="64"),
            limits=Resources(mem="256Gi", cpu="64"),
            pod_template_name="ray_worker_cpu",
        ),
        WorkerNodeConfig(
            group_name="gpu-group",
            replicas=2,
            requests=Resources(mem="480Gi", cpu="60", gpu="2"),
            limits=Resources(mem="480Gi", cpu="60", gpu="2"),
            pod_template_name="ray_worker_gpu",
        ),
    ],
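    # Note: submitter_pod_config / PodConfig below are illustrative; they sketch
    # the API shape this issue is asking for, not an existing flytekitplugins.ray
    # class.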
    submitter_pod_config=PodConfig(
        requests=Resources(mem="2Gi", cpu="1"),
        limits=Resources(mem="2Gi", cpu="1"),
        pod_template_name="ray_submitter",
    ),
)
@task(task_config=ray_config)
def ray_task():
    # Task implementation
    pass

Would you like more details on any specific part of this configuration?
@jpoler would you be open to contributing?
@pingsutw mind if I take this issue off your hands? I'm working on Flyte + Ray at work and we'll need this change.
How about using a subset of TaskExecutionMetadata, instead of just resources? |
We ended up adding support for plumbing the whole pod spec which I think will be sufficient. |
The flytepropeller and flytekit changes have landed. I think we're just waiting for a flytekit release at this point which should come in December hopefully. |
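For readers looking for the resulting usage, below is a minimal sketch of what the decoupled configuration could look like once those changes are released. It assumes HeadNodeConfig/WorkerNodeConfig accept per-node requests/limits and a full PodTemplate, and that the task-level requests/limits then shape only the submitter pod; the field names, container name, and resource sizes are assumptions based on this thread rather than a confirmed API.

from flytekit import PodTemplate, Resources, task
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig
from kubernetes.client import V1Container, V1PodSpec, V1ResourceRequirements

# Full pod spec for the GPU workers; volume mounts, node selectors, service
# accounts, etc. would go on this V1PodSpec rather than on the task.
gpu_worker_template = PodTemplate(
    primary_container_name="ray-worker",  # assumed container name
    pod_spec=V1PodSpec(
        containers=[
            V1Container(
                name="ray-worker",
                resources=V1ResourceRequirements(
                    requests={"cpu": "60", "memory": "480Gi", "nvidia.com/gpu": "2"},
                    limits={"cpu": "60", "memory": "480Gi", "nvidia.com/gpu": "2"},
                ),
            )
        ],
    ),
)

ray_config = RayJobConfig(
    head_node_config=HeadNodeConfig(
        requests=Resources(cpu="4", mem="64Gi"),
        limits=Resources(cpu="4", mem="64Gi"),
    ),
    worker_node_config=[
        WorkerNodeConfig(
            group_name="gpu-group",
            replicas=2,
            pod_template=gpu_worker_template,  # assumed field added by the linked changes
        ),
    ],
)

# With the head/worker specs set above, the task-level requests/limits only
# shape the lightweight submitter pod.
@task(
    task_config=ray_config,
    requests=Resources(cpu="1", mem="2Gi"),
    limits=Resources(cpu="1", mem="2Gi"),
)
def ray_training_task() -> None:
    ...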
Motivation: Why do you think this is important?
Currently the ray plugin uses the pod_template provided to the task as the basis for all pod specs (head, workers, and submitter).
This is a pain point when the RayCluster head and workers are intended to be scheduled on GPU nodes: I do not want to waste an entire GPU node on the submitter.
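For illustration, here is a minimal sketch of the setup described above (the group name and resource sizes are made up): because the task-level requests/limits and pod_template are reused for every pod, the submitter pod inherits the GPU-sized request.

from flytekit import Resources, task
from flytekitplugins.ray import HeadNodeConfig, RayJobConfig, WorkerNodeConfig

@task(
    task_config=RayJobConfig(
        head_node_config=HeadNodeConfig(),
        worker_node_config=[WorkerNodeConfig(group_name="gpu-group", replicas=2)],
    ),
    # Intended for the Ray head/workers, but also applied to the submitter pod,
    # which therefore occupies an entire GPU node just to submit the job.
    requests=Resources(cpu="60", mem="480Gi", gpu="2"),
    limits=Resources(cpu="60", mem="480Gi", gpu="2"),
)
def ray_gpu_task() -> None:
    ...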
Goal: What should the final outcome look like, ideally?
It is not possible today to configure the RayCluster pod templates and the submitter pod template separately. If it were, the submitter could be scheduled with appropriately minimal resource requests, leaving out configuration that has nothing to do with the submitter pod (for example, in my use case only the ray head/workers need the GPU, shared-memory volume mount, service account, etc.).
I found #4170, which looks like it was trying to address this issue, but it hasn't seen any progress since October 2023. At a high level the approach it takes makes sense to me: the pod_template provided to the task configures the resources for the submitter job, and the ray head/worker get new config fields that set their resources explicitly. In my opinion this change is headed in the right direction, but it would be improved by a slight adaptation that lets the user provide an entire pod template alongside resources. Otherwise it won't be possible to configure things like volume mounts and env vars on the ray head/worker.
Describe alternatives you've considered
I don't see an alternative to adding separate config parameters for separate pod specs. It doesn't seem like a good idea to hard-code the submitter pod spec for minimal resource requests (e.g. just a small request/limit for CPU and memory), because there very well could be a use case where someone wants a GPU for the submitter. It wouldn't make a lot of sense to preclude that use-case IMO.
I do see this PR that adds a Resource config to the ray head/worker node config.

Propose: Link/Inline OR Additional context
No response
Are you sure this issue hasn't been raised already?
Have you read the Code of Conduct?