Try using contiguous rank to fix cuda_visible_devices #1926
Merged
Conversation
cjnolet added labels improvement (Improvement / enhancement to an existing function) and non-breaking (Non-breaking change) — Oct 24, 2023
/ok to test

Can confirm that with this PR the issue below (rapidsai/cugraph#3889) is resolved.

Thanks @VibhuJawa.

With branch-23.12:

With PR:
cjnolet
approved these changes
Oct 25, 2023
/merge
seberg added a commit to seberg/raft that referenced this pull request (Oct 25, 2023):

This is a follow-up on rapidsai/raft#1926, since the rank sorting seemed a bit hard to understand. It does modify the logic in the sense that hosts are now sorted by IP as a way to group workers by host. But I don't really think that host sorting was ever a goal? If the goal is really to be deterministic, then this should be more (or at least more clearly) deterministic about the order of worker IPs. OTOH, if the NVML device order doesn't matter, we could just sort the workers directly.

The original PR (rapidsai/raft#1587) mentions:

    NCCL>1.11 expects a process with rank r to be mapped to r % num_gpus_per_node

which is something that neither approach seems to quite assure. If such a requirement exists, I would want to do one of:

* Ensure we can guarantee this, but this requires initializing workers that are not involved in the operation.
* At least raise an error, because if NCCL ends up raising one it will be very confusing.
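The grouping described in the commit message above can be sketched as follows. This is a hypothetical illustration, not the actual raft code: it groups workers by host IP, sorts hosts deterministically, and assigns contiguous ranks so that all workers on one host receive consecutive rank values.

```python
from collections import defaultdict

def assign_contiguous_ranks(workers):
    """Assign ranks so workers on the same host get consecutive values.

    ``workers`` is a list of (host_ip, worker_id) tuples; returns a dict
    mapping worker_id -> rank.
    """
    by_host = defaultdict(list)
    for host_ip, worker_id in workers:
        by_host[host_ip].append(worker_id)

    ranks = {}
    rank = 0
    # Sorting hosts (and workers within a host) makes the assignment
    # deterministic across runs.
    for host_ip in sorted(by_host):
        for worker_id in sorted(by_host[host_ip]):
            ranks[worker_id] = rank
            rank += 1
    return ranks

workers = [("10.0.0.2", "w3"), ("10.0.0.1", "w1"),
           ("10.0.0.2", "w4"), ("10.0.0.1", "w2")]
print(assign_contiguous_ranks(workers))
```

With contiguous ranks per host, each worker's local GPU index can then be derived from its rank, which is what the CUDA_VISIBLE_DEVICES fix relies on.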
VibhuJawa pushed a commit to VibhuJawa/raft that referenced this pull request (Mar 14, 2024)
rapids-bot bot pushed a commit that referenced this pull request (Mar 15, 2024):

This PR is based on @seberg's work in #1928. From that PR:

This is a follow-up on #1926, since the rank sorting seemed a bit hard to understand. It does modify the logic in the sense that hosts are now sorted by IP as a way to group workers by host. But I don't really think that host sorting was ever a goal? If the goal is really to be deterministic, then this should be more (or at least more clearly) deterministic about the order of worker IPs. OTOH, if the NVML device order doesn't matter, we could just sort the workers directly. The original #1587 mentions:

    NCCL>1.11 expects a process with rank r to be mapped to r % num_gpus_per_node

which is something that neither approach seems to quite assure. If such a requirement exists, I would want to do one of:

* Ensure we can guarantee this, but this requires initializing workers that are not involved in the operation.
* At least raise an error, because if NCCL ends up raising one it will be very confusing.

Authors:
- Vibhu Jawa (https://github.com/VibhuJawa)
- Sebastian Berg (https://github.com/seberg)

Approvers:
- Corey J. Nolet (https://github.com/cjnolet)

URL: #2228
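The second option raised in the quoted discussion (raise an error early rather than let NCCL fail confusingly) can be sketched as below. The function name and signature are hypothetical, not part of raft; it only checks the stated NCCL expectation that rank r lands on device r % num_gpus_per_node.

```python
def check_nccl_rank_mapping(rank, device_id, num_gpus_per_node):
    """Raise early if a rank's device does not match NCCL's expectation.

    Per the discussion above, NCCL>1.11 is said to expect rank ``r`` to be
    mapped to device ``r % num_gpus_per_node``. Failing fast with a clear
    message beats a confusing NCCL error later.
    """
    expected = rank % num_gpus_per_node
    if device_id != expected:
        raise RuntimeError(
            f"rank {rank} is mapped to device {device_id}, "
            f"but NCCL expects device {expected} "
            f"(rank % num_gpus_per_node with {num_gpus_per_node} GPUs per node)"
        )

# Example: rank 5 on a 4-GPU node should sit on device 5 % 4 == 1.
check_nccl_rank_mapping(rank=5, device_id=1, num_gpus_per_node=4)
```

Whether the requirement actually exists for current NCCL versions is exactly the open question in the thread; the check only encodes the quoted claim.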
Labels: improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change), python
This PR attempts to solve rapidsai/cugraph#3889