pin pytorch #570
Conversation
Auto-approved 👍. This diff qualified for automatic approval and doesn't need follow up review.
Awesome!
06_gpu_and_ml/vllm_inference.py (Outdated)
"torchvision==0.16.2", | ||
"torchaudio==2.1.2", |
Do we need these for this example?
yeah I think we don't
06_gpu_and_ml/vllm_inference.py (Outdated)
    # Use the barebones hf-transfer package for maximum download speeds. No progress bar, but expect 700MB/s.
    .env({"HF_HUB_ENABLE_HF_TRANSFER": "1"})
    .run_function(
        download_model_to_folder,
-       secret=Secret.from_name("huggingface-secret"),
+       secrets=[Secret.from_name("huggingface-secret")],
Nice catch, thanks.
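The fix in the diff above changes the keyword from `secret=` to `secrets=[...]`, i.e. the parameter takes a list of secrets rather than a single one. A minimal stub (not Modal's real implementation; the names here are just illustrative) makes the shape of the corrected call concrete:

```python
# Illustrative stub showing the corrected calling convention: the
# `secrets` parameter accepts a list, so callers wrap the secret in [].
def run_function(fn, secrets=()):
    # In this sketch, "attaching" a secret just means recording its name.
    attached = list(secrets)
    return fn.__name__, attached


def download_model_to_folder():
    pass


name, attached = run_function(
    download_model_to_folder, secrets=["huggingface-secret"]
)
```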
06_gpu_and_ml/vllm_mixtral.py (Outdated)
"torchvision==0.16.2", | ||
"torchaudio==2.1.2", |
Same Q as above
Argh sorry about the tests failing on
done: #571 should be good to rebase
4dc7a68 to ba078e5 (Compare)
awesome thanks!
Pin PyTorch to the previous version; the latest version is causing issues with the NVIDIA CUDA image.
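The versions visible in the diff (torchvision 0.16.2 and torchaudio 2.1.2 are the companion releases of torch 2.1.2) suggest a pin set like the following. This is an illustrative list, not the actual image definition in the example files:

```python
# Hypothetical exact pins consistent with the versions shown in this PR's
# diff; the real package list lives in 06_gpu_and_ml/vllm_inference.py.
TORCH_PINS = [
    "torch==2.1.2",
    "torchvision==0.16.2",
    "torchaudio==2.1.2",
]


def exact_pins(requirements):
    """Keep only requirements pinned to an exact version with '=='."""
    return [r for r in requirements if "==" in r]
```

Pinning all three together matters because torchvision and torchaudio wheels are built against a specific torch release, so a floating torch version can pull in a mismatched build.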
Type of Change
Checklist
- lambda-test: false is added to example frontmatter
- python_version for the base image