
Remove triton #1062

Merged: 11 commits merged into mosaicml:bump_version_v0.7.0 on Mar 26, 2024
Conversation

irenedea (Contributor): No description provided.

@irenedea irenedea marked this pull request as ready for review March 26, 2024 00:54
@irenedea irenedea requested a review from a team as a code owner March 26, 2024 00:54
@irenedea irenedea changed the base branch from bump_version_v0.7.0 to main March 26, 2024 00:54
@irenedea irenedea marked this pull request as draft March 26, 2024 00:55
@irenedea irenedea changed the base branch from main to bump_version_v0.7.0 March 26, 2024 01:05
@irenedea irenedea marked this pull request as ready for review March 26, 2024 01:06
Diff excerpt (old comment removed, new comment added):

- # Remove the peft, xentropy-cuda-lib and triton-pre-mlir dependencies as PyPI does not
- # support direct installs. The error message for importing PEFT, FusedCrossEntropy,
- # and flash_attn_triton gives instructions on how to install if a user tries to use it
+ # Remove the peft and xentropy-cuda-lib dependencies as PyPI does not
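The "direct installs" PyPI does not support are PEP 508 direct references (a `name @ URL` requirement). A minimal sketch of what such a declaration looks like in a setup.py; the package name pin and git URL below are placeholders, not the project's real pins:

```python
# setup.py fragment (illustrative only; the URL is a hypothetical placeholder).
from setuptools import setup

setup(
    name='example-package',
    install_requires=[
        'torch',  # ordinary PyPI requirement: accepted by PyPI
        # PEP 508 direct reference ("name @ URL"): PyPI rejects uploads
        # whose metadata declares a dependency like this, which is why
        # such packages must be dropped from install_requires.
        'triton-pre-mlir @ git+https://github.com/example/triton.git@main',
    ],
)
```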
Collaborator comment on this diff: "neither of these other dependencies exist anymore either :)"
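The error-message pattern the removed comment describes (importing an optional dependency lazily and, if it is missing, raising an error that tells the user how to install it) can be sketched as follows; `require` and its wording are a hypothetical illustration, not llm-foundry's actual helper:

```python
import importlib


def require(module_name: str, install_hint: str):
    """Import an optional dependency, or raise an ImportError whose
    message includes installation instructions. This mirrors the pattern
    described in the diff comment: the package is not a hard PyPI
    requirement (PyPI rejects direct-URL dependencies), so the install
    instructions are surfaced only when the feature is actually used."""
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f'{module_name} is not installed. {install_hint}'
        ) from exc
```

Usage would look like `require('flash_attn_triton', 'Install it from the pinned source listed in the README.')`, deferring the failure until the optional code path is hit.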

@irenedea irenedea merged commit 4c99952 into mosaicml:bump_version_v0.7.0 Mar 26, 2024
4 of 7 checks passed
irenedea added a commit that referenced this pull request Mar 26, 2024
* Bump version

* Remove triton (#1062)

* Remove github action workflows for version bumps

* Fix cpu test issues

* code quality

* Fix gpu tests

* Fix gpu tests nicely

* Remove z-loss (#1064)

* Remove prefix lm and denoising (#1065)

* Remove hf_prefix_lm

* Remove prefix_lm from mpt modeling

* Remove bidirectional mask

* Remove text denoising dataloading

* Remove adapt tokenizer

* Remove llama attention patch (#1066)

* Remove bidirectional mask in tests

* Fix test_hf_config_override with patch
KuuCi pushed a commit that referenced this pull request Apr 18, 2024