
[moe] support low level zero optim #4429

Merged: 5 commits merged into hpcaitech:feature/moe on Aug 14, 2023

Conversation

@oahzxl (Contributor) commented on Aug 14, 2023

📌 Checklist before creating the PR

  • I have created an issue for this PR for traceability
  • The title follows the standard format: [doc/gemini/tensor/...]: A concise description
  • I have added relevant tags if possible for us to better distinguish different PRs

🚨 Issue number

Link this PR to your issue with words like fixed to automatically close the linked issue upon merge

e.g. fixed #1234, closed #1234, resolved #1234

📝 What does this PR do?

Summarize your work here. If you have any plots/diagrams/screenshots/tables, please attach them here.

MoE params should not be stored in working_groups because they use a different parallel strategy, so we store them separately in param_groups instead of working_groups.
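As an illustration of the idea (not the actual LowLevelZeroOptimizer code), the change amounts to filtering MoE params out of the ZeRO-managed working groups; the is_moe_param marker below is a hypothetical stand-in for the MoE param interface:

import torch

def is_moe_param(p: torch.Tensor) -> bool:
    # Hypothetical marker attribute; the real check goes through the MoE param interface.
    return getattr(p, "moe_info", None) is not None

def split_params(params):
    """Separate MoE params (kept in their own param group) from params sharded by ZeRO."""
    moe_params, working_params = [], []
    for p in params:
        (moe_params if is_moe_param(p) else working_params).append(p)
    return moe_params, working_params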

💥 Checklist before requesting a review

  • I have linked my PR to an issue (instruction)
  • My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
  • I have performed a self-review of my code
  • I have added thorough tests
  • I have added docstrings for all the functions/methods I implemented

⭐️ Do you enjoy contributing to Colossal-AI?

  • 🌝 Yes, I do.
  • 🌚 No, I don't.

Tell us more if you don't enjoy contributing to Colossal-AI.

@github-actions (Contributor)

The code coverage for the changed files is 91%.

Complete report:
Name                                           Stmts   Miss  Cover
------------------------------------------------------------------
colossalai/zero/low_level/low_level_optim.py     329     30    91%
------------------------------------------------------------------
TOTAL                                            329     30    91%

@ver217 (Member) left a comment


By the way, can you create a moe param interface like dtensor? https://github.com/hpcaitech/ColossalAI/blob/main/colossalai/tensor/d_tensor/api.py

@github-actions (Contributor)

The code coverage for the changed files is 71%.

Complete report:
Name                                             Stmts   Miss  Cover
--------------------------------------------------------------------
colossalai/engine/gradient_handler/__init__.py       6      0   100%
colossalai/nn/layer/moe/experts.py                 128    104    19%
colossalai/nn/layer/moe/moe_param.py                 5      1    80%
colossalai/zero/low_level/low_level_optim.py       330     30    91%
--------------------------------------------------------------------
TOTAL                                              469    135    71%

@oahzxl (Contributor, Author) commented on Aug 14, 2023

By the way, can you create a moe param interface like dtensor? https://github.com/hpcaitech/ColossalAI/blob/main/colossalai/tensor/d_tensor/api.py

done
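For reference, an illustrative sketch of what a dtensor-style MoE param interface could look like; the function names and the moe_info attribute here are assumptions for illustration, not necessarily the contents of colossalai/tensor/moe_tensor/api.py:

import torch
import torch.distributed as dist

def set_moe_tensor_info(tensor: torch.Tensor, ep_group: dist.ProcessGroup) -> None:
    """Tag a parameter with expert-parallel metadata, marking it as a MoE tensor."""
    tensor.moe_info = {"ep_group": ep_group}

def is_moe_tensor(tensor: torch.Tensor) -> bool:
    """Check whether a tensor has been tagged as a MoE (expert-parallel) tensor."""
    return hasattr(tensor, "moe_info")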

@github-actions (Contributor)

The code coverage for the changed files is 71%.

Complete report:
Name                                             Stmts   Miss  Cover
--------------------------------------------------------------------
colossalai/engine/gradient_handler/__init__.py       6      0   100%
colossalai/nn/layer/moe/experts.py                 128    104    19%
colossalai/tensor/moe_tensor/api.py                  5      1    80%
colossalai/zero/low_level/low_level_optim.py       330     30    91%
--------------------------------------------------------------------
TOTAL                                              469    135    71%

@ver217 merged commit 769fde5 into hpcaitech:feature/moe on Aug 14, 2023
6 checks passed
@oahzxl deleted the optim branch on August 16, 2023 03:45
oahzxl added a commit to oahzxl/ColossalAI that referenced this pull request Sep 15, 2023
* update optim

* update grad handler

* update moe param interface

* update doc

* move moe tensor
oahzxl added a commit to oahzxl/ColossalAI that referenced this pull request Sep 15, 2023
* update optim

* update grad handler

* update moe param interface

* update doc

* move moe tensor
oahzxl added a commit to oahzxl/ColossalAI that referenced this pull request Oct 26, 2023
* update optim

* update grad handler

* update moe param interface

* update doc

* move moe tensor