Smaug support #212
Conversation
@@ -112,6 +131,7 @@ def moe_shard_gate_up_weight_scale(weight: relax.TensorStructInfo):

    return {
        "shard_qkv": shard_qkv_weight_scale,
        "shard_qkv_bias": shard_bias,
        "shard_mlp_k": shard_k_weight_scale,
        "shard_o_proj_k": shard_k_weight_scale,
I don't understand why the bias for the output projection must not be sharded. Initially I sharded it as well, but the result was incorrect. Then I remembered that the 1D scale for the output projection in FT quantization must not be sharded either: https://github.com/mlc-ai/mlc-llm/blob/main/mlc_llm/relax_model/commons.py#L316-L320. So I skipped the bias shard for the output projection and it worked.
If the sharding is done along the reduction dimension, the bias doesn't need to be sharded; instead, the bias needs to be added after the all-reduce, or divided by num_shards on each shard.
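To make that concrete, here is a small NumPy sketch (hypothetical shapes, not the actual model code) of a reduction-dimension (row-parallel) matmul like o_proj: each shard computes a partial product that gets summed by an all-reduce, so the bias must either be added once after the all-reduce or pre-divided by num_shards on every shard.

    import numpy as np

    np.random.seed(0)
    num_shards = 2
    x = np.random.randn(1, 8)   # activation
    W = np.random.randn(8, 4)   # o_proj-like weight
    b = np.random.randn(4)      # bias

    # Reference: unsharded result.
    ref = x @ W + b

    # Shard x and W along the reduction dimension (8 -> 2 x 4).
    x_shards = np.split(x, num_shards, axis=1)
    W_shards = np.split(W, num_shards, axis=0)

    # Wrong: each shard adds the full bias, so the all-reduce counts it num_shards times.
    wrong = sum(xs @ Ws + b for xs, Ws in zip(x_shards, W_shards))

    # Correct option 1: add the bias once, after the all-reduce.
    ok1 = sum(xs @ Ws for xs, Ws in zip(x_shards, W_shards)) + b

    # Correct option 2: each shard adds bias / num_shards before the all-reduce.
    ok2 = sum(xs @ Ws + b / num_shards for xs, Ws in zip(x_shards, W_shards))

    assert np.allclose(ref, ok1) and np.allclose(ref, ok2)
    assert not np.allclose(ref, wrong)

The same reasoning applies to the 1D FT quantization scale mentioned above: it acts on the output channels, which are not split on a row-parallel layer, so splitting it breaks the result.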
Thanks for the quick addition, @masahi!
It turned out that upstream MLC has never supported multi-GPU for models that use a bias before / after attention, so I needed to define a sharding func for the bias.
@sunggg @vinx13
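For reference, a minimal NumPy sketch of what such a bias sharding function could look like for column-parallel layers such as the QKV projection (hypothetical names; the real mlc-llm shard functions are built with Relax/TE, and a fused QKV bias additionally needs the same Q/K/V reordering as the weight): the 1D bias is split along the output dimension so each shard keeps the slice matching its weight columns, while the o_proj bias is left unsharded as discussed above.

    import numpy as np

    def shard_bias(bias: np.ndarray, num_shards: int) -> np.ndarray:
        """Split a 1D bias along the output dimension, one slice per shard.

        Hypothetical helper for column-parallel layers, where each shard owns a
        contiguous block of output channels. Row-parallel layers like o_proj
        keep the full bias instead (added once after the all-reduce).
        """
        (dim,) = bias.shape
        assert dim % num_shards == 0, "output dim must divide evenly across shards"
        return bias.reshape(num_shards, dim // num_shards)

Shard i would then use shard_bias(bias, num_shards)[i] alongside its slice of the weight.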