Add support for AdapterPlus #746
Merged
Commits (35)
All 35 commits are by julian-fong:

- abeca7e initial commit/added new scaling option channel
- 20f8eee added the houlsby initialization
- f42a855 add drop_path to adapterplus config
- 54551ab added drop_path implementation
- 8f741f6 fixed typo inside AdapterPlusConfig and added DropPath inside __init_…
- 3845c76 reverted pre-commit changes in adapter_config.py
- d31f9ad Update adapter_config.py
- c92b390 Update adapter_config.py
- b6a43d7 revert pre-commit updates to modeling.py
- cc77959 Update modeling.py
- 9437d5b added config to init file
- 009b2fe update adapter_config.py
- 0660776 fixed StochasticDepth
- ca9905b update Adapter class
- 25b1488 made docstring consistent
- 9c25050 fixed bug with DropPath in forward function
- 79dd694 removed vision.py and added torchvision implementation of stochastic …
- 9166624 updated reduction_factor to 96 to that we get a rank of 8 with ViT mo…
- 169b303 updated __init__ file
- 0f54dae updated documentation
- b484947 Merge branch 'adapter-hub:main' into adapterplus
- 49bb668 Merge branch 'adapterplus' of github.com:julian-fong/adapters into ad…
- d899b50 added torchvision as an optional dependency, and added torchvision to…
- 2452306 update
- dea8830 updates
- e6d6fa2 Merge branch 'main' into adapterplus
- a7f7705 updates
- e99304b Merge branch 'adapterplus' of github.com:julian-fong/adapters into ad…
- 89ff2b4 updates
- 66093ec re-added new optional dependency torchvision
- 21f8aad fixed typo
- b770279 added notebook
- ef7a7dd updated readme
- 1abcdac fixed code formatting on modeling.py
- e896c52 Merge branch 'adapter-hub:main' into adapterplus
New file (+66 lines):
```python
# Module to support adapter training for vision related tasks

import torch.nn as nn


class StochasticDepth(nn.Module):
    """
    Applies Stochastic Depth (aka Drop Path) to residual networks.
    Constructed loosely upon the implementations in the `torchvision` library
    and the `timm` library.

    Randomly drops samples post-layer inside a batch with probability `drop_prob`;
    if `keep_prob_scaling` is True, kept samples are scaled by `1 / (1 - drop_prob)`.

    Paper: https://arxiv.org/pdf/1603.09382
    References: https://pytorch.org/vision/main/_modules/torchvision/ops/stochastic_depth.html#stochastic_depth
    """

    def __init__(self, drop_prob: float = 0.0, keep_prob_scaling: bool = True):
        super().__init__()
        self.drop_prob = drop_prob
        self.keep_prob_scaling = keep_prob_scaling

    def forward(self, x):
        return stochastic_depth(x, self.drop_prob, self.keep_prob_scaling, self.training)


def stochastic_depth(
    x, drop_prob: float = 0.0, keep_prob_scaling: bool = True, training: bool = False
):
    """
    Applies stochastic depth to a batch.

    Args:
        x: torch.Tensor of size (batch_size, ...)
            The output of a residual block
        drop_prob: float with 0.0 <= drop_prob < 1.0
            The probability of dropping a sample inside the batch
        keep_prob_scaling: bool, optional
            Whether to scale kept samples by `1 / keep_prob`
        training: bool, optional
            Whether the model is in training or inference mode. Like Dropout,
            stochastic depth is not applied during inference.
    """
    if drop_prob >= 1.0 or drop_prob < 0.0:
        raise ValueError("drop_prob must satisfy 0.0 <= drop_prob < 1.0")

    if drop_prob == 0.0 or not training:
        return x

    keep_prob = 1.0 - drop_prob
    # build a per-sample mask of shape (batch_size, 1, ..., 1), i.e. one
    # Bernoulli draw per sample in the batch
    sample_shape = [x.shape[0]] + [1] * (x.ndim - 1)

    bernoulli_tensor = x.new_empty(
        sample_shape, dtype=x.dtype, device=x.device
    ).bernoulli_(keep_prob)
    if keep_prob_scaling:
        bernoulli_tensor.div_(keep_prob)

    return x * bernoulli_tensor
```
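For illustration, here is a condensed, self-contained restatement of the module above together with its train/eval behaviour. The class and function names come from the file; the toy tensor shapes and `drop_prob` value are assumptions for the sketch, not from the PR.

```python
import torch
import torch.nn as nn


def stochastic_depth(x, drop_prob=0.0, keep_prob_scaling=True, training=False):
    # identity at inference time or when drop_prob is 0, as in the file above
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # one Bernoulli draw per sample: shape (batch_size, 1, ..., 1)
    shape = [x.shape[0]] + [1] * (x.ndim - 1)
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    if keep_prob_scaling:
        mask.div_(keep_prob)  # rescale kept samples so the expectation matches
    return x * mask


class StochasticDepth(nn.Module):
    def __init__(self, drop_prob=0.0, keep_prob_scaling=True):
        super().__init__()
        self.drop_prob = drop_prob
        self.keep_prob_scaling = keep_prob_scaling

    def forward(self, x):
        return stochastic_depth(x, self.drop_prob, self.keep_prob_scaling, self.training)


# typical use inside a residual block: out = x + drop_path(sublayer(x))
x = torch.ones(8, 4)
layer = StochasticDepth(drop_prob=0.5)

layer.eval()                     # inference: identity, like Dropout
print(torch.equal(layer(x), x))  # True

layer.train()
y = layer(x)
# each row of y is either all zeros (dropped) or scaled by 1 / keep_prob = 2.0
```

Scaling by `1 / keep_prob` during training keeps the expected activation magnitude unchanged, so no rescaling is needed at inference.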
Review comment: imports are added twice in this file, once here and once in the type checking block below.
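For context, the pattern the reviewer refers to looks roughly like this: a `TYPE_CHECKING` block holds imports that only static type checkers see, so repeating a runtime import inside it is redundant. The specific imports below are hypothetical, not taken from the PR.

```python
from typing import TYPE_CHECKING

import math  # runtime import: actually executed and used by the code

if TYPE_CHECKING:
    # Imports here exist only for static type checkers and are skipped at
    # runtime. Repeating `import math` in this block as well would be the
    # duplication the review comment flags.
    from decimal import Decimal  # hypothetical, annotation-only import


def circle_area(r: float) -> float:
    return math.pi * r * r
```

Keeping each import in exactly one of the two places avoids confusion about which imports the module really needs at runtime.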