MMPreTrain, Upgrade from Classification to Pre-Train #1474
Comments
How can I use mmclassification instead of mmpretrain? Maybe a new repo would have been a good idea.

Just use the
@XinyueZ In my environment, it works well.

Yes, it is a typo. @MR-ei, we will fix it.
Dear community,
We are excited to announce the release of MMPreTrain, a new and upgraded deep learning pre-training library. It merges the original image classification library MMClassification with the self-supervised learning library MMSelfSup into a single pre-training algorithm library.
🤔 Compatibility with MMClassification
MMPreTrain is fully compatible with MMClassification's directory structure, supported algorithms, and usage. All code and projects based on the original mmcls can be migrated by simply changing the library name. For example:

```python
# Before: imports from mmcls
from mmcls.models import ResNet
from mmcls.datasets import ImageNet

# After: imports from mmpretrain
from mmpretrain.models import ResNet
from mmpretrain.datasets import ImageNet
```
```shell
# Before
python tools/train.py configs/xxx_xx.py
mim train mmcls xxxx_xx.py

# After
python tools/train.py configs/xxx_xx.py
mim train mmpretrain xxxx_xx.py
```
For more details about migrating from `0.x` to `mmpretrain`, you can refer to the migration doc.

👍 Major Upgrades
With the release of mmpretrain, we have made several major upgrades to our library.

1. Integrate Self-supervised Algorithms
First, we have integrated self-supervised learning tasks, which enables users to easily obtain pre-trained models for various tasks. You can find them in the `mmpretrain/models` directory, where a new `selfsup` folder supports 18 recent self-supervised learning algorithms.
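For example, a minimal sketch of where these algorithms now live (the import of MAE assumes it is among the 18 integrated algorithms; check the `selfsup` folder for the full list):

```python
# Self-supervised algorithms live under mmpretrain.models.selfsup.
# MAE is one example of the integrated algorithms.
from mmpretrain.models.selfsup import MAE

print(MAE)
```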
2. Provide convenient higher-level APIs

Secondly, we have provided more convenient higher-level APIs, making it easier for users to interact with our library:
- `list_models` supports fuzzy matching; you can use `*` to match any characters.
- `get_model` builds a model from its name.
- `ImageClassificationInferencer` runs image classification on given images, and can run inference on multiple images in batches on CUDA.
- `FeatureExtractor` extracts features directly from image files, whereas `model.extract_feat` requires a batch of tensors.
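As a quick illustration, a minimal sketch of these APIs (the model name `resnet18_8xb32_in1k` and the image paths are example values; adjust them to your setup):

```python
from mmpretrain import (FeatureExtractor, ImageClassificationInferencer,
                        get_model, list_models)

# Fuzzy matching: '*' matches any characters.
print(list_models('resnet*'))

# Build a model (optionally with pre-trained weights) from its name.
model = get_model('resnet18_8xb32_in1k', pretrained=True)

# Run inference on multiple images, batched on CUDA.
inferencer = ImageClassificationInferencer('resnet18_8xb32_in1k',
                                           device='cuda')
results = inferencer(['demo1.jpg', 'demo2.jpg'], batch_size=2)
print(results[0]['pred_class'])

# Extract features directly from image files, without building
# the input tensor batch yourself.
extractor = FeatureExtractor('resnet18_8xb32_in1k', device='cuda')
features = extractor('demo1.jpg')
```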
3. Based on the new training engine MMEngine

Building on MMEngine lets us better track upstream updates, such as new chips and training-framework releases, and makes it easier for downstream projects to call mmpretrain's pre-trained models.

We fully support torch 2.0, ensuring that our library is compatible with the latest version of PyTorch.
Add the following to your config. You can also refer to the MMEngine documentation for help.
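A minimal sketch, assuming MMEngine's top-level `compile` option (available in recent MMEngine versions with PyTorch >= 2.0) is the switch the post refers to:

```python
# In your config file: let the Runner wrap the model with torch.compile.
# Assumes PyTorch >= 2.0 and an MMEngine version that supports `compile`.
compile = True
```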
[Benchmark figure: the speed-boosting effect]
We also support visualizing image classification results. For more details, you can refer to this PR.
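A minimal sketch of visualizing predictions through the inferencer (the `show` and `show_dir` options are assumptions based on the inferencer interface, and the model name is an example):

```python
from mmpretrain import ImageClassificationInferencer

inferencer = ImageClassificationInferencer('resnet18_8xb32_in1k')

# Pop up a window with the prediction drawn on the image...
inferencer('demo1.jpg', show=True)

# ...or save the visualization to a directory instead.
inferencer('demo1.jpg', show_dir='./visualized/')
```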
↪ Feedback
We would like to invite the community to try it out and provide valuable feedback or suggestions. We are committed to improving our library and hope that you will join us on this journey.
The MMPreTrain team