
Update timm requirement from <0.6,>=0.5.4 to >=0.5.4,<0.7 #4

Open · wants to merge 1 commit into base: dev
Conversation

dependabot[bot]

@dependabot dependabot bot commented on behalf of github Aug 29, 2022

Updates the requirements on timm to permit the latest version.
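The change widens the upper bound of the version range from `<0.6` to `<0.7`. A minimal sketch in plain Python (not pip's or `packaging`'s actual resolver) of why the new range admits timm 0.6.7 while the old one rejected it:

```python
# Simplified version-range check (assumes plain dotted numeric versions,
# not full PEP 440 semantics).

def as_tuple(version: str) -> tuple:
    """Parse a dotted version like '0.6.7' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def permitted(version: str, lower: str, upper: str) -> bool:
    """True if lower <= version < upper."""
    return as_tuple(lower) <= as_tuple(version) < as_tuple(upper)

# Old constraint >=0.5.4,<0.6 excludes 0.6.7; new constraint >=0.5.4,<0.7 allows it.
print(permitted("0.6.7", "0.5.4", "0.6"))   # False
print(permitted("0.6.7", "0.5.4", "0.7"))   # True
```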

Release notes

Sourced from timm's releases.

v0.6.7 Release

Minor bug fixes and a few more weights since 0.6.5

  • A few more weights & model defs added:
    • darknetaa53 - 79.8 @ 256, 80.5 @ 288
    • convnext_nano - 80.8 @ 224, 81.5 @ 288
    • cs3sedarknet_l - 81.2 @ 256, 81.8 @ 288
    • cs3darknet_x - 81.8 @ 256, 82.2 @ 288
    • cs3sedarknet_x - 82.2 @ 256, 82.7 @ 288
    • cs3edgenet_x - 82.2 @ 256, 82.7 @ 288
    • cs3se_edgenet_x - 82.8 @ 256, 83.5 @ 320
  • cs3* weights above all trained on TPU w/ bits_and_tpu branch. Thanks to TRC program!
  • Add output_stride=8 and 16 support to ConvNeXt (dilation)
  • Fixed deit3 models not being able to resize pos_emb
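The `output_stride=8` / `16` item above refers to the standard dilation trick: a stage's stride-2 downsample is replaced by a stride-1 convolution with dilation 2, so the feature map stops shrinking and the network's overall output stride stays at 8 or 16 instead of 32. A hedged sketch of the arithmetic (plain Python, not timm's implementation):

```python
# Standard convolution output-size formula, used here to show that swapping
# stride for dilation preserves spatial resolution.

def conv_out(size: int, kernel: int = 3, stride: int = 1,
             padding: int = 1, dilation: int = 1) -> int:
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# A stride-2 3x3 conv halves a 56x56 map ...
print(conv_out(56, stride=2))                          # 28
# ... while the dilated replacement (stride 1, dilation 2, padding 2)
# keeps the resolution, trading downsampling for a larger receptive field.
print(conv_out(56, stride=1, dilation=2, padding=2))   # 56
```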
Changelog

Sourced from timm's changelog.

  • Version 0.6.7 PyPi release (w/ above bug fixes and new weights since 0.6.5)

July 8, 2022

More models, more fixes

  • Official research models (w/ weights) added:
  • My own models:
    • Small ResNet defs added by request with 1 block repeats for both basic and bottleneck (resnet10 and resnet14)
    • CspNet refactored with dataclass config, simplified CrossStage3 (cs3) option. These are closer to YOLO-v5+ backbone defs.
    • More relative position vit fiddling. Two srelpos (shared relative position) models trained, and a medium w/ class token.
    • Add an alternate downsample mode to EdgeNeXt and train a small model. Better than original small, but not their new USI trained weights.
  • My own model weight results (all ImageNet-1k training)
    • resnet10t - 66.5 @ 176, 68.3 @ 224
    • resnet14t - 71.3 @ 176, 72.3 @ 224
    • resnetaa50 - 80.6 @ 224 , 81.6 @ 288
    • darknet53 - 80.0 @ 256, 80.5 @ 288
    • cs3darknet_m - 77.0 @ 256, 77.6 @ 288
    • cs3darknet_focus_m - 76.7 @ 256, 77.3 @ 288
    • cs3darknet_l - 80.4 @ 256, 80.9 @ 288
    • cs3darknet_focus_l - 80.3 @ 256, 80.9 @ 288
    • vit_srelpos_small_patch16_224 - 81.1 @ 224, 82.1 @ 320
    • vit_srelpos_medium_patch16_224 - 82.3 @ 224, 83.1 @ 320
    • vit_relpos_small_patch16_cls_224 - 82.6 @ 224, 83.6 @ 320
    • edgenext_small_rw - 79.6 @ 224, 80.4 @ 320
  • cs3, darknet, and vit_*relpos weights above all trained on TPU thanks to TRC program! Rest trained on overheating GPUs.
  • Hugging Face Hub support fixes verified, demo notebook TBA
  • Pretrained weights / configs can be loaded externally (ie from local disk) w/ support for head adaptation.
  • Add support to change image extensions scanned by timm datasets/parsers. See (rwightman/pytorch-image-models#1274)
  • Default ConvNeXt LayerNorm impl to use F.layer_norm(x.permute(0, 2, 3, 1), ...).permute(0, 3, 1, 2) via LayerNorm2d in all cases.
    • a bit slower than previous custom impl on some hardware (ie Ampere w/ CL), but overall fewer regressions across wider HW / PyTorch version ranges.
    • previous impl exists as LayerNormExp2d in models/layers/norm.py
  • Numerous bug fixes
  • Currently testing for imminent PyPi 0.6.x release
  • LeViT pretraining of larger models is still a WIP; they don't train well / easily without distillation. Time to add distill support (finally)?
  • ImageNet-22k weight training + finetune ongoing, work on multi-weight support (slowly) chugging along (there are a LOT of weights, sigh) ...
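The `LayerNorm2d` item above describes normalizing an NCHW tensor over its channel dimension at each spatial position, which timm achieves by permuting to NHWC, applying `F.layer_norm`, and permuting back. A pure-Python sketch of the same computation (illustrative only, not timm's code, which operates on torch tensors):

```python
import math

def layer_norm_2d(x, eps=1e-6):
    """x: nested lists shaped [N][C][H][W]; normalize over C at each (h, w)."""
    n, c, h, w = len(x), len(x[0]), len(x[0][0]), len(x[0][0][0])
    out = [[[[0.0] * w for _ in range(h)] for _ in range(c)] for _ in range(n)]
    for i in range(n):
        for y in range(h):
            for z in range(w):
                vals = [x[i][ch][y][z] for ch in range(c)]
                mean = sum(vals) / c
                var = sum((v - mean) ** 2 for v in vals) / c
                for ch in range(c):
                    out[i][ch][y][z] = (x[i][ch][y][z] - mean) / math.sqrt(var + eps)
    return out
```

For a single pixel with channel values [1.0, 3.0], the normalized result is approximately [-1.0, 1.0].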

May 13, 2022

  • Official Swin-V2 models and weights added from (https://github.com/microsoft/Swin-Transformer). Cleaned up to support torchscript.
  • Some refactoring for existing timm Swin-V2-CR impl, will likely do a bit more to bring parts closer to official and decide whether to merge some aspects.
  • More Vision Transformer relative position / residual post-norm experiments (all trained on TPU thanks to TRC program)
    • vit_relpos_small_patch16_224 - 81.5 @ 224, 82.5 @ 320 -- rel pos, layer scale, no class token, avg pool
    • vit_relpos_medium_patch16_rpn_224 - 82.3 @ 224, 83.1 @ 320 -- rel pos + res-post-norm, no class token, avg pool
    • vit_relpos_medium_patch16_224 - 82.5 @ 224, 83.3 @ 320 -- rel pos, layer scale, no class token, avg pool
    • vit_relpos_base_patch16_gapcls_224 - 82.8 @ 224, 83.9 @ 320 -- rel pos, layer scale, class token, avg pool (by mistake)
  • Bring 512 dim, 8-head 'medium' ViT model variant back to life (after using in a pre DeiT 'small' model for first ViT impl back in 2020)
  • Add ViT relative position support for switching btw existing impl and some additions in official Swin-V2 impl for future trials
  • Sequencer2D impl (https://arxiv.org/abs/2205.01972), added via PR from author (https://github.com/okojoalg)

... (truncated)

Commits
  • 7cd4204 Add TPU TRC acknowledge
  • 7d44d65 Update README and changelogs
  • d875a1d version 0.6.7
  • c865028 Update benchmark with latest model adds
  • 30bd174 Improve csv table result processing for better sort when updating
  • e987e29 Add convnext_nano and few cs3 models to existing results tables
  • 6f103a4 Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288
  • 4042a94 Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks.
  • c8f69e0 Merge pull request #1365 from veritable-tech/fix-resize-pos-embed
  • 99af63c Merge pull request #1277 from lukasugar/patch-1
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Updates the requirements on [timm](https://github.com/rwightman/pytorch-image-models) to permit the latest version.
- [Release notes](https://github.com/rwightman/pytorch-image-models/releases)
- [Changelog](https://github.com/rwightman/pytorch-image-models/blob/master/docs/changes.md)
- [Commits](huggingface/pytorch-image-models@v0.5.4...v0.6.7)

---
updated-dependencies:
- dependency-name: timm
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) label on Aug 29, 2022