Bump timm from 1.0.11 to 1.0.12 (#1328)
Bumps [timm](https://github.com/huggingface/pytorch-image-models) from
1.0.11 to 1.0.12.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/huggingface/pytorch-image-models/releases">timm's
releases</a>.</em></p>
<blockquote>
<h2>Release v1.0.12</h2>
<h2>Nov 28, 2024</h2>
<ul>
<li>More optimizers (see the usage sketch after this list)
<ul>
<li>Add MARS optimizer (<a
href="https://arxiv.org/abs/2411.10438">https://arxiv.org/abs/2411.10438</a>,
<a
href="https://github.com/AGI-Arena/MARS">https://github.com/AGI-Arena/MARS</a>)</li>
<li>Add LaProp optimizer (<a
href="https://arxiv.org/abs/2002.04839">https://arxiv.org/abs/2002.04839</a>,
<a
href="https://github.com/Z-T-WANG/LaProp-Optimizer">https://github.com/Z-T-WANG/LaProp-Optimizer</a>)</li>
<li>Add masking from 'Cautious Optimizers' (<a
href="https://arxiv.org/abs/2411.16085">https://arxiv.org/abs/2411.16085</a>,
<a
href="https://github.com/kyleliang919/C-Optim">https://github.com/kyleliang919/C-Optim</a>)
to Adafactor, Adafactor Big Vision, AdamW (legacy), Adopt, Lamb, LaProp,
Lion, NadamW, RMSPropTF, SGDW</li>
<li>Clean up some docstrings and type annotations re optimizers and
factory</li>
</ul>
</li>
<li>Add MobileNet-V4 Conv Medium models pretrained on in12k and
fine-tuned on in1k @ 384x384
<ul>
<li><a
href="https://huggingface.co/timm/mobilenetv4_conv_medium.e250_r384_in12k_ft_in1k">https://huggingface.co/timm/mobilenetv4_conv_medium.e250_r384_in12k_ft_in1k</a></li>
<li><a
href="https://huggingface.co/timm/mobilenetv4_conv_medium.e250_r384_in12k">https://huggingface.co/timm/mobilenetv4_conv_medium.e250_r384_in12k</a></li>
<li><a
href="https://huggingface.co/timm/mobilenetv4_conv_medium.e180_ad_r384_in12k">https://huggingface.co/timm/mobilenetv4_conv_medium.e180_ad_r384_in12k</a></li>
<li><a
href="https://huggingface.co/timm/mobilenetv4_conv_medium.e180_r384_in12k">https://huggingface.co/timm/mobilenetv4_conv_medium.e180_r384_in12k</a></li>
</ul>
</li>
<li>Add a small cs3darknet, quite good for its speed
<ul>
<li><a
href="https://huggingface.co/timm/cs3darknet_focus_s.ra4_e3600_r256_in1k">https://huggingface.co/timm/cs3darknet_focus_s.ra4_e3600_r256_in1k</a></li>
</ul>
</li>
</ul>
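<p>As a quick illustration of the additions above, here is a hedged sketch (not from the release itself): it assumes the new optimizers register under the names <code>'mars'</code> and <code>'laprop'</code>, as the notes suggest, and uses one of the MobileNet-V4 checkpoints linked in the list.</p>

```python
# Hedged sketch, not an official timm example. Assumes the optimizers added
# in this release are registered as 'mars' and 'laprop' in the factory.
import timm
from timm.optim import create_optimizer_v2

# One of the new MobileNet-V4 Conv Medium 384x384 weights listed above.
model = timm.create_model(
    'mobilenetv4_conv_medium.e250_r384_in12k_ft_in1k',
    pretrained=True,
)

# Create the newly added MARS optimizer through timm's factory.
optimizer = create_optimizer_v2(model, opt='mars', lr=3e-4, weight_decay=0.05)
```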
<h2>Nov 12, 2024</h2>
<ul>
<li>Optimizer factory refactor (see the sketch after this list)
<ul>
<li>New factory works by registering optimizers using an OptimInfo
dataclass w/ some key traits</li>
<li>Add <code>list_optimizers</code>, <code>get_optimizer_class</code>,
and <code>get_optimizer_info</code> alongside the reworked
<code>create_optimizer_v2</code> fn to explore optimizers and get info or
classes</li>
<li>Deprecate <code>optim.optim_factory</code>, move fns to
<code>optim/_optim_factory.py</code> and
<code>optim/_param_groups.py</code>, and encourage import via
<code>timm.optim</code></li>
</ul>
</li>
<li>Add Adopt (<a
href="https://github.com/iShohei220/adopt">https://github.com/iShohei220/adopt</a>)
optimizer</li>
<li>Add 'Big Vision' variant of Adafactor (<a
href="https://github.com/google-research/big_vision/blob/main/big_vision/optax.py">https://github.com/google-research/big_vision/blob/main/big_vision/optax.py</a>)
optimizer</li>
<li>Fix original Adafactor to pick better factorization dims for
convolutions</li>
<li>Tweak LAMB optimizer, taking advantage of improvements in
<code>torch.where</code> functionality since the original, and refactor
clipping a bit</li>
<li>Dynamic img size support in vit, deit, eva improved to support
resizing from non-square patch grids, thanks <a
href="https://github.com/wojtke">https://github.com/wojtke</a></li>
</ul>
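<p>To make the reworked factory concrete, a minimal sketch: the function names come from the notes above, while the exact signatures and return shapes are assumptions.</p>

```python
# Hedged sketch of the refactored optimizer factory (timm >= 1.0.12).
import timm
from timm.optim import create_optimizer_v2, get_optimizer_class, list_optimizers

print(list_optimizers())                  # names of all registered optimizers
adopt_cls = get_optimizer_class('adopt')  # class of the newly added Adopt

model = timm.create_model('resnet18')
optimizer = create_optimizer_v2(model, opt='adopt', lr=1e-3, weight_decay=0.05)
```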
<h2>Oct 31, 2024</h2>
<p>Add a set of new, very well trained ResNet &amp; ResNet-V2 18/34
(basic block) weights. See <a
href="https://huggingface.co/blog/rwightman/resnet-trick-or-treat">https://huggingface.co/blog/rwightman/resnet-trick-or-treat</a></p>
<h2>Oct 19, 2024</h2>
<ul>
<li>Clean up torch.amp usage to avoid CUDA-specific calls (as sketched
below), and merge support for Ascend (NPU) devices from <a
href="https://github.com/MengqingCao">MengqingCao</a>, which should now
work in PyTorch 2.5 with its new device extension autoloading feature.
Tested Intel Arc (XPU) in PyTorch 2.5 too and it (mostly) worked.</li>
</ul>
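<p>The pattern behind that cleanup, sketched under the assumption that the matching PyTorch 2.5 device extensions are installed: <code>torch.amp.autocast</code> keys off a <code>device_type</code> string, so the same code path can serve cuda, npu, or xpu.</p>

```python
# Hedged sketch of device-agnostic autocast: no CUDA-specific calls, the
# device type string selects the backend (cuda, npu, xpu, or cpu).
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# CPU autocast prefers bfloat16; GPUs commonly use float16.
amp_dtype = torch.bfloat16 if device.type == 'cpu' else torch.float16

x = torch.randn(8, 3, 224, 224, device=device)
with torch.amp.autocast(device_type=device.type, dtype=amp_dtype):
    y = torch.nn.functional.relu(x)  # runs under autocast on any backend
```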
<h2>What's Changed</h2>
<ul>
<li>mambaout.py: fixed bug by <a
href="https://github.com/NightMachinery"><code>@​NightMachinery</code></a>
in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2305">huggingface/pytorch-image-models#2305</a></li>
<li>Cleanup some amp related behaviour to better support different
(non-cuda) devices by <a
href="https://github.com/rwightman"><code>@​rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2308">huggingface/pytorch-image-models#2308</a></li>
<li>Add NPU backend support for val and inference by <a
href="https://github.com/MengqingCao"><code>@​MengqingCao</code></a> in
<a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2109">huggingface/pytorch-image-models#2109</a></li>
<li>Update some clip pretrained weights to point to new hub locations by
<a href="https://github.com/rwightman"><code>@​rwightman</code></a> in
<a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2311">huggingface/pytorch-image-models#2311</a></li>
<li>ResNet vs MNV4 v1/v2 18 &amp; 34 weights by <a
href="https://github.com/rwightman"><code>@​rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2316">huggingface/pytorch-image-models#2316</a></li>
<li>Replace deprecated positional argument with --data-dir by <a
href="https://github.com/JosuaRieder"><code>@​JosuaRieder</code></a> in
<a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2322">huggingface/pytorch-image-models#2322</a></li>
<li>Fix typo in train.py: bathes &gt; batches by <a
href="https://github.com/JosuaRieder"><code>@​JosuaRieder</code></a> in
<a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2321">huggingface/pytorch-image-models#2321</a></li>
<li>Fix positional embedding resampling for non-square inputs in ViT by
<a href="https://github.com/wojtke"><code>@​wojtke</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2317">huggingface/pytorch-image-models#2317</a></li>
<li>Add trust_remote_code argument to ReaderHfds by <a
href="https://github.com/grodino"><code>@​grodino</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2326">huggingface/pytorch-image-models#2326</a></li>
<li>Extend train epoch schedule by warmup_epochs if warmup_prefix
enabled by <a
href="https://github.com/rwightman"><code>@​rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2325">huggingface/pytorch-image-models#2325</a></li>
<li>Extend existing unit tests using Cover-Agent by <a
href="https://github.com/mrT23"><code>@​mrT23</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2331">huggingface/pytorch-image-models#2331</a></li>
<li>An impl of adafactor as per big vision (scaling vit) changes by <a
href="https://github.com/rwightman"><code>@​rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2320">huggingface/pytorch-image-models#2320</a></li>
<li>Add py.typed file as recommended by PEP 561 by <a
href="https://github.com/antoinebrl"><code>@​antoinebrl</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2252">huggingface/pytorch-image-models#2252</a></li>
<li>Add CODE_OF_CONDUCT.md and CITATION.cff files by <a
href="https://github.com/AlinaImtiaz018"><code>@​AlinaImtiaz018</code></a>
in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2333">huggingface/pytorch-image-models#2333</a></li>
<li>Add some 384x384 small model weights by <a
href="https://github.com/rwightman"><code>@​rwightman</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2334">huggingface/pytorch-image-models#2334</a></li>
<li>In dist training, update loss running avg every step, sync on log by
<a href="https://github.com/rwightman"><code>@​rwightman</code></a> in
<a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2340">huggingface/pytorch-image-models#2340</a></li>
<li>Improve WandB logging by <a
href="https://github.com/sinahmr"><code>@​sinahmr</code></a> in <a
href="https://redirect.github.com/huggingface/pytorch-image-models/pull/2341">huggingface/pytorch-image-models#2341</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/553ded5c6b9f2cd1ce6220baf6561ff526e8ff12"><code>553ded5</code></a>
Version 1.0.12</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/464885e13553bc8b74cf255c20c96624b05d8222"><code>464885e</code></a>
See if we can avoid some model / layer pickle issues with the aa attr in
Conv...</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/5fe5f9d48880fa1ec4bd28e1dade332b6fba0988"><code>5fe5f9d</code></a>
Add a different mnv4 conv-small weight</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/303f7691a168febb3c5a142d94f7efd6eb4ce422"><code>303f769</code></a>
Add cautious mars, improve test reliability by skipping grad diff for
first step</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/82e867769026432dc2ce0082a435679f2abe0d66"><code>82e8677</code></a>
Make LaProp weight decay match typical PyTorch 'decoupled' behaviour
where it...</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/886eb77938112acaf7e5df0c69cdad26f161c403"><code>886eb77</code></a>
Update README, missed small discrep in adafactor min dim update</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/e3e434bbc4c861c11984fcefd6812bbc3bfd38de"><code>e3e434b</code></a>
To be technically correct, need to check the in-place _ ver of op</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/7c32d3bd829ab74f71edebb6a793760df685f119"><code>7c32d3b</code></a>
Work around _foreach_maximum issue, need scalar other support</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/7cf683628fa93cef2ed6dd2f0dc7e6b17f689e4a"><code>7cf6836</code></a>
Cautious optimizer impl plus some typing cleanup.</li>
<li><a
href="https://github.com/huggingface/pytorch-image-models/commit/aeb1ed7a15594505c1585697c1cd90cb49e7a115"><code>aeb1ed7</code></a>
Keep basic optim test LR range closer to before w/ updated code</li>
<li>Additional commits viewable in <a
href="https://github.com/huggingface/pytorch-image-models/compare/v1.0.11...v1.0.12">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=timm&package-manager=pip&previous-version=1.0.11&new-version=1.0.12)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
dependabot[bot] authored Dec 6, 2024
1 parent b74b0b5 commit b31b06c
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion samples/export-requirements.txt
@@ -6,6 +6,6 @@ numpy<2.0.0; sys_platform == 'darwin'
 einops==0.8.0 # For Qwen
 transformers_stream_generator==0.0.5 # For Qwen
 diffusers==0.31.0 # For image generation pipelines
-timm==1.0.11 # For exporting InternVL2
+timm==1.0.12 # For exporting InternVL2
 torchvision # For visual language models
 transformers>=4.43 # For Whisper
