
Add DINOv2 with registers #35348

Merged
23 commits
fc8324a
added changes from 32905
BernardZach Dec 6, 2024
0ed1114
fixed mistakes caused by select all paste
BernardZach Dec 6, 2024
64fd5e1
Merge branch 'main' of https://github.com/huggingface/transformers in…
BernardZach Dec 9, 2024
cbfa985
rename diff_dinov2...
BernardZach Dec 9, 2024
125197b
ran tests
BernardZach Dec 12, 2024
5a9256e
Merge pull request #1 from innovationcore/zach/Dino-v2-with-registers
BernardZach Dec 12, 2024
39a573a
Fix modular
NielsRogge Dec 19, 2024
f6338f2
Fix tests
NielsRogge Dec 19, 2024
8f327b6
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 19, 2024
b93fc8f
Use new init
NielsRogge Dec 20, 2024
87263fa
Simplify drop path
NielsRogge Dec 20, 2024
b9eccbf
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 20, 2024
d85a9c6
Convert all checkpoints
NielsRogge Dec 22, 2024
2c072b4
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 22, 2024
aac007b
Add figure and summary
NielsRogge Dec 23, 2024
e5c4dd2
Merge branch 'main' into add_dinov_2_registers_innovationcore
NielsRogge Dec 23, 2024
13b3235
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 23, 2024
8b13023
Update paths
NielsRogge Dec 24, 2024
7ea3747
Merge remote-tracking branch 'upstream/main' into add_dinov_2_registe…
NielsRogge Dec 24, 2024
19af3f2
Update docs
NielsRogge Dec 24, 2024
8a129f3
Update docs
NielsRogge Dec 24, 2024
e72e7f8
Update toctree
NielsRogge Dec 24, 2024
b0dd519
Update docs
NielsRogge Dec 24, 2024
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -653,6 +653,8 @@
title: DiNAT
- local: model_doc/dinov2
title: DINOV2
- local: model_doc/dinov2_with_registers
title: Dinov2WithRegisters
- local: model_doc/dit
title: DiT
- local: model_doc/dpt
1 change: 1 addition & 0 deletions docs/source/en/index.md
@@ -127,6 +127,7 @@ Flax), PyTorch, and/or TensorFlow.
| [DialoGPT](model_doc/dialogpt) | ✅ | ✅ | ✅ |
| [DiNAT](model_doc/dinat) | ✅ | ❌ | ❌ |
| [DINOv2](model_doc/dinov2) | ✅ | ❌ | ✅ |
| [Dinov2WithRegisters](model_doc/dinov2_with_registers) | ✅ | ❌ | ❌ |
| [DistilBERT](model_doc/distilbert) | ✅ | ✅ | ✅ |
| [DiT](model_doc/dit) | ✅ | ❌ | ✅ |
| [DonutSwin](model_doc/donut) | ✅ | ❌ | ❌ |
54 changes: 54 additions & 0 deletions docs/source/en/model_doc/dinov2_with_registers.md
@@ -0,0 +1,54 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Dinov2WithRegisters

## Overview

The Dinov2 With Registers model was proposed in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) by Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski.

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) [originally introduced](https://arxiv.org/abs/2010.11929) for supervised image classification on ImageNet.

Subsequent work showed how to make ViTs excel at self-supervised feature extraction, i.e. learning meaningful features (also called embeddings) from images without requiring any labels. Notable examples include [DINOv2](https://huggingface.co/papers/2304.07193) and [MAE](https://arxiv.org/abs/2111.06377).

The authors of this paper noticed that ViTs (such as DINOv2) exhibit artifacts in their attention maps, caused by the model repurposing some low-informative image patches as internal "registers" for computation. The proposed fix is simple: append a few extra learnable tokens (called "register" tokens) to the input sequence; the model can use these for its internal computations, and their outputs are discarded afterwards. This results in:
- no artifacts
- interpretable attention maps
- improved performance.

A minimal sketch of the mechanism follows this list.
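
Concretely, the register tokens are extra learnable embeddings concatenated into the token sequence and dropped on readout. Here is a simplified, self-contained sketch; the dimensions and variable names are made up for illustration, and this is not the literal Hugging Face implementation:

```python
import torch
import torch.nn as nn

batch_size, num_patches, hidden_dim = 2, 196, 768
num_register_tokens = 4  # the paper uses a handful of registers

# learnable extra tokens, shared across all images
cls_token = nn.Parameter(torch.zeros(1, 1, hidden_dim))
register_tokens = nn.Parameter(torch.zeros(1, num_register_tokens, hidden_dim))

patch_embeddings = torch.randn(batch_size, num_patches, hidden_dim)

# registers are concatenated between the [CLS] token and the patch tokens
tokens = torch.cat(
    [
        cls_token.expand(batch_size, -1, -1),
        register_tokens.expand(batch_size, -1, -1),
        patch_embeddings,
    ],
    dim=1,
)  # (batch_size, 1 + num_register_tokens + num_patches, hidden_dim)

# ... run the transformer encoder on `tokens` ...
encoded = tokens  # stand-in for encoder(tokens)

# the register outputs are simply discarded when reading out features
cls_output = encoded[:, 0]
patch_outputs = encoded[:, 1 + num_register_tokens :]
```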

The abstract from the paper is the following:

*Transformers have recently emerged as a powerful tool for learning visual representations. In this paper, we identify and characterize artifacts in feature maps of both supervised and self-supervised ViT networks. The artifacts correspond to high-norm tokens appearing during inference primarily in low-informative background areas of images, that are repurposed for internal computations. We propose a simple yet effective solution based on providing additional tokens to the input sequence of the Vision Transformer to fill that role. We show that this solution fixes that problem entirely for both supervised and self-supervised models, sets a new state of the art for self-supervised visual models on dense visual prediction tasks, enables object discovery methods with larger models, and most importantly leads to smoother feature maps and attention maps for downstream visual processing.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/dinov2_with_registers_visualization.png"
alt="drawing" width="600"/>

<small> Visualization of attention maps of various models trained with vs. without registers. Taken from the <a href="https://arxiv.org/abs/2309.16588">original paper</a>. </small>

Tips:

- Usage of DINOv2 with registers is identical to plain DINOv2; you'll just get better performance. A usage sketch is shown below.
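
For instance, feature extraction could look as follows. This is a sketch; the exact checkpoint name on the Hub is an assumption based on this PR's naming scheme:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Dinov2WithRegistersModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# checkpoint name is assumed; check the Hub for the converted checkpoints
checkpoint = "facebook/dinov2-with-registers-base"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Dinov2WithRegistersModel.from_pretrained(checkpoint)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# the output sequence should contain [CLS] + register tokens + patch tokens
print(outputs.last_hidden_state.shape)
```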

This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/facebookresearch/dinov2).


## Dinov2WithRegistersConfig

[[autodoc]] Dinov2WithRegistersConfig

## Dinov2WithRegistersModel

[[autodoc]] Dinov2WithRegistersModel
- forward

## Dinov2WithRegistersForImageClassification

[[autodoc]] Dinov2WithRegistersForImageClassification
- forward
1 change: 1 addition & 0 deletions docs/source/en/perf_infer_gpu_one.md
@@ -238,6 +238,7 @@ For now, Transformers supports SDPA inference and training for the following architectures:
* [Dbrx](https://huggingface.co/docs/transformers/model_doc/dbrx#transformers.DbrxModel)
* [DeiT](https://huggingface.co/docs/transformers/model_doc/deit#transformers.DeiTModel)
* [Dinov2](https://huggingface.co/docs/transformers/en/model_doc/dinov2)
* [Dinov2_with_registers](https://huggingface.co/docs/transformers/en/model_doc/dinov2_with_registers)
* [DistilBert](https://huggingface.co/docs/transformers/model_doc/distilbert#transformers.DistilBertModel)
* [Dpr](https://huggingface.co/docs/transformers/model_doc/dpr#transformers.DprReader)
* [EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder_decoder#transformers.EncoderDecoderModel)
16 changes: 16 additions & 0 deletions src/transformers/__init__.py
@@ -404,6 +404,7 @@
"models.dialogpt": [],
"models.dinat": ["DinatConfig"],
"models.dinov2": ["Dinov2Config"],
"models.dinov2_with_registers": ["Dinov2WithRegistersConfig"],
"models.distilbert": [
"DistilBertConfig",
"DistilBertTokenizer",
@@ -2159,6 +2160,14 @@
"Dinov2PreTrainedModel",
]
)
_import_structure["models.dinov2_with_registers"].extend(
[
"Dinov2WithRegistersBackbone",
"Dinov2WithRegistersForImageClassification",
"Dinov2WithRegistersModel",
"Dinov2WithRegistersPreTrainedModel",
]
)
_import_structure["models.distilbert"].extend(
[
"DistilBertForMaskedLM",
@@ -5361,6 +5370,7 @@
from .models.detr import DetrConfig
from .models.dinat import DinatConfig
from .models.dinov2 import Dinov2Config
from .models.dinov2_with_registers import Dinov2WithRegistersConfig
from .models.distilbert import (
DistilBertConfig,
DistilBertTokenizer,
@@ -7017,6 +7027,12 @@
Dinov2Model,
Dinov2PreTrainedModel,
)
from .models.dinov2_with_registers import (
Dinov2WithRegistersBackbone,
Dinov2WithRegistersForImageClassification,
Dinov2WithRegistersModel,
Dinov2WithRegistersPreTrainedModel,
)
from .models.distilbert import (
DistilBertForMaskedLM,
DistilBertForMultipleChoice,
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -77,6 +77,7 @@
dialogpt,
dinat,
dinov2,
dinov2_with_registers,
distilbert,
dit,
donut,
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -94,6 +94,7 @@
("detr", "DetrConfig"),
("dinat", "DinatConfig"),
("dinov2", "Dinov2Config"),
("dinov2_with_registers", "Dinov2WithRegistersConfig"),
("distilbert", "DistilBertConfig"),
("donut-swin", "DonutSwinConfig"),
("dpr", "DPRConfig"),
@@ -404,6 +405,7 @@
("dialogpt", "DialoGPT"),
("dinat", "DiNAT"),
("dinov2", "DINOv2"),
("dinov2_with_registers", "Dinov2WithRegisters"),
("distilbert", "DistilBERT"),
("dit", "DiT"),
("donut-swin", "DonutSwin"),
4 changes: 4 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -92,6 +92,7 @@
("detr", "DetrModel"),
("dinat", "DinatModel"),
("dinov2", "Dinov2Model"),
("dinov2_with_registers", "Dinov2WithRegistersModel"),
("distilbert", "DistilBertModel"),
("donut-swin", "DonutSwinModel"),
("dpr", "DPRQuestionEncoder"),
@@ -584,6 +585,7 @@
("detr", "DetrModel"),
("dinat", "DinatModel"),
("dinov2", "Dinov2Model"),
("dinov2_with_registers", "Dinov2WithRegistersModel"),
("dpt", "DPTModel"),
("efficientformer", "EfficientFormerModel"),
("efficientnet", "EfficientNetModel"),
@@ -659,6 +661,7 @@
),
("dinat", "DinatForImageClassification"),
("dinov2", "Dinov2ForImageClassification"),
("dinov2_with_registers", "Dinov2WithRegistersForImageClassification"),
(
"efficientformer",
(
@@ -1373,6 +1376,7 @@
("convnextv2", "ConvNextV2Backbone"),
("dinat", "DinatBackbone"),
("dinov2", "Dinov2Backbone"),
("dinov2_with_registers", "Dinov2WithRegistersBackbone"),
("focalnet", "FocalNetBackbone"),
("hiera", "HieraBackbone"),
("maskformer-swin", "MaskFormerSwinBackbone"),
2 changes: 0 additions & 2 deletions src/transformers/models/dinov2/modeling_dinov2.py
@@ -351,7 +351,6 @@ def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
"""
Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks,
however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the
@@ -368,7 +367,6 @@ def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor:
return output


# Copied from transformers.models.beit.modeling_beit.BeitDropPath
class Dinov2DropPath(nn.Module):
"""Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""

27 changes: 27 additions & 0 deletions src/transformers/models/dinov2_with_registers/__init__.py
@@ -0,0 +1,27 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure


if TYPE_CHECKING:
from .configuration_dinov2_with_registers import *
from .modeling_dinov2_with_registers import *
else:
import sys

_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)