Commit
add MobileOne backbone
Your Name committed Jun 27, 2022
1 parent 0da5a44 commit 63ad3e2
Showing 5 changed files with 416 additions and 97 deletions.
150 changes: 72 additions & 78 deletions README.md
@@ -1,5 +1,9 @@
# NB


> Please install `nb` with `pip install nbnb`; the `nb` name was already taken, so we use `nbnb` instead!

Neural network Blocks (aka **NB**, or neural network builder). This library provides a large set of fancy, ready-to-import blocks for building powerful models quickly. SOTA tricks and connections such as CSP, ASFF, Attention, BaseConv, Hardswish, and Mish are all included for rapid model prototyping. This is an **arsenal for the deep learning forge**.

**nb** is an idea that comes from engineering: we build models from common blocks and explore new ideas with SOTA tricks, and all of those things are gathered into one single place for quick model design and prototyping.
@@ -8,6 +12,74 @@ this project is under construct for now, I will update it quickly once I found s



## Updates

- **2022.06.27**: Added `MobileOne` backbone support (see the sketch below)!
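
  A minimal usage sketch, mirroring `examples/mobileone.py` from this commit (`deploy_mode=True` builds the re-parameterized inference-time network):

```python
import torch
from nb.torch.backbones.mobileone import MobileOne

model = MobileOne(deploy_mode=True)
x = torch.randn(2, 3, 224, 224)
print(model(x).shape)
```

  Per the new `MobileOne.__init__`, passing `for_classification=False` drops the avg-pool/1x1-conv head so the network can serve as a feature-extraction backbone.
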
- **2021.03.16**: Added some blocks used inside Scaled-YoloV4 (P5, P6, P7). They are:
  - `HarDBlock`;
  - `SPPCSP`;
  - `VoVCSP`;

  You can use these blocks to stack up your model now.

```python
from nb.torch.blocks.csp_blocks import VoVCSP
```
- **2021.01.22**: Added the Mish activation function. You can use it in your model with the following code:

```python
from nb.torch.base import build_activation_layer
act = build_activation_layer(act_cfg=dict(type='Mish'))
```

- **2021.01.22**: Added the Triplet Attention mechanism. You can plug it into any of your conv-net blocks using the following code:

```python
import torch
from nb.torch.blocks.attention_blocks import TripletAttention

att_mechanism = TripletAttention()
rand_tensor = torch.rand(1, 3, 32, 32)
output = att_mechanism(rand_tensor)
assert output.shape == rand_tensor.shape  # shape-preserving
```
`TripletAttention` is a shape-preserving module: it expects a 4-dimensional input (B, C, H, W) and returns a 4-dimensional output of the same shape (B, C, H, W).

- **2021.01.14**: Added SiLU, introduced in PyTorch 1.7. You can now build an activation layer with:

```python
from nb.torch.base import build_activation_layer
act = build_activation_layer(act_cfg=dict(type='SiLU'))
```

The PANet module is also provided now, and BiFPN is on the way. We will also provide more examples of how to use them!

- **2020.09.28**: ASFF module added to **nb**. We now have an ASFF variant of YoloV5! Experiments will be added here once we confirm that the ASFF module improves model performance.

- **2020.09.22**: New backbones `Ghostnet` and `MobilenetV3` included. Either of them can be used as a drop-in replacement for your application's backbone; see the sketch below.
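
  A hedged sketch of dropping one in (the `MobilenetV3_Small` import path appears later in this README; building it with default constructor arguments is an assumption):

```python
import torch
from nb.torch.backbones.mobilenetv3_new import MobilenetV3_Small

# assumption: the backbone builds with default constructor arguments
backbone = MobilenetV3_Small()
features = backbone(torch.randn(1, 3, 224, 224))
```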

- **2020.09.14**: We released a preliminary version, 0.04, with which you can easily build a simple YoloV5 using **nb**!

```shell
pip install nbnb
```

- **2020.09.12**: New backbone SpineNet added:

SpineNet is a backbone designed specifically for detection; it is a backbone that can also do an FPN's job! For more info, please refer to Google's post [link](https://ai.googleblog.com/2020/06/spinenet-novel-architecture-for-object.html).

```python
from nb.torch.backbones.spinenet import SpineNet

model = SpineNet()
```

- **2020.09.11**: Newly added blocks:

```
resnet.Bottleneck
resnet.BasicBlock
ConvBase
```


## Install

**nb** can be installed from PyPI; remember, the package name is `nbnb`:
@@ -134,84 +206,6 @@ from nb.torch.backbones.mobilenetv3_new import MobilenetV3_Small
```





## Support Matrix

We list all the `conv` and `block` types supported in **nb** here:
12 changes: 12 additions & 0 deletions examples/mobileone.py
@@ -0,0 +1,12 @@
from nb.torch.backbones.mobileone import MobileOne
import torch


# deploy_mode=True builds the re-parameterized inference-time network
a = MobileOne(deploy_mode=True)

x = torch.randn(2, 3, 224, 224)
print(a)

o = a(x)

print(o.shape)
4 changes: 2 additions & 2 deletions nb/torch/backbones/layers/basic_blocks.py
@@ -5,7 +5,6 @@
import torch.nn as nn
from .batchnorm import (
FrozenBatchNorm2d,
GroupNorm,
NaiveSyncBatchNorm,
NaiveSyncBatchNorm1d,
NaiveSyncBatchNorm3d,
@@ -15,6 +14,7 @@
from .blur_pool import BlurPool2d as BlurPool
from ...utils import helper as hp
from alfred import logger
from ...utils import iter_utils as iu

# needed for SE module with fx tracing
torch.fx.wrap("len")
@@ -300,7 +300,7 @@ def _create_bn(bn_class):
# any dimension
"sync_bn_torch": lambda: _create_bn(nn.SyncBatchNorm),
# others
"gn": lambda: GroupNorm(num_channels=num_channels, **kwargs),
"gn": lambda: nn.GroupNorm(num_channels=num_channels, **kwargs),
"instance": lambda: nn.InstanceNorm2d(num_channels, **kwargs),
"frozen_bn": lambda: FrozenBatchNorm2d(num_channels, **kwargs),
}
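
For context on this change: the custom `GroupNorm` wrapper import is dropped and `torch.nn.GroupNorm` is used directly. `nn.GroupNorm` also requires `num_groups`, which this factory would receive through `**kwargs`. A standalone sketch of the resulting call, with hypothetical values:

```python
import torch
import torch.nn as nn

# hypothetical values: 32 groups over 256 channels, i.e. what the "gn" entry
# builds when kwargs carries num_groups=32
gn = nn.GroupNorm(num_groups=32, num_channels=256)
y = gn(torch.randn(2, 256, 14, 14))
print(y.shape)  # torch.Size([2, 256, 14, 14]) -- shape-preserving
```
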
75 changes: 58 additions & 17 deletions nb/torch/backbones/mobileone.py
@@ -1,4 +1,5 @@
"""
Each config entry is of the form [op, c, s, n] (operator, out_channels, stride, num_repeats):
"MobileOne-S0-Deploy": {
"input_size": 224,
"basic_args": BASIC_ARGS,
@@ -85,27 +86,67 @@
from torch.nn import Module
from torch import nn

from nb.torch.backbones.layers.mobileone_block import MobileOneBlock


class MobileOne(Module):
    def __init__(
        self,
        num_classes=1000,
        deploy_mode=False,
        for_classification=True,
    ):
super(MobileOne, self).__init__()
self.num_classes = num_classes

self.for_classification = for_classification

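        # each stage entry is (op, out_channels, stride, num_repeats, extra_args)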
cfg_s1 = [
[("mobileone", 96, 2, 1, {"over_param_branches": 1})],
[("mobileone", 96, 2, 2, {"over_param_branches": 1})],
[("mobileone", 192, 2, 8, {"over_param_branches": 1})],
[("mobileone", 512, 2, 5, {"over_param_branches": 1})],
[("mobileone", 512, 1, 5, {"over_param_branches": 1})],
[("mobileone", 1280, 2, 1, {"over_param_branches": 1})],
[
("adaptive_avg_pool", 1280, 1, 1, {"output_size": 1}),
("conv_k1", 1280, 1, 1, {"bias": False}),
],
]

if not for_classification:
# discard last fc layer
cfg_s1 = cfg_s1[:-1]

in_channels = 3
_blocks = nn.ModuleList([])
        num_block = 0
        for l_cfg in cfg_s1:
            op, c, s, n, _ = l_cfg[0]
            if op != "mobileone":
                # the avg_pool / conv_k1 head stage is built separately below
                continue
            out_channels = c
            for i in range(n):
                _blocks.append(
                    MobileOneBlock(
                        in_channels,
                        out_channels,
                        # assumption: only the first block of a stage downsamples
                        stride=s if i == 0 else 1,
                        deploy=deploy_mode,
                    )
                )
                in_channels = out_channels
                num_block += 1
self._blocks = _blocks

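        # classification head: global average pooling, then a 1x1 conv acting as the final fc layer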
if for_classification:
self.avg_pool = nn.AdaptiveAvgPool2d(output_size=1)
self.conv_k1 = nn.Conv2d(1280, 1280, 1)


    def forward(self, x):
        for block in self._blocks:
            x = block(x)
        if self.for_classification:
            x = self.avg_pool(x)
            x = self.conv_k1(x)

        return x


