
Attention

30+ attention mechanisms

The code for each attention module is here 👇👇👇
https://github.com/Him-wen/YOLOC/tree/main/docs/attention_model


Attention Series


1. External Attention

1.1. Paper

"Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks"

1.2. Overview


2. Self Attention

2.1. Paper

"Attention Is All You Need"

2.2. Overview
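For reference, a minimal single-head scaled dot-product self-attention sketch; multi-head splitting, masking, and dropout are omitted, and the names are illustrative rather than the repo's exact code.

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head self-attention: softmax(Q K^T / sqrt(d)) V."""
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = math.sqrt(d_model)

    def forward(self, x):                                        # x: (B, N, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v                                          # (B, N, d_model)
```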


3. Simplified Self Attention

3.1. Paper

None

3.2. Overview


4. Squeeze-and-Excitation Attention

4.1. Paper

"Squeeze-and-Excitation Networks"

4.2. Overview
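A minimal sketch of the SE block, assuming the paper's default reduction ratio of 16: global average pooling (squeeze), a bottleneck MLP with sigmoid gating (excitation), then channel-wise rescaling. Names and defaults are illustrative, not the repo's exact module.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: rescale channels with a learned gating vector."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))       # squeeze -> excitation, (B, C)
        return x * w.view(b, c, 1, 1)         # channel-wise rescale
```

The block is drop-in: it can be appended after any convolutional stage whose channel count matches `channels`.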


5. SK Attention

5.1. Paper

"Selective Kernel Networks"

5.2. Overview


6. CBAM Attention

6.1. Paper

"CBAM: Convolutional Block Attention Module"

6.2. Overview
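A rough sketch of CBAM's two sequential gates: channel attention (a shared MLP over average- and max-pooled descriptors) followed by spatial attention (a 7x7 convolution over pooled channel maps). Class names and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelGate(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return torch.sigmoid(avg + mx)               # (B, C, 1, 1)

class SpatialGate(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)           # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention first, then spatial attention, applied multiplicatively."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel_gate = ChannelGate(channels, reduction)
        self.spatial_gate = SpatialGate(kernel_size)

    def forward(self, x):                            # x: (B, C, H, W)
        x = x * self.channel_gate(x)
        return x * self.spatial_gate(x)
```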


7. BAM Attention

7.1. Paper

"BAM: Bottleneck Attention Module"

7.2. Overview


8. ECA Attention

8.1. Paper

"ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks"

8.2. Overview
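Sketch of the ECA idea: keep the SE-style squeeze, but replace the bottleneck MLP with a cheap 1D convolution across the pooled channel vector. The fixed `kernel_size=3` stands in for the paper's adaptive kernel-size rule and is an assumption.

```python
import torch
import torch.nn as nn

class ECAAttention(nn.Module):
    """Efficient Channel Attention: 1D conv over the globally pooled channel descriptor."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                             # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                        # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)      # local cross-channel interaction
        return x * torch.sigmoid(y).view(x.size(0), -1, 1, 1)
```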


9. DANet Attention

9.1. Paper

"Dual Attention Network for Scene Segmentation"

9.2. Overview


10. Pyramid Split Attention

10.1. Paper

"EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network"

10.2. Overview


11. Efficient Multi-Head Self-Attention

11.1. Paper

"ResT: An Efficient Transformer for Visual Recognition"

11.2. Overview


12. Shuffle Attention

12.1. Paper

"SA-NET: SHUFFLE ATTENTION FOR DEEP CONVOLUTIONAL NEURAL NETWORKS"

12.2. Overview


13. MUSE Attention

13.1. Paper

"MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning"

13.2. Overview


14. SGE Attention

14.1. Paper

Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks

14.2. Overview


15. A2 Attention

15.1. Paper

A2-Nets: Double Attention Networks

15.2. Overview


16. AFT Attention

16.1. Paper

An Attention Free Transformer

16.2. Overview


17. Outlook Attention

17.1. Paper

VOLO: Vision Outlooker for Visual Recognition

17.2. Overview


18. ViP Attention

18.1. Paper

Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition

18.2. Overview


19. CoAtNet Attention

19.1. Paper

CoAtNet: Marrying Convolution and Attention for All Data Sizes

19.2. Overview

None


20. HaloNet Attention

20.1. Paper

Scaling Local Self-Attention for Parameter Efficient Visual Backbones

20.2. Overview


21. Polarized Self-Attention

21.1. Paper

Polarized Self-Attention: Towards High-quality Pixel-wise Regression

21.2. Overview


22. CoTAttention

22.1. Paper

Contextual Transformer Networks for Visual Recognition---arXiv 2021.07.26

22.2. Overview


23. Residual Attention

23.1. Paper

Residual Attention: A Simple but Effective Method for Multi-Label Recognition---ICCV2021

23.2. Overview


24. S2 Attention

24.1. Paper

S²-MLPv2: Improved Spatial-Shift MLP Architecture for Vision---arXiv 2021.08.02

24.2. Overview


25. GFNet Attention

25.1. Paper

Global Filter Networks for Image Classification---arXiv 2021.07.01

25.2. Overview


26. TripletAttention

26.1. Paper

Rotate to Attend: Convolutional Triplet Attention Module---CVPR 2021

26.2. Overview


27. Coordinate Attention

27.1. Paper

Coordinate Attention for Efficient Mobile Network Design---CVPR 2021

27.2. Overview
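A simplified sketch of coordinate attention: the feature map is pooled along H and along W separately, the two descriptors are encoded jointly, then split back into direction-aware gates. The paper's h-swish activation is replaced by ReLU here for brevity; names and the reduction ratio are illustrative.

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Coordinate attention with separate height-wise and width-wise gates."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                            # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w
```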


28. MobileViT Attention

28.1. Paper

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05

28.2. Overview


29. ParNet Attention

29.1. Paper

Non-deep Networks---ArXiv 2021.10.20

29.2. Overview


30. UFO Attention

30.1. Paper

UFO-ViT: High Performance Linear Vision Transformer without Softmax---ArXiv 2021.09.29

30.2. Overview

31. MobileViTv2 Attention

31.1. Paper

Separable Self-attention for Mobile Vision Transformers---ArXiv 2022.06.06

31.2. Overview


Backbone Series

1. ResNet

1.1. Paper

"Deep Residual Learning for Image Recognition---CVPR2016 Best Paper"

1.2. Overview

2. ResNeXt

2.1. Paper

"Aggregated Residual Transformations for Deep Neural Networks---CVPR2017"

2.2. Overview

3. MobileViT

3.1. Paper

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---ArXiv 2021.10.05

3.2. Overview

4. ConvMixer

4.1. Paper

Patches Are All You Need?---ICLR2022 (Under Review)

4.2. Overview

MLP Series

1. RepMLP

1.1. Paper

"RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition"

1.2. Overview

2. MLP-Mixer

2.1. Paper

"MLP-Mixer: An all-MLP Architecture for Vision"

2.2. Overview
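A minimal sketch of one Mixer block: a token-mixing MLP applied across the patch axis, then a channel-mixing MLP across features, each with a residual connection and pre-LayerNorm. Hidden sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim))

    def forward(self, x):
        return self.net(x)

class MixerBlock(nn.Module):
    """Token mixing over patches, then channel mixing over features."""
    def __init__(self, num_patches, dim, token_hidden=256, channel_hidden=1024):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = MlpBlock(num_patches, token_hidden)
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = MlpBlock(dim, channel_hidden)

    def forward(self, x):                                # x: (B, num_patches, dim)
        y = self.norm1(x).transpose(1, 2)                # (B, dim, num_patches)
        x = x + self.token_mlp(y).transpose(1, 2)        # mix across patches
        return x + self.channel_mlp(self.norm2(x))       # mix across channels
```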


3. ResMLP

3.1. Paper

"ResMLP: Feedforward networks for image classification with data-efficient training"

3.2. Overview


4. gMLP

4.1. Paper

"Pay Attention to MLPs"

4.2. Overview


5. sMLP

5.1. Paper

"Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?"

5.2. Overview

Re-Parameter Series


1. RepVGG

1.1. Paper

"RepVGG: Making VGG-style ConvNets Great Again"

1.2. Overview
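The core trick is structural re-parameterization: at inference the trained 3x3, 1x1, and identity branches are folded into one 3x3 kernel. Below is a toy numerical check of that folding with random weights; BatchNorm folding and biases are omitted, so this is only a sketch of the principle, not the repo's conversion code.

```python
import torch
import torch.nn.functional as F

def fuse_branches(w3x3, w1x1, channels):
    """Fold a 1x1 branch and an identity branch into a single 3x3 kernel."""
    w1x1_as_3x3 = F.pad(w1x1, [1, 1, 1, 1])          # place the 1x1 kernel at the center
    w_identity = torch.zeros_like(w3x3)
    for i in range(channels):                         # identity == centered per-channel delta
        w_identity[i, i, 1, 1] = 1.0
    return w3x3 + w1x1_as_3x3 + w_identity

x = torch.randn(1, 8, 32, 32)
w3 = torch.randn(8, 8, 3, 3)
w1 = torch.randn(8, 8, 1, 1)

y_multi_branch = F.conv2d(x, w3, padding=1) + F.conv2d(x, w1) + x
y_single_conv = F.conv2d(x, fuse_branches(w3, w1, 8), padding=1)
assert torch.allclose(y_multi_branch, y_single_conv, atol=1e-4)
```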


2. ACNet

2.1. Paper

"ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks"

2.2. Overview


3. Diverse Branch Block

3.1. Paper

"Diverse Branch Block: Building a Convolution as an Inception-like Unit"

3.2. Overview

Convolution Series


1. Depthwise Separable Convolution

1.1. Paper

"MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"

1.2. Overview
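A minimal sketch of the depthwise separable convolution used throughout MobileNets: a per-channel (depthwise) 3x3 convolution followed by a 1x1 pointwise convolution that mixes channels. The BatchNorm/ReLU placement follows the common conv-bn-act pattern; exact details in the repo's modules may differ.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (groups = in_channels) + pointwise 1x1 conv."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, 3, stride=stride,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                              # x: (B, in_channels, H, W)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

Compared with a dense 3x3 convolution, this factorization cuts multiply-adds roughly by a factor of `1/out_channels + 1/9`.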


2. MBConv

2.1. Paper

"Efficientnet: Rethinking attention_model scaling for convolutional neural networks"

2.2. Overview


3. Involution

3.1. Paper

"Involution: Inverting the Inherence of Convolution for Visual Recognition"

3.2. Overview


4. DynamicConv

4.1. Paper

"Dynamic Convolution: Attention over Convolution Kernels"

4.2. Overview


5. CondConv

5.1. Paper

"CondConv: Conditionally Parameterized Convolutions for Efficient Inference"

5.2. Overview

🎓 Acknowledgement
