The code for the corresponding attention modules is here 👇👇👇
https://github.com/Him-wen/YOLOC/tree/main/docs/attention_model
-
Pytorch implementation of "Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks---arXiv 2021.05.05"
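External Attention replaces self-attention with two small linear layers that act as learnable shared memories, plus a double normalization. A minimal sketch (the model width `d_model=64` and memory size `S=32` are illustrative choices, not the paper's defaults):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExternalAttention(nn.Module):
    """Sketch of External Attention: attention is computed against two small
    learnable memories (two linear layers) instead of the input itself.
    d_model and the memory size S are illustrative choices."""
    def __init__(self, d_model=64, S=32):
        super().__init__()
        self.mk = nn.Linear(d_model, S, bias=False)  # external "key" memory
        self.mv = nn.Linear(S, d_model, bias=False)  # external "value" memory

    def forward(self, x):                            # x: (batch, tokens, d_model)
        attn = F.softmax(self.mk(x), dim=1)          # softmax over the token axis
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double normalization
        return self.mv(attn)

x = torch.randn(2, 49, 64)                           # 49 tokens of width 64
y = ExternalAttention()(x)                           # -> (2, 49, 64)
```

Because the memories are shared across all samples, the cost is linear in the number of tokens rather than quadratic.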
-
Pytorch implementation of "Attention Is All You Need---NIPS2017"
-
Pytorch implementation of "Squeeze-and-Excitation Networks---CVPR2018"
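The SE block is small enough to sketch in a few lines: squeeze spatial information into a channel descriptor, then excite each channel with a learned gate. Channel count and reduction ratio `r` below are illustrative:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal sketch of a Squeeze-and-Excitation block. The channel count
    and reduction ratio r are illustrative choices."""
    def __init__(self, c, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // r, bias=False),   # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(c // r, c, bias=False),   # excite back to c channels
            nn.Sigmoid(),                       # per-channel gates in (0, 1)
        )

    def forward(self, x):                       # x: (batch, c, h, w)
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool -> gates
        return x * w.view(b, c, 1, 1)           # rescale each channel

x = torch.randn(2, 64, 32, 32)
y = SEBlock(64)(x)                              # same shape as x
```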
-
Pytorch implementation of "Selective Kernel Networks---CVPR2019"
-
Pytorch implementation of "CBAM: Convolutional Block Attention Module---ECCV2018"
-
Pytorch implementation of "BAM: Bottleneck Attention Module---BMVC2018"
-
Pytorch implementation of "ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks---CVPR2020"
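ECA keeps SE's squeeze-and-gate structure but drops the dimensionality reduction: a single 1-D convolution over the pooled channel descriptor captures local cross-channel interaction. The paper derives the kernel size adaptively from the channel count; `k=3` here is a fixed illustrative choice:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Sketch of ECA channel attention: a 1-D convolution over the pooled
    channel descriptor, with no dimensionality reduction. k=3 is an
    illustrative fixed kernel size."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                       # x: (batch, c, h, w)
        b, c, _, _ = x.shape
        y = x.mean(dim=(2, 3)).view(b, 1, c)    # global average pool -> (b, 1, c)
        y = torch.sigmoid(self.conv(y))         # local cross-channel interaction
        return x * y.view(b, c, 1, 1)           # per-channel rescaling

x = torch.randn(2, 64, 32, 32)
y = ECA()(x)                                    # same shape as x
```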
-
Pytorch implementation of "Dual Attention Network for Scene Segmentation---CVPR2019"
-
Pytorch implementation of "EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network---arXiv 2021.05.30"
-
Pytorch implementation of "ResT: An Efficient Transformer for Visual Recognition---arXiv 2021.05.28"
-
Pytorch implementation of "SA-NET: SHUFFLE ATTENTION FOR DEEP CONVOLUTIONAL NEURAL NETWORKS---ICASSP 2021"
-
Pytorch implementation of "MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning---arXiv 2019.11.17"
-
Pytorch implementation of "Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks---arXiv 2019.05.23"
-
Pytorch implementation of "A2-Nets: Double Attention Networks---NIPS2018"
-
Pytorch implementation of "An Attention Free Transformer---ICLR2021 (new work from Apple)"
-
Pytorch implementation of "VOLO: Vision Outlooker for Visual Recognition---arXiv 2021.06.24" [Paper Explanation]
-
Pytorch implementation of "Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition---arXiv 2021.06.23" [Paper Explanation]
-
Pytorch implementation of "CoAtNet: Marrying Convolution and Attention for All Data Sizes---arXiv 2021.06.09" [Paper Explanation]
-
Pytorch implementation of "Scaling Local Self-Attention for Parameter Efficient Visual Backbones---CVPR2021 Oral" [Paper Explanation]
-
Pytorch implementation of "Polarized Self-Attention: Towards High-quality Pixel-wise Regression---arXiv 2021.07.02" [Paper Explanation]
-
Pytorch implementation of "Contextual Transformer Networks for Visual Recognition---arXiv 2021.07.26" [Paper Explanation]
-
Pytorch implementation of "Residual Attention: A Simple but Effective Method for Multi-Label Recognition---ICCV2021"
-
Pytorch implementation of "S²-MLPv2: Improved Spatial-Shift MLP Architecture for Vision---arXiv 2021.08.02" [Paper Explanation]
-
Pytorch implementation of "Global Filter Networks for Image Classification---arXiv 2021.07.01"
-
Pytorch implementation of "Rotate to Attend: Convolutional Triplet Attention Module---WACV 2021"
-
Pytorch implementation of "Coordinate Attention for Efficient Mobile Network Design---CVPR 2021"
-
Pytorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---arXiv 2021.10.05"
-
Pytorch implementation of "Non-deep Networks---arXiv 2021.10.20"
-
Pytorch implementation of "UFO-ViT: High Performance Linear Vision Transformer without Softmax---arXiv 2021.09.29"
-
Pytorch implementation of "Separable Self-attention for Mobile Vision Transformers---arXiv 2022.06.06"
-
Pytorch implementation of "Deep Residual Learning for Image Recognition---CVPR2016 Best Paper"
-
Pytorch implementation of "Aggregated Residual Transformations for Deep Neural Networks---CVPR2017"
-
Pytorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer---arXiv 2021.10.05"
-
Pytorch implementation of "Patches Are All You Need?---ICLR2022 (Under Review)"
-
Pytorch implementation of "RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition---arXiv 2021.05.05"
-
Pytorch implementation of "MLP-Mixer: An all-MLP Architecture for Vision---arXiv 2021.05.17"
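MLP-Mixer alternates a token-mixing MLP applied across the patch axis with a channel-mixing MLP across features, so no attention or convolution is needed. A sketch of one block (all sizes are illustrative, not the paper's configurations):

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Sketch of one MLP-Mixer block: a token-mixing MLP across patches,
    then a channel-mixing MLP across features. Sizes are illustrative."""
    def __init__(self, n_tokens=49, dim=64, token_hidden=32, channel_hidden=128):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, n_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                           # x: (batch, n_tokens, dim)
        y = self.norm1(x).transpose(1, 2)           # (batch, dim, n_tokens)
        x = x + self.token_mlp(y).transpose(1, 2)   # mix across patches
        return x + self.channel_mlp(self.norm2(x))  # mix across channels

x = torch.randn(2, 49, 64)
y = MixerBlock()(x)                                 # -> (2, 49, 64)
```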
-
Pytorch implementation of "ResMLP: Feedforward networks for image classification with data-efficient training---arXiv 2021.05.07"
-
Pytorch implementation of "Pay Attention to MLPs---arXiv 2021.05.17"
-
Pytorch implementation of "Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?---arXiv 2021.09.12"
-
Pytorch implementation of "RepVGG: Making VGG-style ConvNets Great Again---CVPR2021"
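The trick these re-parameterization works share: a multi-branch training-time block (in RepVGG, 3×3 + 1×1 + identity) collapses into a single convolution at inference, by linearity of convolution. A sketch with BatchNorm fusion omitted and an illustrative channel count:

```python
import torch
import torch.nn.functional as F

# Sketch of structural re-parameterization: a 3x3 branch, a 1x1 branch and an
# identity branch are summed during training, then folded into one 3x3 kernel
# for inference. BatchNorm fusion is omitted; the channel count is illustrative.
c = 8
w3 = torch.randn(c, c, 3, 3)                  # 3x3 branch
w1 = torch.randn(c, c, 1, 1)                  # 1x1 branch
wid = torch.zeros(c, c, 3, 3)                 # identity as a centered 3x3 kernel
for i in range(c):
    wid[i, i, 1, 1] = 1.0

w_fused = w3 + F.pad(w1, [1, 1, 1, 1]) + wid  # merge all branches kernel-wise

x = torch.randn(2, c, 16, 16)
y_multi = F.conv2d(x, w3, padding=1) + F.conv2d(x, w1) + x
y_fused = F.conv2d(x, w_fused, padding=1)     # same output up to float error
```

The fused single-branch network keeps the multi-branch accuracy while running as a plain VGG-style stack.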
-
Pytorch implementation of "ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks---ICCV2019"
-
Pytorch implementation of "Diverse Branch Block: Building a Convolution as an Inception-like Unit---CVPR2021"
-
Pytorch implementation of "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications---CVPR2017"
-
Pytorch implementation of "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks---PMLR2019"
-
Pytorch implementation of "Involution: Inverting the Inherence of Convolution for Visual Recognition---CVPR2021"
-
Pytorch implementation of "Dynamic Convolution: Attention over Convolution Kernels---CVPR2020 Oral"
-
Pytorch implementation of "CondConv: Conditionally Parameterized Convolutions for Efficient Inference---NeurIPS2019"
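CondConv and Dynamic Convolution both aggregate several expert kernels with input-dependent routing weights, so each example effectively gets its own kernel. A CondConv-style sketch (expert count, routing activation, and all sizes are illustrative; the batched conv uses the grouped-convolution trick):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConv2d(nn.Module):
    """Sketch of a conditionally parameterized convolution: per-example
    routing weights mix several expert kernels into one kernel before the
    convolution. Expert count and sizes are illustrative choices."""
    def __init__(self, cin, cout, k=3, experts=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(experts, cout, cin, k, k) * 0.02)
        self.route = nn.Linear(cin, experts)    # routing from pooled features
        self.k = k

    def forward(self, x):                       # x: (batch, cin, h, w)
        b, cin, h, w = x.shape
        r = torch.sigmoid(self.route(x.mean(dim=(2, 3))))       # (b, experts)
        kern = torch.einsum('be,eocxy->bocxy', r, self.weight)  # per-example kernel
        kern = kern.reshape(-1, cin, self.k, self.k)            # (b*cout, cin, k, k)
        out = F.conv2d(x.reshape(1, -1, h, w), kern,            # grouped-conv trick:
                       padding=self.k // 2, groups=b)           # one group per example
        return out.reshape(b, -1, h, w)

x = torch.randn(2, 16, 8, 8)
y = CondConv2d(16, 32)(x)                       # -> (2, 32, 8, 8)
```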