The LeViT Attention Block is the attention module used in the LeViT architecture. Its main feature is that it provides positional information within each attention block, i.e. relative position information is explicitly injected into the attention mechanism. This is achieved by adding a learned attention bias to the attention maps.
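The bias-injection step can be sketched as follows. This is a minimal single-head NumPy illustration, not the paper's implementation: the function name, the `bias_table` shape, and the offset-indexing scheme are assumptions chosen to show how a learned scalar per relative (dy, dx) offset is added to the attention logits before the softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def levit_attention_bias_sketch(q, k, v, bias_table, grid_size):
    """Single-head attention with a LeViT-style learned attention bias (sketch).

    q, k, v: (N, d) arrays, where N = grid_size**2 flattened spatial positions.
    bias_table: learned scalars indexed by relative offset (hypothetical layout),
                shape (2*grid_size - 1, 2*grid_size - 1).
    """
    H = W = grid_size
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)   # (N, 2) grid coords
    rel = coords[:, None, :] - coords[None, :, :]         # (N, N, 2) offsets
    rel += grid_size - 1                                  # shift offsets to >= 0
    bias = bias_table[rel[..., 0], rel[..., 1]]           # (N, N) per-pair bias
    scale = q.shape[-1] ** -0.5
    # The attention bias is added to the scaled dot-product logits,
    # so the softmaxed attention map carries relative-position information.
    attn = softmax(q @ k.T * scale + bias, axis=-1)
    return attn @ v
```

Because the bias depends only on the relative offset between two positions, it is shared across the whole grid and adds only O(grid_size²) learned scalars per head.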
Source: LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
Task | Papers | Share
---|---|---
Anomaly Detection | 1 | 33.33%
General Classification | 1 | 33.33%
Image Classification | 1 | 33.33%
Component | Type
---|---
1x1 Convolution | Convolutions
Batch Normalization | Normalization
Convolution | Convolutions
Hard Swish | Activation Functions
Softmax | Output Functions