Why doesn't the architecture need a position embedding?
My understanding is that in Restormer, the self-attention module, Multi-Dconv Head Transposed Attention (MDTA), computes attention across the channel dimension rather than the spatial dimension, so an explicit position embedding is not needed; local spatial context is instead captured by the depth-wise convolutions inside MDTA.
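To make the channel-wise formulation concrete, here is a minimal PyTorch sketch of transposed (channel) attention in the spirit of MDTA. It is an illustration under assumptions, not the repository's actual code; names such as `ChannelAttentionSketch`, `num_heads`, and `temperature` are chosen here for clarity. Because the attention map compares channels with channels, there is no sequence of spatial tokens to which a position embedding would be added.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttentionSketch(nn.Module):
    """Minimal sketch of channel-wise (transposed) attention.

    Attention is a (C/heads x C/heads) map per head, computed over the
    channel dimension rather than over the H*W spatial positions, so no
    spatial position embedding appears anywhere.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        # 1x1 conv produces Q, K, V; a depth-wise 3x3 conv mixes local
        # spatial context before attention (the "Dconv" in MDTA).
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.qkv_dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3,
                                    padding=1, groups=dim * 3)
        # Learnable scaling of the channel-to-channel similarities.
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv_dwconv(self.qkv(x)).chunk(3, dim=1)

        # Reshape to (batch, heads, channels_per_head, H*W): each channel
        # becomes a "token" whose feature vector is the flattened spatial map.
        def split_heads(t: torch.Tensor) -> torch.Tensor:
            return t.reshape(b, self.num_heads, c // self.num_heads, h * w)

        q, k, v = map(split_heads, (q, k, v))
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)

        # Channel-to-channel attention map of shape (b, heads, C/h, C/h).
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)

        out = attn @ v                      # (b, heads, C/h, H*W)
        out = out.reshape(b, c, h, w)       # restore the spatial layout
        return self.project_out(out)


if __name__ == "__main__":
    x = torch.randn(1, 32, 16, 16)
    y = ChannelAttentionSketch(dim=32, num_heads=4)(x)
    print(y.shape)  # torch.Size([1, 32, 16, 16])
```

Note that the softmax runs over channel indices, so the attention cost is quadratic in channels and only linear in spatial resolution, which is also why Restormer scales to high-resolution images without windowing.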