Self-Attention with Relative Position Representations – Paper explained

We help you wrap your head around relative positional embeddings as they were first introduced in the "Self-Attention with Relative Position Representations" paper.

Related videos:
📺 Positional embeddings explained:
📺 Concatenated, learned positional encodings:
📺 Transformer explained:

Papers 📄:
Shaw, Peter, Jakob Uszkoreit, and Ashish Vaswani. "Self-Attention with Relative Position Representations." In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 464-468. 2018.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention Is All You Need." In Advances in Neural Information Processing Systems, pp. 5998-6008. 2017.
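To make the core idea of the Shaw et al. paper concrete, here is a minimal single-head NumPy sketch: relative distances between positions are clipped to a maximum distance k, and a learned embedding per clipped distance is added to the keys (when computing attention logits) and to the values (when computing the output). Variable names such as aK, aV, and k are my own, not from the paper's code.

```python
import numpy as np

def relative_attention(x, Wq, Wk, Wv, aK, aV, k):
    """Single-head self-attention with relative position representations.

    x:  (n, d) input sequence
    aK, aV: (2k+1, d) learned embeddings, one per clipped relative distance in [-k, k]
    """
    n, d = x.shape
    q, key, v = x @ Wq, x @ Wk, x @ Wv

    # Clipped relative distance (j - i) between every query i and key j,
    # shifted by +k so it can index into aK/aV.
    idx = np.clip(np.arange(n)[None, :] - np.arange(n)[:, None], -k, k) + k
    rK, rV = aK[idx], aV[idx]  # both (n, n, d)

    # Logits: e_ij = q_i · (k_j + a^K_ij) / sqrt(d)
    logits = (q @ key.T + np.einsum('id,ijd->ij', q, rK)) / np.sqrt(d)
    w = np.exp(logits - logits.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)  # softmax over keys

    # Output: z_i = sum_j w_ij (v_j + a^V_ij)
    return w @ v + np.einsum('ij,ijd->id', w, rV)

# Toy usage with random weights, just to show shapes.
rng = np.random.default_rng(0)
n, d, k = 5, 8, 2
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
aK, aV = rng.normal(size=(2 * k + 1, d)), rng.normal(size=(2 * k + 1, d))
print(relative_attention(x, Wq, Wk, Wv, aK, aV, k).shape)  # (5, 8)
```

Note that, unlike the sinusoidal encodings of "Attention Is All You Need", nothing position-dependent is added to the inputs themselves; all positional information enters through the relative embeddings inside the attention computation.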