Multi-head self-attention: Revision history


18 March 2023

  • 13:23, 18 March 2023 · Walle (talk | contribs) · 3,303 bytes (+3,303) · Created page with "{{see also|Machine learning terms}} ==Introduction== Multi-head self-attention is a core component of the Transformer architecture, a type of neural network introduced by Vaswani et al. (2017) in the paper "Attention Is All You Need". This mechanism allows the model to capture complex relationships between the input tokens by weighing their importance with respect to each other. The multi-head aspect refers to the parallel attention computations performed on differen..."
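
The edit summary above quotes the article's description of the mechanism: each head computes attention over the same input sequence using its own query, key, and value projections, and the heads' outputs are concatenated and projected back to the model dimension. The following is a minimal NumPy sketch of that idea, following the scaled dot-product formulation of Vaswani et al. (2017); the function names, dimensions, and random weights are illustrative assumptions, not values taken from the article.

<syntaxhighlight lang="python">
# Minimal multi-head self-attention sketch (illustrative, not the article's code).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """x: (seq_len, d_model); w_q/w_k/w_v/w_o: (d_model, d_model) projections."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project inputs to queries, keys, and values, then split into heads.
    def split_heads(t):
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q = split_heads(x @ w_q)  # (num_heads, seq_len, d_head)
    k = split_heads(x @ w_k)
    v = split_heads(x @ w_v)

    # Scaled dot-product attention per head: each token weighs the importance
    # of every other token via query-key similarity.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (num_heads, seq_len, seq_len)
    weights = softmax(scores, axis=-1)
    per_head = weights @ v                                # (num_heads, seq_len, d_head)

    # Concatenate the heads and apply the output projection.
    concat = per_head.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ w_o

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model, num_heads = 5, 16, 4  # illustrative sizes
    x = rng.standard_normal((seq_len, d_model))
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
    out = multi_head_self_attention(x, w_q, w_k, w_v, w_o, num_heads)
    print(out.shape)  # (5, 16)
</syntaxhighlight>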