Self-attention vector output similarities reveal how machines pay attention

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

The self-attention mechanism has significantly advanced the field of natural language processing, facilitating the development of advanced language models. Although its utility is widely acknowledged, the precise mechanisms underlying this advanced learning, and the quantitative characterization of the learning process, remain open research questions. This study introduces a new approach for quantifying information processing within the self-attention mechanism. Analysis of the BERT-12 architecture reveals that, in the final layers, the attention map focuses on sentence-separator tokens, suggesting a practical approach to text segmentation based on semantic features. From the vector space emerging from the self-attention heads, a context similarity matrix was derived, measuring the scalar product between pairs of token vectors; it reveals distinct similarities between different token-vector pairs within each head and layer. The findings demonstrate that different attention heads within an attention block focus on different linguistic characteristics, such as identifying token repetitions in a given text or recognizing a frequently occurring token together with its surrounding context. This specialization is also reflected in the distribution of distances between highly similar token vectors as the architecture progresses: the initial attention layers exhibit predominantly long-range similarities, whereas deeper layers develop increasingly short-range similarity, culminating in a preference for attention heads to build strong similarities within the same sentence. Finally, the behavior of individual heads was analyzed by examining the uniqueness of the most common tokens among their high-similarity elements; each head tends to focus on a unique token from the text and builds similarity pairs centered around it.


💡 Research Summary

This paper presents a quantitative analysis of the self‑attention mechanism in the BERT‑12 model by focusing on the output vectors of each attention head rather than the traditional attention weight matrices. The authors compute a “context similarity matrix” (CSM) for every head and layer by taking the dot product of the unnormalized 128‑dimensional token vectors produced by that head. This matrix captures how similar the representations of any two tokens are after the self‑attention operation, providing a direct view into the information that actually propagates through the transformer.
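The CSM construction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: `head_out` stands in for the per-head output vectors of a single attention head (shape `(num_tokens, head_dim)`), and random data replaces an actual BERT-12 forward pass.

```python
import numpy as np

# Stand-in for one attention head's output vectors after a forward pass.
# Assumption: shape (num_tokens, head_dim); the paper reports 128-dim vectors.
rng = np.random.default_rng(0)
num_tokens, head_dim = 10, 128
head_out = rng.standard_normal((num_tokens, head_dim))

# Context similarity matrix: CSM[i, j] is the dot product of the
# (unnormalized) output vectors for tokens i and j.
csm = head_out @ head_out.T

assert csm.shape == (num_tokens, num_tokens)
assert np.allclose(csm, csm.T)  # dot products are symmetric
```

In a real analysis, one such matrix would be computed for every head in every layer, giving a per-head view of which token pairs end up with similar representations.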

Key findings are as follows:

  1. Layer‑wise evolution of similarity patterns – In the early layers (1‑3), high‑similarity pairs are spread across the entire sequence, indicating that the model initially builds global, coarse‑grained contextual representations. In the middle layers (4‑8), the similarity matrix becomes increasingly diagonal; most high‑similarity pairs involve tokens that are close together (distance < 50 tokens), reflecting a shift toward local syntactic and semantic dependencies. In the final layers (9‑12), similarity concentrates not only on the diagonal but also on the columns and rows corresponding to sentence‑separator tokens such as `[SEP]`, consistent with the attention map's focus on separator tokens noted above.
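The distance analysis behind this finding can be sketched as follows: take a head's CSM, collect the highest-similarity off-diagonal pairs, and measure how far apart their tokens sit in the sequence. The `top_k` cutoff and the toy near-diagonal CSM are assumptions for illustration, standing in for a late-layer head.

```python
import numpy as np

def high_similarity_distances(csm, top_k=20):
    """Token-index distances of the top_k highest-similarity off-diagonal pairs."""
    n = csm.shape[0]
    rows, cols = np.triu_indices(n, k=1)      # off-diagonal upper triangle
    order = np.argsort(csm[rows, cols])[::-1][:top_k]
    return np.abs(rows[order] - cols[order])

# Toy CSM with strong near-diagonal structure, mimicking a late layer where
# similarity is short-range; similarity decays with token distance.
n = 40
idx = np.arange(n)
csm = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)

dists = high_similarity_distances(csm, top_k=10)
assert (dists == 1).all()  # top pairs are all adjacent tokens: short-range
```

Running the same function on an early-layer CSM would instead yield a broad spread of distances, matching the long-range-to-short-range progression described in item 1.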
