
๐Ÿ“ Original Info

  • Title:
  • ArXiv ID: 2512.22473
  • Date:
  • Authors: Unknown

๐Ÿ“ Abstract

Our companion paper (Paper I) establishes that neural sequence models can implement exact Bayesian inference, with success depending on whether the architecture realizes the required inference primitives: belief accumulation, belief transport, and random-access binding. But how does gradient descent learn to implement these primitives? We provide a systematic first-order analysis of how cross-entropy training reshapes attention scores and value vectors. Our core result is an advantage-based routing gradient for attention scores, $\partial L/\partial s_{ij} = \alpha_{ij}(b_{ij} - \mathbb{E}_{\alpha_i}[b])$ with $b_{ij} = u_i^\top v_j$, coupled with a responsibility-weighted update for values, $\partial L/\partial v_j = \sum_i \alpha_{ij} u_i$, where $u_i$ is the upstream gradient at position $i$ and $\alpha_{ij}$ are the attention weights. These equations induce a positive feedback loop: queries route more strongly to values that are above-average for their error signal, and those values are pulled toward the queries that use them. We show that this coupled specialization behaves like a two-timescale EM procedure: attention weights implement an E-step (soft responsibilities), while values implement an M-step (responsibility-weighted prototype updates). This EM-like dynamic is what enables the inference primitives: belief accumulation emerges from responsibility-weighted value updates; belief transport emerges from content-dependent routing that tracks evolving states; and random-access binding emerges from the query-key matching that allows retrieval by content. Through controlled simulations, we demonstrate that the same gradient dynamics that minimize cross-entropy also sculpt the low-dimensional Bayesian manifolds observed in Paper I. We further propose an abstract framework for content-based value routing that encompasses both attention and selective state-space models, conjecturing that advantage-based routing dynamics emerge in any architecture satisfying this framework, explaining why transformers and Mamba develop Bayesian geometry while LSTMs do not.

* Currently at Google DeepMind. Work performed while at Dream Sports.

📄 Full Content

Our companion paper (Paper I) establishes that neural sequence models can implement exact Bayesian inference-filtering and hypothesis elimination-in controlled "Bayesian wind tunnels."

The key finding is that success depends on whether the architecture realizes the required inference primitives: belief accumulation (integrating evidence), belief transport (propagating beliefs through dynamics), and random-access binding (retrieving hypotheses by content). Transformers realize all three primitives; Mamba realizes accumulation and transport; LSTMs realize only accumulation of static sufficient statistics; MLPs realize none. This taxonomy explains the empirical pattern: each architecture succeeds precisely on tasks demanding only the primitives it can implement.

But how does gradient descent learn to implement these primitives? Why does cross-entropy training produce the geometric structures required for Bayesian inference: orthogonal key bases, progressive query alignment, low-dimensional value manifolds? This paper serves as Lemma 2 (Mechanism) in a three-part argument. Paper I establishes existence: transformers can implement exact Bayesian inference, and different architectures realize different primitives. This paper establishes mechanism: when an architecture supports a given primitive, standard cross-entropy training induces gradient dynamics that reliably construct the corresponding probabilistic computation. Paper III establishes scaling: these mechanisms persist in production LLMs.

We analyze a single-head attention block trained with cross-entropy and derive all first-order gradients with respect to scores $s_{ij}$, queries $q_i$, keys $k_j$, and values $v_j$. We show that the resulting gradient dynamics implement an implicit EM-like algorithm, and that this EM structure is precisely what enables the inference primitives to emerge.

Our main contributions are:

(1) Complete first-order analysis of attention gradients. We derive closed-form expressions for $\partial L/\partial s_{ij}$, $\partial L/\partial q_i$, $\partial L/\partial k_j$, and $\partial L/\partial v_j$ under cross-entropy loss, in a form that makes their geometric meaning transparent.

(2) Advantage-based routing law. We show that score gradients satisfy

$$\frac{\partial L}{\partial s_{ij}} = \alpha_{ij}\big(b_{ij} - \mathbb{E}_{\alpha_i}[b]\big),$$

where $b_{ij} = u_i^\top v_j$ is a compatibility term. Since gradient descent subtracts the gradient, scores decrease for positions whose compatibility exceeds the current attention-weighted mean (positive gradient), and increase for positions below average (negative gradient).

(3) Responsibility-weighted value updates and specialization. Values evolve according to

$$\frac{\partial L}{\partial v_j} = \sum_i \alpha_{ij}\, u_i,$$

a responsibility-weighted average of upstream gradients. This induces a positive feedback loop: queries route to values that help them; those values move toward their users, reinforcing routing and creating specialization.

(4) Two-timescale EM interpretation. We show that these dynamics implement an implicit EM-like algorithm: attention weights act as soft responsibilities (E-step), values as prototypes updated under those responsibilities (M-step), and queries/keys as parameters of the latent assignment model. Attention often stabilizes early, while values continue to refine: a frame-precision dissociation that matches our empirical observations in wind tunnels and large models.

(5) Toy experiments and EM vs. SGD comparison. In synthetic tasks, including a sticky Markov-chain sequence, we compare an EM-motivated learning-rate schedule (with a larger learning rate for values) to standard SGD. The EM-like schedule reaches low loss, high accuracy, and sharp predictive entropy significantly faster; SGD converges to similar solutions but with slower and more diffuse routing. PCA visualizations of value trajectories reveal emergent low-dimensional manifolds.

Taken together with [1], our results provide a unified story: gradient descent ⇒ Bayesian manifolds ⇒ in-context inference.

Clarification on "Bayesian inference." Throughout this paper, "Bayesian inference" refers to the Bayesian posterior predictive over latent task variables, not a posterior over network weights. Specifically, we address filtering and hypothesis elimination in tasks whose posteriors factorize sequentially, not general Bayesian model selection. We show that cross-entropy training sculpts geometry that implements these computations over in-context hypotheses.

We analyze a single attention head operating on a sequence of length $T$. Indices $i, j, k$ run from 1 to $T$ unless stated otherwise.

At each position $j$ we have an input embedding $x_j \in \mathbb{R}^{d_x}$. Linear projections produce queries, keys, and values:

$$q_j = W_Q x_j, \qquad k_j = W_K x_j, \qquad v_j = W_V x_j.$$

Attention scores, weights, and context vectors are:

$$s_{ij} = q_i^\top k_j, \qquad \alpha_{ij} = \frac{\exp(s_{ij})}{\sum_k \exp(s_{ik})}, \qquad g_i = \sum_j \alpha_{ij}\, v_j.$$

The output of the head is passed through an output projection to logits:

$$z_i = W_O\, g_i,$$

and we train with the cross-entropy loss

$$L = -\sum_i \log p_i[y_i], \qquad p_i = \mathrm{softmax}(z_i),$$

where $y_i$ is the target class at position $i$.

For compactness we define:

$$u_i := \frac{\partial L}{\partial g_i}, \qquad b_{ij} := u_i^\top v_j, \qquad \mathbb{E}_{\alpha_i}[b] := \sum_j \alpha_{ij}\, b_{ij}.$$

Here $u_i$ is the upstream gradient at position $i$, indicating how $g_i$ should move to reduce the loss. The scalar $b_{ij}$ measures compatibility between the error signal $u_i$ and value $v_j$, and $\mathbb{E}_{\alpha_i}[b]$ is the attention-weighted mean compatibility for query $i$.

We now derive all relevant gradients without skipping steps, focusing on forms that reveal their geometric meaning.

For each $i$, the cross-entropy gradient with respect to the logits is

$$\frac{\partial L}{\partial z_i} = p_i - e_{y_i},$$

where $e_{y_i}$ is the one-hot vector for the target class. This propagates back to the context $g_i$ as

$$u_i = \frac{\partial L}{\partial g_i} = W_O^\top\,(p_i - e_{y_i}).$$

Intuitively, $-u_i$ is the direction in value space along which moving $g_i$ would increase the logit of the correct token and decrease the loss.
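To keep the notation concrete, the following minimal NumPy sketch (an illustration of the setup above, not the authors' code; all variable names are ours) runs the single-head forward pass on random data and computes $u_i$, $b_{ij}$, and $\mathbb{E}_{\alpha_i}[b]$. Later snippets continue from these variables.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_x, d_k, d_v, C = 5, 3, 2, 2, 3             # toy sizes (as in Section 7.1)

X = rng.normal(size=(T, d_x))                    # inputs x_j
y = rng.integers(C, size=T)                      # target classes y_i
W_Q = 0.1 * rng.normal(size=(d_k, d_x))
W_K = 0.1 * rng.normal(size=(d_k, d_x))
W_V = 0.1 * rng.normal(size=(d_v, d_x))
W_O = 0.1 * rng.normal(size=(C, d_v))

def softmax(Z, axis=-1):
    Z = Z - Z.max(axis=axis, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

# Forward pass: projections, scores, attention weights, contexts, logits
Q, K, V = X @ W_Q.T, X @ W_K.T, X @ W_V.T        # q_i, k_j, v_j (one per row)
S = Q @ K.T                                      # s_ij = q_i^T k_j
A = softmax(S, axis=1)                           # alpha_ij
G = A @ V                                        # g_i = sum_j alpha_ij v_j
P = softmax(G @ W_O.T, axis=1)                   # p_i = softmax(W_O g_i)

# Compact quantities used throughout the derivation
dL_dz = P.copy()
dL_dz[np.arange(T), y] -= 1.0                    # dL/dz_i = p_i - e_{y_i}
U  = dL_dz @ W_O                                 # u_i = dL/dg_i = W_O^T (p_i - e_{y_i})
B  = U @ V.T                                     # b_ij = u_i^T v_j
Eb = (A * B).sum(axis=1, keepdims=True)          # E_{alpha_i}[b]
```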

Since $g_i = \sum_j \alpha_{ij} v_j$, we have

$$\frac{\partial g_i}{\partial v_j} = \alpha_{ij}\, I.$$

Thus

$$\frac{\partial L}{\partial v_j} = \sum_i \alpha_{ij}\, u_i.$$

Geometrically, $v_j$ is pulled toward an attention-weighted average of upstream error vectors from all queries that use it.

Because $g_i$ depends linearly on $\alpha_{ij}$, we obtain

$$\frac{\partial L}{\partial \alpha_{ij}} = u_i^\top v_j = b_{ij}.$$

The scalar $b_{ij} = u_i^\top v_j$ measures the instantaneous compatibility between the upstream gradient $u_i$ and value vector $v_j$. Since $\partial L/\partial \alpha_{ij} = b_{ij}$, increasing $\alpha_{ij}$ increases the loss when $b_{ij} > 0$ and decreases the loss when $b_{ij} < 0$. Thus, $-b_{ij}$ represents the immediate marginal benefit (in loss reduction) of allocating additional attention mass to position $j$ for query $i$.

For fixed $i$, the softmax Jacobian is

$$\frac{\partial \alpha_{ik}}{\partial s_{ij}} = \alpha_{ik}\,(\delta_{kj} - \alpha_{ij}).$$

Applying the chain rule,

$$\frac{\partial L}{\partial s_{ij}} = \sum_k \frac{\partial L}{\partial \alpha_{ik}}\, \frac{\partial \alpha_{ik}}{\partial s_{ij}} = \sum_k b_{ik}\, \alpha_{ik}\,(\delta_{kj} - \alpha_{ij}).$$

Hence

$$\frac{\partial L}{\partial s_{ij}} = \alpha_{ij}\big(b_{ij} - \mathbb{E}_{\alpha_i}[b]\big).$$

It is convenient to define an advantage quantity with the sign chosen to align with gradient descent:

$$A_{ij} := -\big(b_{ij} - \mathbb{E}_{\alpha_i}[b]\big) = \mathbb{E}_{\alpha_i}[b] - b_{ij}.$$

Under this definition, $A_{ij} > 0$ indicates that increasing $s_{ij}$ (and hence $\alpha_{ij}$) would reduce the loss. Gradient descent therefore increases attention scores toward positions with positive advantage and decreases them otherwise. This is an advantage-style gradient: since gradient descent subtracts the gradient, scores decrease for positions whose compatibility $b_{ik}$ exceeds the attention-weighted average (positive gradient) and increase for positions below average (negative gradient). Since high $b_{ik} = u_i^\top v_k$ indicates that $v_k$ aligns with the error direction, this reallocates attention away from harmful values toward helpful ones.
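Continuing the NumPy sketch above, the closed-form routing gradient and advantage can be checked against a finite-difference estimate of the loss (again purely illustrative):

```python
# Closed-form routing gradient and advantage (continuing the earlier sketch)
dL_dS = A * (B - Eb)                  # dL/ds_ij = alpha_ij (b_ij - E_{alpha_i}[b])
Adv   = Eb - B                        # A_ij = E_{alpha_i}[b] - b_ij

def total_loss(S_):
    A_ = softmax(S_, axis=1)
    P_ = softmax((A_ @ V) @ W_O.T, axis=1)
    return -np.log(P_[np.arange(T), y]).sum()

# Finite-difference check on a single score entry
i, j, eps = 1, 3, 1e-6
S_eps = S.copy()
S_eps[i, j] += eps
print(dL_dS[i, j], (total_loss(S_eps) - total_loss(S)) / eps)  # should closely agree
```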

For the score $s_{ij} = q_i^\top k_j$ we have $\partial s_{ij}/\partial q_i = k_j$ and, similarly, for fixed $j$,

$$\frac{\partial s_{ij}}{\partial k_j} = q_i.$$

Thus,

$$\frac{\partial L}{\partial q_i} = \sum_j \frac{\partial L}{\partial s_{ij}}\, k_j, \qquad \frac{\partial L}{\partial k_j} = \sum_i \frac{\partial L}{\partial s_{ij}}\, q_i.$$

Finally, gradients propagate to $W_Q$ and $W_K$ as

$$\frac{\partial L}{\partial W_Q} = \sum_i \frac{\partial L}{\partial q_i}\, x_i^\top, \qquad \frac{\partial L}{\partial W_K} = \sum_j \frac{\partial L}{\partial k_j}\, x_j^\top.$$

For learning rate $\eta > 0$, gradient descent gives

$$\Delta s_{ij} = -\eta\, \alpha_{ij}\big(b_{ij} - \mathbb{E}_{\alpha_i}[b]\big) = \eta\, \alpha_{ij}\, A_{ij}, \qquad \Delta v_j = -\eta \sum_i \alpha_{ij}\, u_i,$$

with analogous first-order updates for $q_i$, $k_j$, and the projection matrices.
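Assembled into one manual step, and still continuing the same sketch, the closed-form gradients give the following update (the chain rule through the linear projections is included for completeness):

```python
# One manual gradient-descent step from the closed-form expressions
eta = 0.1

dL_dV  = A.T @ U                      # dL/dv_j = sum_i alpha_ij u_i
dL_dQ  = dL_dS @ K                    # dL/dq_i = sum_j (dL/ds_ij) k_j
dL_dK  = dL_dS.T @ Q                  # dL/dk_j = sum_i (dL/ds_ij) q_i

dL_dWQ = dL_dQ.T @ X                  # q_i = W_Q x_i  =>  dL/dW_Q = sum_i (dL/dq_i) x_i^T
dL_dWK = dL_dK.T @ X
dL_dWV = dL_dV.T @ X
dL_dWO = dL_dz.T @ G                  # z_i = W_O g_i  =>  dL/dW_O = sum_i (dL/dz_i) g_i^T

W_Q -= eta * dL_dWQ
W_K -= eta * dL_dWK
W_V -= eta * dL_dWV
W_O -= eta * dL_dWO
```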

The remainder of the paper analyzes the coupled dynamics induced by these updates.

We now unpack the implications of the gradient flows in Section 3, focusing on the interaction between routing (via scores and attention) and content (via values).

Equation (23) shows that, for a fixed query $i$, attention reallocates mass away from positions whose values are worse than average for $u_i$ (i.e., $b_{ik} > \mathbb{E}_{\alpha_i}[b]$) and toward those that are better than average. Recall the advantage $A_{ij} := -(b_{ij} - \mathbb{E}_{\alpha_i}[b])$ defined earlier: gradient descent increases attention scores toward positions with positive advantage, implementing an advantage-based routing rule.

Define the attention-weighted upstream signal for column $j$:

$$\bar{u}_j := \sum_i \alpha_{ij}\, u_i.$$

From (33), $\partial L/\partial v_j = \bar{u}_j$, so the gradient step is

$$\Delta v_j = -\eta\, \bar{u}_j,$$

and $v_j$ moves in the direction that jointly benefits all queries that attend to it, weighted by their attention. In particular, if a small set of queries uses $j$ heavily, $v_j$ becomes a prototype that serves that subset. This is the core specialization mechanism: values adapt to the error landscape created by their users.

Since $g_i = \sum_k \alpha_{ik} v_k$, an infinitesimal change $\delta v_j$ induces

$$\delta g_i = \alpha_{ij}\, \delta v_j.$$

Using $u_i = \partial L/\partial g_i$, the first-order change in loss from this perturbation is

$$\delta L = \sum_i u_i^\top \delta g_i = \sum_i \alpha_{ij}\, u_i^\top \delta v_j,$$

and for the gradient step $\delta v_j = -\eta\, \bar{u}_j$ this becomes

$$\delta L = -\eta \sum_{i,r} \alpha_{ij}\, \alpha_{rj}\, u_i^\top u_r.$$

Interpretation. The factor $\alpha_{ij}$ says: the more query $i$ relies on value $j$, the more $v_j$'s update affects $g_i$. The column weights $\{\alpha_{rj}\}$ aggregate contributions from all queries that share $j$. The inner products $u_i^\top u_r$ measure whether their error directions are aligned (helpful) or opposed (conflicting). If many $u_r$ align with $u_i$, then $v_j$ can move in a direction that helps them all. If some are anti-aligned, $v_j$ must compromise, and specialization may become harder.

Dominant-query approximation. If query $i$ is the primary user of key $j$ (i.e., $\alpha_{ij} \gg \alpha_{rj}$ for $r \neq i$), then $\sum_r \alpha_{rj} u_r \approx \alpha_{ij} u_i$ and

$$\delta L \approx -\eta\, \alpha_{ij}^2\, \|u_i\|^2.$$

Thus, under this approximation, the update decreases the loss for query $i$ by an amount proportional to $\alpha_{ij}^2 \|u_i\|^2$: the stronger the attention and the larger the error, the bigger the immediate improvement.

Equations (23) and (33) together form a feedback loop:

(1) If $v_j$ is particularly helpful to $i$ (large negative $b_{ij} - \mathbb{E}_{\alpha_i}[b]$, meaning $v_j$ points away from the error direction $u_i$), then $s_{ij}$ increases and $\alpha_{ij}$ grows.

(2) Larger $\alpha_{ij}$ gives $u_i$ more weight in $\bar{u}_j$, so $v_j$ moves further away from $u_i$ (since $\Delta v_j = -\eta\, \bar{u}_j$).

(3) As $v_j$ moves away from $u_i$, $b_{ij} = u_i^\top v_j$ decreases further, increasing $A_{ij}$ and reinforcing step 1.

Repeated application yields specialized value vectors that serve distinct subsets of queries, while attention concentrates on these specialized prototypes.
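The feedback loop can be made visible by iterating only these two coupled updates with queries and keys frozen. In this continuation of the earlier sketch, mean attention entropy typically falls well below the uniform value $\log T$ as values specialize; the numbers are illustrative, not results from the paper.

```python
# Iterate the routing and value updates with q_i, k_j frozen to expose the feedback loop
S_t, V_t = S.copy(), V.copy()
lr = 0.5
for step in range(200):
    A_t  = softmax(S_t, axis=1)
    P_t  = softmax((A_t @ V_t) @ W_O.T, axis=1)
    dz_t = P_t.copy()
    dz_t[np.arange(T), y] -= 1.0
    U_t  = dz_t @ W_O
    B_t  = U_t @ V_t.T
    Eb_t = (A_t * B_t).sum(axis=1, keepdims=True)
    S_t -= lr * A_t * (B_t - Eb_t)    # advantage-based routing update
    V_t -= lr * (A_t.T @ U_t)         # responsibility-weighted value update

entropy = -(A_t * np.log(A_t + 1e-12)).sum(axis=1)
print(entropy.mean(), np.log(T))      # compare against the uniform-attention entropy log T
```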

The coupled dynamics derived above admit a useful analogy to Expectation-Maximization (EM), not as a literal optimization of an explicit latent-variable likelihood, but as a mechanistic correspondence between gradient flows and responsibility-weighted updates. Attention weights behave like soft responsibilities over latent sources, while value vectors act as prototypes updated under those responsibilities. Unlike classical EM, the updates here are driven by upstream gradients rather than observed data, and no standalone likelihood over values is optimized. We emphasize that this is an interpretive framework: we do not derive a surrogate objective with guaranteed monotonic improvement. The value lies in the geometric intuition it provides for understanding specialization dynamics.

For a fixed query $i$, the attention weights can be viewed as the posterior responsibilities of a latent variable $c_i$ indicating which source position $j$ is active:

$$P(c_i = j) = \alpha_{ij} = \frac{\exp(s_{ij})}{\sum_k \exp(s_{ik})}.$$

The score gradient

$$\frac{\partial L}{\partial s_{ij}} = \alpha_{ij}\big(b_{ij} - \mathbb{E}_{\alpha_i}[b]\big)$$

reallocates responsibility toward positions with above-average advantage (equivalently, below-average compatibility $b_{ij}$ with the error signal), just as the E-step in EM increases responsibilities for mixture components that explain the data well.

Given responsibilities $\alpha_{ij}$, the value update

$$\Delta v_j = -\eta \sum_i \alpha_{ij}\, u_i$$

aggregates feedback from all assigned queries, weighted by their responsibilities. This is directly analogous to the M-step update for cluster means in a Gaussian mixture model,

$$\mu_j \leftarrow \frac{\sum_i \gamma_{ij}\, x_i}{\sum_i \gamma_{ij}},$$

with $u_i$ playing the role of residuals that need to be explained. A key difference from classical mixture-model EM is the role played by $u_i$. In standard EM, centroids are updated using observed data points. Here, $u_i$ represents the residual error signal backpropagated from the loss. Values therefore move to better explain the error geometry induced by the current routing structure, rather than to maximize a likelihood over inputs. The analogy is structural rather than variational.
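The structural analogy can be written side by side in code. Here $\gamma$ denotes generic responsibilities and the GMM line is included only for comparison; both are illustrative and reuse the variables of the earlier sketch.

```python
# Structural analogy only: responsibilities gamma play the role of attention weights
gamma = A                                         # soft responsibilities (E-step quantities)

# GMM M-step: cluster means are responsibility-weighted averages of the *data* x_i
mu_gmm = (gamma.T @ X) / gamma.sum(axis=0)[:, None]

# Attention "M-step": values take responsibility-weighted steps along the *residuals* u_i
V_new = V - 0.1 * (gamma.T @ U)
```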

In classical EM, the E-step and M-step are separated: responsibilities are recomputed holding parameters fixed, then parameters are updated holding responsibilities fixed. In transformers trained with SGD, these steps are interleaved and noisy, but the first-order picture still resembles EM:

• E-step (routing): Score gradients adjust $q_i$ and $k_j$ so that attention patterns allocate responsibility to helpful positions. In practice, attention often stabilizes relatively early.

• M-step (values): Value vectors continue to move under residual error signals, refining the content manifold even after attention appears frozen. This continues until calibration and likelihood converge.

Both updates occur simultaneously from each gradient computation. The "two-timescale" behavior emerges from relative gradient magnitudes and coupling structure, not from explicit alternation. We can make this connection more explicit by considering a "manual EM" algorithm that alternates:

(1) E-like step: Forward pass to compute $\alpha_{ij}$ using the current $q_i$, $k_j$.

(2) M-like step on values: Update $v_j$ with a closed-form gradient step $\Delta v_j = -\eta \sum_i \alpha_{ij}\, u_i$ while freezing $q_i$, $k_j$.

(3) Small SGD step on $W_Q$, $W_K$, $W_O$: Update the nonlinear projection parameters with a small gradient step.
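A compact sketch of this manual EM-like procedure is given below. It re-implements the three steps above for the single-head model of Section 3 (values are parameterized through $W_V$, so the value step is applied to $W_V$); it is a hypothetical helper, not the code used in Section 7.

```python
import numpy as np

def softmax(Z, axis=-1):
    Z = Z - Z.max(axis=axis, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

def manual_em_step(X, y, W_Q, W_K, W_V, W_O, eta_V=0.1, eta_rout=0.01):
    """One EM-like update: E-like forward pass, closed-form value step with frozen
    routing, and a small SGD step on W_Q, W_K, W_O."""
    T = len(y)
    Q, K, V = X @ W_Q.T, X @ W_K.T, X @ W_V.T
    A = softmax(Q @ K.T, axis=1)                  # (1) E-like step: responsibilities alpha_ij
    G = A @ V
    P = softmax(G @ W_O.T, axis=1)
    dz = P.copy()
    dz[np.arange(T), y] -= 1.0
    U  = dz @ W_O                                 # upstream gradients u_i
    B  = U @ V.T
    dS = A * (B - (A * B).sum(axis=1, keepdims=True))
    W_V = W_V - eta_V * (A.T @ U).T @ X           # (2) M-like step on values (larger LR)
    W_Q = W_Q - eta_rout * (dS @ K).T @ X         # (3) small SGD step on routing/output
    W_K = W_K - eta_rout * (dS.T @ Q).T @ X
    W_O = W_O - eta_rout * dz.T @ G
    return W_Q, W_K, W_V, W_O
```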

In Section 7.2, we compare such an EM-like schedule to standard SGD on a sticky Markov-chain task and find that both converge to similar solutions, but the EM-like updates reach low loss and sharp, focused attention much faster. Our analysis therefore lives at the EM/SGD level. However, our companion work [1] shows that the point-estimate parameters learned in this way support Bayesian computation in representation space: value manifolds, key frames, and query trajectories implement Bayesian belief updates in context. The present paper explains why cross-entropy and gradient descent naturally create these structures.

We now connect the gradient dynamics derived above to the geometric structures observed in Bayesian wind tunnels and production models.

In wind-tunnel experiments [1], which directly measure the geometric predictions made here:

• Early in training, attention entropy decreases and attention focuses on relevant hypotheses.

• Later in training, attention patterns appear stable, but value representations unfurl along a smooth curve; PC1 explains 84-90% of the variance and is strongly correlated with posterior entropy ($|r| > 0.9$).

• Calibration error continues to drop even as attention maps remain visually unchanged.

Once score gradients have largely equalized compatibility ($b_{ij} \approx \mathbb{E}_{\alpha_i}[b]$), $\partial L/\partial s_{ij} \approx 0$ and attention freezes. But unless the loss is exactly zero, $u_i$ remains non-zero and the value updates

$$\Delta v_j = -\eta \sum_i \alpha_{ij}\, u_i$$

continue, gradually aligning $v_j$ along the principal directions of the residual error landscape. Under repeated updates, values come to lie on low-dimensional manifolds parameterized by downstream functionals such as posterior entropy.

The gradients to $k_j$ show that keys are shaped by advantage signals:

$$\frac{\partial L}{\partial k_j} = \sum_i \frac{\partial L}{\partial s_{ij}}\, q_i = -\sum_i \alpha_{ij}\, A_{ij}\, q_i.$$

If different subsets of queries consistently find different keys helpful, the corresponding gradient contributions push keys apart in $k$-space, encouraging approximate orthogonality of distinct hypothesis axes. Our wind-tunnel paper measures exactly this orthogonality: mean off-diagonal cosine similarity drops to 0.05-0.06, versus 0.08 for random vectors, a statistically significant reduction across multiple seeds ($p < 0.001$).
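One way to compute the key-orthogonality statistic referenced here is the mean absolute off-diagonal cosine similarity between key vectors; this is a sketch, and the exact measurement protocol of the wind-tunnel paper may differ.

```python
import numpy as np

def mean_offdiag_cosine(K, eps=1e-12):
    """Mean absolute off-diagonal cosine similarity between the rows of K (one key per row)."""
    Kn = K / (np.linalg.norm(K, axis=1, keepdims=True) + eps)
    C = Kn @ Kn.T
    off_diag = C[~np.eye(len(K), dtype=bool)]
    return np.abs(off_diag).mean()

# Example: a random-vector baseline of the same shape for comparison
rng = np.random.default_rng(0)
print(mean_offdiag_cosine(rng.normal(size=(8, 64))))
```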

The empirically observed frame-precision dissociation-attention stably defining a hypothesis frame while calibration continues to improve-is now easy to interpret:

• The frame is defined by $q_i$, $k_j$, and the attention weights $\alpha_{ij}$. This stabilizes when advantage signals equalize and $\partial L/\partial s_{ij} \approx 0$.

• Precision lives in the arrangement of the values $v_j$ along manifolds that support fine-grained likelihood modeling. This continues to refine as long as $u_i$ is non-zero.

Thus, a late-stage transformer has a fixed Bayesian frame (hypothesis axes and routing) but continues to sharpen its posterior geometry.

We now illustrate the theory with controlled simulations. All experiments use a single-head, single-layer attention block without residual connections or LayerNorm to keep the dynamics transparent.

We first consider a small toy setup with $T = 5$, $d_x = 3$, $d_k = 2$, $d_v = 2$, and $C = 3$ classes.

Setup.

• Projection matrices $W_Q$, $W_K$, $W_V$, $W_O$ are initialized with small Gaussian entries.

• Inputs $x_j$ are drawn from $\mathcal{N}(0, I)$.

• Targets $y_i$ are random in $\{0, 1, 2\}$.

• We run manual update steps using the closed-form gradients from Section 3, with a fixed learning rate; a minimal driver is sketched below.
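A minimal driver for this setup, reusing the manual_em_step sketch above with a single fixed learning rate for all parameter groups, could look as follows (an illustration, not the exact experiment code):

```python
# Toy run: manual update steps with the closed-form gradients and a fixed learning rate
rng = np.random.default_rng(1)
T, d_x, d_k, d_v, C = 5, 3, 2, 2, 3
X = rng.normal(size=(T, d_x))
y = rng.integers(C, size=T)
W_Q, W_K = 0.1 * rng.normal(size=(d_k, d_x)), 0.1 * rng.normal(size=(d_k, d_x))
W_V, W_O = 0.1 * rng.normal(size=(d_v, d_x)), 0.1 * rng.normal(size=(C, d_v))

for step in range(100):
    W_Q, W_K, W_V, W_O = manual_em_step(X, y, W_Q, W_K, W_V, W_O,
                                        eta_V=0.05, eta_rout=0.05)  # single fixed LR
```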

Observations. Over ~100 steps we observe:

(1) Attention heatmaps sharpen: mass concentrates on a few positions per query (Figure 2, Figure 3).

(2) Value vectors move coherently in a low-dimensional subspace; their trajectories in a PCA projection show emergent manifold structure (Figure 5).

(3) Cross-entropy loss decays smoothly (Figure 4), with most gains occurring as specialization emerges.

We next study a more structured task where attention can exploit temporal persistence: a sticky Markov chain over symbols.

Task. We generate sequences of length $T = 2000$ over an 8-symbol vocabulary $\{0, \dots, 7\}$ from a first-order Markov chain with self-transition probability $P(y_{t+1} = y_t \mid y_t) = 0.3$; otherwise the chain transitions to a different state with probability that decreases with the circular (modulo) distance from the previous state, so nearby symbols receive higher probability than distant ones. Each symbol $y_t$ has an associated mean embedding in $\mathbb{R}^{20}$; the input at time $t$ is $x_t = \mu_{y_{t-1}} + \epsilon_t$ with $\epsilon_t \sim \mathcal{N}(0, I)$, so the inputs carry noisy information about the previous symbol.
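A possible generator for this task is sketched below. The exact fall-off of the transition probabilities with circular distance is not specified in the text, so the 1/distance weighting and the handling of the first input are assumptions.

```python
import numpy as np

def sticky_markov_task(T=2000, n_sym=8, p_stay=0.3, d_emb=20, seed=0):
    """Sketch of the sticky Markov-chain task. Self-transitions have probability p_stay;
    other transitions get probability decreasing with circular distance (assumed ~ 1/distance).
    Inputs follow x_t = mu_{y_{t-1}} + eps_t; the first input is handled by assumption."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_sym)
    diff = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(diff, n_sym - diff)                  # circular (modulo) distance
    W = np.where(dist == 0, 0.0, 1.0 / np.maximum(dist, 1))
    P = W / W.sum(axis=1, keepdims=True) * (1.0 - p_stay)
    P[idx, idx] = p_stay                                   # sticky self-transitions
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        y[t] = rng.choice(n_sym, p=P[y[t - 1]])
    mu = rng.normal(size=(n_sym, d_emb))                   # symbol mean embeddings
    X = np.empty((T, d_emb))
    X[0]  = mu[y[0]] + rng.normal(size=d_emb)
    X[1:] = mu[y[:-1]] + rng.normal(size=(T - 1, d_emb))   # x_t = mu_{y_{t-1}} + eps_t
    return X, y, P
```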

Protocols. We compare two training schemes with causal masking and matched total parameter updates per step:

(1) Standard SGD. Vanilla gradient descent on all parameters with learning rate $\eta = 0.01$.

(2) EM-like schedule. Parameter-specific learning rates: $\eta_V = 0.1$ for value parameters and $\eta_{\mathrm{routing}} = 0.01$ for $W_Q$, $W_K$, $W_O$. All updates are computed from a single forward-backward pass and applied simultaneously. The EM-like schedule uses a larger (10×) learning rate for values to accelerate value specialization, compensating for the fact that value gradients are typically smaller in magnitude than routing gradients early in training. Both methods perform one gradient computation per step, with updates applied simultaneously (not alternating) using the respective learning rates.
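In code, the two protocols differ only in a per-parameter-group learning-rate table; the names below are illustrative, and the gradient computation itself is shared.

```python
# Per-group learning rates for the two protocols (illustrative names)
protocols = {
    "sgd":     {"W_Q": 0.01, "W_K": 0.01, "W_V": 0.01, "W_O": 0.01},
    "em_like": {"W_Q": 0.01, "W_K": 0.01, "W_V": 0.10, "W_O": 0.01},   # 10x LR on values
}

def apply_updates(params, grads, lrs):
    """All groups come from one forward-backward pass and are updated simultaneously."""
    return {name: params[name] - lrs[name] * grads[name] for name in params}
```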

We report:

• final cross-entropy loss,
• final accuracy,
• predictive entropy,
• KL(EM ‖ SGD) of the predictive distributions.

Results. Table 1 summarizes results across 5 random seeds after 1000 training steps. Figures 6-8 plot loss, accuracy, and entropy as a function of training step. EM-like training converges significantly faster than SGD: it reaches SGD's final loss in under half the steps (a 2.3× speedup), with loss and entropy dropping sharply while accuracy rises quickly. This acceleration occurs because the EM-motivated learning-rate schedule (with a larger learning rate for values) allows values to specialize more rapidly, whereas uniform SGD must balance routing and content updates at the same rate. Even with this single-layer, single-head architecture, both methods approach the theoretical Bayesian minimum (1.829 bits), with EM converging closer (1.998 vs. 2.077 bits of entropy). The residual gap of ~0.17 bits from the theoretical minimum likely reflects the limited capacity of a single-head architecture to fully capture the transition structure; concurrent work [1] shows that deeper architectures close this gap to < 0.01 bits on similar tasks. To compare the micro-behavior of EM and SGD, we also compute the KL divergence between their predictive distributions.

Takeaways. Both EM-like and SGD training ultimately converge toward similar qualitative solutions: specialized values and focused attention. However, the EM-like schedule reaches this state in fewer steps and with sharper specialization. This matches the two-timescale story: responsibilities (attention) can be treated as approximately converged, and closed-form value updates can exploit this stability to accelerate manifold formation.

The gradient analysis suggests useful diagnostics and design principles for training and interpreting transformer attention.

• Compatibility matrix $B = (b_{ij})$. Monitoring $b_{ij} = u_i^\top v_j$ reveals which values are most helpful to which queries.

• Advantage matrix $(A_{ij})$. The sign and magnitude predict where attention will strengthen or weaken.

• Column usage $\sum_i \alpha_{ij}$. Low column sums identify underused or dead values.

• Value norms $\|v_j\|$. Norm trajectories can flag potential instability (exploding or vanishing norms).
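In the notation of the Section 3 sketch (A for attention weights, U for upstream gradients, V for values), these diagnostics are one-liners; the variable names are ours.

```python
# Diagnostics from the list above (sketch, in the notation of the Section 3 code)
B_mat       = U @ V.T                                         # compatibility b_ij
advantage   = (A * B_mat).sum(axis=1, keepdims=True) - B_mat  # A_ij = E_{alpha_i}[b] - b_ij
column_use  = A.sum(axis=0)                                   # sum_i alpha_ij per value j
value_norms = np.linalg.norm(V, axis=1)                       # ||v_j|| trajectories
```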

• LayerNorm on values can stabilize norms while leaving directional dynamics intact.

• Attention dropout disrupts the feedback loop, limiting over-specialization and encouraging more evenly used values.

• Learning-rate choices modulate the timescale separation between routing and content; smaller learning rates make the first-order picture more accurate.

• Multi-head attention allows multiple specialized routing manifolds, reducing competition within a single head.

• Depth naturally supports the binding-elimination-refinement hierarchy observed in our wind-tunnel experiments and large models.

• Residual connections help maintain useful intermediate representations even as individual heads specialize strongly.

The gradient dynamics derived in Sections 4 and 5 are specific to softmax attention. However, companion work shows that selective state-space models (Mamba) achieve comparable Bayesian inference through a different routing mechanism: input-dependent gating rather than query-key matching. This raises a natural question: is there a unifying principle that explains why both architectures develop Bayesian geometry? We conjecture that the essential structure is content-based value routing, and that any mechanism satisfying this property will exhibit similar gradient dynamics under cross-entropy training. The key requirement is that routing weights depend on what is at each position, not just where it is. This excludes:

• Fixed positional patterns (e.g., "always attend to position $t - 1$")

• Learned but content-independent weights (e.g., sinusoidal position encodings alone)

• Architectures where gating is independent of cross-position relationships (e.g., standard LSTM gates)

Softmax Attention. The standard transformer attention mechanism is a content-based routing mechanism with:

๐‘ฃ ๐‘— = ๐‘Š ๐‘‰ ๐‘ฅ ๐‘— (46)

Routing weights depend on content through the query-key inner product ๐‘ž โŠค ๐‘– ๐‘˜ ๐‘— .
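The framework can be phrased as a small interface: routing weights are a function of the content at the routed positions. The sketch below shows softmax attention as one instantiation; the helper names, and the reuse of the toy projection matrices from the earlier sketches, are assumptions for illustration.

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def content_based_routing(X, route, value):
    """Abstract template: output_i = sum_j w_ij(X) v_j, with routing weights w_ij
    that depend on the content at positions i and j (not only on their indices)."""
    Wgt = route(X)                        # (T, T) content-dependent routing weights
    V   = value(X)                        # (T, d_v) position-wise values
    return Wgt @ V

# Softmax attention as one instantiation: w_ij determined by q_i^T k_j
out = content_based_routing(
    X,
    route=lambda X_: softmax_rows((X_ @ W_Q.T) @ (X_ @ W_K.T).T),
    value=lambda X_: X_ @ W_V.T,
)
```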

If Conjectures 9.2 and 9.3 hold generally, they explain why both attention and selective SSMs develop Bayesian geometry:

(1) Routing specialization: Advantage-based updates cause routing to specialize: weights increase for value positions that are "above average" at reducing the loss for each query. This explains the empirical finding from Paper I: transformers and Mamba both achieve Bayesian inference (0.049 and 0.031 bits MAE on HMM tracking), while LSTMs fail (0.416 bits). The LSTM lacks content-based routing and therefore cannot develop the coupled specialization that produces Bayesian geometry.

A complete theory requires deriving the gradient dynamics for selective SSMs explicitly. The key challenges are:

(1) Sequential dependencies: Unlike attention, SSM gradients must backpropagate through the recurrence $h_t = f(h_{t-1}, x_t, \Delta_t, B_t)$.

(2) Coupled parameters: The input-dependent $\Delta$, $B$, $C$ are computed through shared projections, creating coupled dynamics.

(3) Continuous vs. discrete: SSMs operate in a continuous state space; the EM analogy may require a continuous relaxation (e.g., a variational-inference interpretation).

We leave formal derivation for future work, but note that the empirical success of Mamba on Bayesian tasks strongly suggests that similar dynamics emerge. The abstract framework proposed here provides a roadmap for unifying the theory of content-based routing across architectures.

10 Related Work

Several works argue that transformers implement approximate Bayesian inference, either behaviorally or via probing [e.g. 6, 7]. Our companion paper [1] demonstrates exact Bayesian behavior in small wind tunnels, showing that architectures with content-based value routing (transformers and Mamba) succeed while those without it (LSTMs, MLPs) fail. A scaling paper shows similar geometric signatures in production LLMs. The present work explains how gradient dynamics in attention produce these geometries-deriving analogous dynamics for selective state-space models remains future work.

Mechanistic interpretability studies identify specific heads and circuits performing copy, induction, and other algorithmic tasks [3,4]. Our framework complements this line by explaining how specialization arises from the interaction between routing and content, rather than treating specialized heads as primitive.

The implicit bias of gradient descent in linear and deep networks has been extensively studied [2,5]. We extend these ideas to attention: gradient descent implicitly favors representations where routing aligns with error geometry and values lie on low-dimensional manifolds that support Bayesian updating.

Attention Training Dynamics. Recent work has analyzed attention optimization from complementary perspectives. Max-margin analyses show directional convergence of attention parameters toward SVM-like solutions. Two-stage dynamics have been identified where early training exhibits parameter condensation followed by key/query activation. Our advantage-based routing perspective complements these findings: the baseline-subtracted compatibility gradient we derive is the mechanism by which attention reallocates mass, and our two-timescale observation (fast routing, slow values) aligns with reports of stage-wise dynamics. The key difference is our focus on the EM-like interpretation and the connection to Bayesian manifold formation.

The responsibility-weighted value updates derived here are reminiscent of neural EM and slot-attention models, where soft assignments drive prototype updates. The key distinction is that, in transformers, responsibilities are computed via content-addressable attention and prototype updates are driven by backpropagated error signals rather than reconstruction likelihoods. Our focus is not on proposing a new EM-style architecture, but on showing how standard cross-entropy training in attention layers induces EM-like specialization dynamics as a consequence of gradient flow.

Our analysis is deliberately minimal and controlled.

First-order approximation. We work in a first-order regime, assuming small learning rates and ignoring higher-order and stochastic effects (e.g., momentum, Adam, mini-batch noise). Extending the analysis to realistic optimizers is an important next step.

Single-head, single-layer focus. We analyze a single head in isolation, without residual pathways or LayerNorm. Multi-head, multi-layer dynamics-including inter-head coordination and hierarchical specialization-remain open.

Finite vs. infinite width. We do not explicitly connect our analysis to the neural tangent kernel or infinite-width limits. Bridging these regimes may help clarify when transformers operate in a feature-learning versus lazy-training mode.

Large-scale empirical validation. Our toy simulations are intentionally small. Applying the diagnostics in Section 8 to full-scale LLM training runs, tracking advantage matrices and manifold formation over time, is a promising direction.

Attention-specific analysis. Our gradient derivations are specific to softmax attention. Companion work shows that selective state-space models (Mamba) also implement Bayesian inference via content-based routing, but through different mechanics: input-dependent gating rather than query-key matching. Deriving analogous gradient dynamics for selective SSMs, and identifying whether a similar EM-like structure emerges, is an important direction for unifying the theory of content-based routing across architectures.

Paper I establishes what architectures can implement Bayesian inference, taxonomized by three inference primitives: belief accumulation, belief transport, and random-access binding. This paper explains how gradient descent learns to implement these primitives. Our key findings are: (1) cross-entropy training induces an advantage-based routing gradient for attention scores and responsibility-weighted updates for values; (2) these coupled updates behave like a two-timescale EM procedure, with attention acting as soft responsibilities and values as prototypes; and (3) this EM-like dynamic sculpts the low-dimensional Bayesian manifolds observed in Paper I, and we conjecture that it emerges in any architecture with content-based value routing.


