Deferred Commitment Decoding for Diffusion Language Models


📝 Original Info

  • Title: Deferred Commitment Decoding for Diffusion Language Models
  • ArXiv ID: 2601.02076
  • Date: 2026-01-05
  • Authors: Yingte Shu, Yuchuan Tian, Chao Xu, Yunhe Wang, Hanting Chen

📝 Abstract

Diffusion language models (DLMs) have recently emerged as a strong alternative to autoregressive models by enabling parallel text generation. To improve inference efficiency and KV-cache compatibility, prior work commonly adopts block-based diffusion, decoding tokens block by block. However, this paradigm suffers from a structural limitation that we term Boundary-Induced Context Truncation (BICT): undecoded tokens near block boundaries are forced to commit without access to nearby future context, even when such context could substantially reduce uncertainty. This limitation degrades decoding certainty and generation quality, especially for tasks requiring precise reasoning, such as mathematical problem solving and code generation. We propose Deferred Commitment Decoding (DCD), a novel, training-free decoding strategy that mitigates this issue. DCD maintains a certainty-aware sliding window over masked tokens, resolving low-uncertainty tokens early while deferring high-uncertainty tokens until sufficient contextual evidence becomes available. Extensive experiments across multiple diffusion language models, benchmarks, and caching configurations show that DCD improves generation accuracy by 1.73% on average, with comparable inference time, relative to fixed block-based diffusion methods, with the largest improvement reaching 16.5%. These results demonstrate that deferring token commitment based on uncertainty is a simple yet effective principle for improving both the quality and efficiency of diffusion language model decoding.

📄 Full Content

Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive models for natural language generation. By decoding tokens in parallel rather than strictly left-to-right, DLMs relax sequential dependencies and enable more flexible generation. Recent models such as NBDiff [Tian et al., 2025] and LLaDA2.0 [Bie et al., 2025] demonstrate that, at comparable scales, DLMs can match their autoregressive counterparts on selected reasoning tasks.

A major challenge in practical DLM inference lies in compatibility with key-value (KV) caching. Vanilla DLMs decode tokens in largely unconstrained orders, which prevents effective reuse of cached attention states and leads to slow inference. To address this issue, block-based diffusion methods have been proposed [Arriola et al., 2025;Nie et al., 2025], partitioning the sequence into blocks that are decoded sequentially while allowing parallel decoding within each block. This semi-autoregressive structure significantly improves KV-cache efficiency and has become a standard design choice in recent DLM systems.

Despite their efficiency benefits, block-based diffusion methods introduce a fundamental limitation, which we refer to as Boundary-Induced Context Truncation (BICT). Once decoding proceeds to the next block, undecoded tokens in the current block are forced to commit, even if nearby future tokens, often only a few positions away, could provide crucial disambiguating context. This issue is particularly detrimental for tokens in semantically critical positions, where insufficient context leads to low-certainty decisions and error propagation. Importantly, this limitation is not caused by an incorrect decoding order but by rigid block boundaries that assume information sufficiency upon block completion.

Our core hypothesis is that decoding quality can be improved by deferring commitment on high-uncertainty tokens until sufficient contextual evidence becomes available, without abandoning the efficiency advantages of block-based decoding. Based on this insight, we propose Deferred Commitment Decoding (DCD), a training-free decoding strategy that replaces fixed block boundaries with a certainty-aware sliding window. Within this window, tokens with low uncertainty are resolved first, while high-uncertainty tokens remain masked and continue to benefit from dynamically expanding context. This mechanism enables localized bidirectional information flow while preserving compatibility with existing caching schemes.

We evaluate DCD on a diverse set of tasks, including mathematical reasoning [Lightman et al., 2023;Cobbe et al., 2021], code generation [Austin et al., 2021b;Chen et al., 2021], and instruction following [Zhou et al., 2023], using multiple diffusion language models [Nie et al., 2025;Ye et al., 2025;Wu et al., 2025a;Tian et al., 2025] and various KV caching configurations. Across all settings, DCD consistently improves generation accuracy by +1.73% with comparable inference time on average compared to fixed block-based diffusion baselines, with the maximum improvement in certain configurations reaching +16.5%. These results establish DCD as a strong state-of-the-art decoding method for DLMs.

We summarize our contributions as follows:

• We identify Boundary-Induced Context Truncation as a key structural limitation of block-based diffusion decoding, which prevents undecoded tokens from leveraging nearby future context across rigid block boundaries.

• We propose Deferred Commitment Decoding, a simple, training-free decoding strategy that dynamically aligns the decoding order with token-level uncertainty using a sliding window.

• We demonstrate that DCD achieves consistent accuracy improvements over fixed block-based diffusion methods across models, tasks, and caching configurations.

2 Related Works

There are two main lines of work that adapt diffusion techniques [Ho et al., 2020] from computer vision to natural language processing. Continuous diffusion language models [Li et al., 2022; Gong et al., 2022] project discrete language tokens into continuous spaces and apply denoising processes to recover text outputs. In contrast, discrete diffusion language models draw inspiration from masked language modeling [Devlin et al., 2019], gradually recovering masked tokens at predefined generation slots. Compared to continuous approaches, discrete DLMs better align with the inherently discrete nature of language and can be more easily adapted from existing autoregressive models [Gong et al., 2024]; as a result, they have become the dominant paradigm in recent diffusion-based language modeling research. Unless otherwise specified, we use the term DLMs in this paper to refer to discrete diffusion language models.

Discrete DLMs typically employ one of two attention mechanisms in the Transformer architecture: semi-causal attention or full attention. Models such as BD3-LM [Arriola et al., 2025], NBDiff [Tian et al., 2025], and Fast-dLLMv2 [Wu et al., 2025a] adopt semi-causal attention, meaning that their predefined attention maps are incomplete and tokens attend only to their current “block” and previous ones. In contrast, models such as LLaDA [Nie et al., 2025] and Dream [Ye et al., 2025] adopt full attention, allowing each token to condition on the entire sequence during decoding. In this work, we consider both semi-causal and full-attention DLMs to demonstrate the generalization and robustness of the proposed DCD decoding algorithm across different architectural choices.

Earlier works on DLMs [Austin et al., 2021a; Sahoo et al., 2024] randomly unmask and remask a fixed number of tokens at each decoding step, which often yields suboptimal performance. Later approaches incorporate confidence- or entropy-based criteria, decoding tokens whose uncertainty falls below a threshold or whose certainty ranks among the top-k candidates. These strategies improve flexibility and parallelism but still rely on fixed decoding ranges.

More recently, a variety of decoding strategies have been introduced to enhance either the performance or computational efficiency of discrete diffusion language models (DLMs). Among training-based approaches, [Xu et al., 2024] employs energy functions to steer the decoding process, yielding a 1.3x speedup alongside notable gains in generation quality. FS-DFM [Monsefi et al., 2025] formulates a discrete flow-matching framework that generates 1024 tokens in just eight sampling steps without degrading perplexity, while SDLM [Liu et al., 2025] adaptively decodes token sequences based on prediction confidence.

For training-free methods, [Fu et al., 2025] introduces an Explore-Then-Exploit scheduling mechanism that maximizes information gain per decoding round to improve efficiency. [Chen et al., 2025] enhances decoding quality by leveraging historical trajectories to inform current predictions, and [Li et al., 2025] proposes early-commit decoding to accelerate inference in DLMs while largely preserving output fidelity. Despite these advances, existing training-free strategies still lack mechanisms to dynamically adjust the decoding horizon or to enrich contextual support for low-certainty tokens, indicating significant opportunities for further innovation.

3 Preliminary of DLMs Decoding

Discrete diffusion language models (DLMs) generate a target sequence x = (x_1, ..., x_T) by iteratively denoising a partially masked sequence. At diffusion step t, the sequence x^(t) contains masked positions denoted by ⟨MASK⟩. The reverse denoising process is modeled as:

$$p_\theta\big(x^{(t-1)} \mid x^{(t)}\big) = \prod_{i \in M^{(t)}} p_\theta\big(x_i \mid x^{(t)}\big), \qquad (1)$$

where M^(t) denotes the set of masked positions at step t.

For full-attention DLMs, each masked token x_i is predicted by conditioning on the entire partially decoded sequence x^(t). In contrast, semi-causal DLMs partition the sequence into ordered blocks {B_1, ..., B_K} and restrict attention such that tokens in block B_k are conditioned only on tokens from blocks {B_1, ..., B_k}. Accordingly, the reverse process can be written as:

$$p_\theta\big(x^{(t-1)}_{B_k} \mid x^{(t)}\big) = \prod_{i \in M^{(t)} \cap B_k} p_\theta\big(x_i \mid x^{(t)}_{\le k}\big), \qquad (2)$$

where x^(t)_{≤k} denotes the tokens in the first k blocks.

Decoding proceeds by selecting token values for a subset of masked positions according to the model prediction:

$$x_i^{(t-1)} = \operatorname*{arg\,max}_{v \in V} \; p_\theta\big(x_i = v \mid x^{(t)}\big), \qquad i \in S^{(t)}, \qquad (3)$$

where V is the vocabulary set and S^(t) ⊆ M^(t) specifies the positions eligible for decoding at the current step.
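To make the selection-and-commit step concrete, here is a minimal Python sketch (using PyTorch) of how token values could be written into the sequence for the eligible positions S^(t); the MASK_ID constant and the function name are illustrative, not the paper's implementation.

    import torch

    MASK_ID = -1  # illustrative placeholder id for the <MASK> token

    def commit_positions(x, probs, eligible):
        """Sketch of Equation 3: for every eligible masked position, write the
        highest-probability vocabulary item into the sequence.

        x:        LongTensor [T], current partially decoded sequence
        probs:    FloatTensor [T, V], model distribution p(x_i | x^(t))
        eligible: iterable of positions S^(t) chosen for this step
        """
        x = x.clone()
        for i in eligible:
            if x[i].item() == MASK_ID:        # only masked positions are decodable
                x[i] = probs[i].argmax()      # greedy choice over the vocabulary V
        return x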

In block-based decoding, the eligibility condition constrains decoding positions to a fixed region, typically the current block: S^(t) ⊆ {i | i ∈ B_cur}. Full-attention DLMs may optionally adopt block-based decoding to improve KV-cache compatibility. In contrast, semi-causal DLMs must employ block-based decoding due to their blockwise attention constraints. Within a large attention block, semi-causal DLMs may further apply sub-block decoding, where decoding positions are restricted to a smaller contiguous region:

$$S^{(t)} \subseteq \{\, i \mid i \in B_{\text{cur}_1:\text{cur}_2} \,\},$$

where B_{cur1:cur2} denotes a contiguous subrange of blocks within a larger attention block.
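A small sketch of the eligibility rule under (sub-)block decoding may help; the function below simply intersects the masked positions with a contiguous region, and the boundaries used in the example are made up for illustration.

    def eligible_positions(masked_positions, lo, hi):
        """(Sub-)block eligibility: only masked positions inside the contiguous
        half-open region [lo, hi) may be committed at this step.  For block-based
        decoding the region is the current block B_cur; for sub-block decoding it
        is a smaller contiguous subrange of that block."""
        return [i for i in masked_positions if lo <= i < hi]

    # Example: current block covers positions 32..63, current sub-block 32..39.
    masked = [3, 33, 37, 45, 70]
    print(eligible_positions(masked, 32, 64))  # block-level eligibility  -> [33, 37, 45]
    print(eligible_positions(masked, 32, 40))  # sub-block eligibility    -> [33, 37]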

The advantages of DLMs over their autoregressive counterparts lie primarily in their (semi-)bidirectional attention horizons. [Piskorz et al., 2025] found that DLMs exhibit a strong contextual locality bias, in which nearby tokens x_[i-ω_l : i+ω_r] contribute disproportionately to the prediction certainty of the i-th token. Informally, this can be expressed as:

$$p_\theta\big(x_i \mid x^{(t)}\big) \approx p_\theta\big(x_i \mid x^{(t)}_{[i-\omega_l \,:\, i+\omega_r]}\big). \qquad (4)$$

However, under block-based decoding, tokens after the current block are all ⟨MASK⟩, which contain little information and may even distract the decoding process. For tokens whose contextual locality extends beyond the current block, Equation 4 deteriorates to

$$p_\theta\big(x_i \mid x^{(t)}\big) \approx p_\theta\big(x_i \mid x^{(t)}_{[i-\omega_l \,:\, b]}\big), \qquad (5)$$

where b < i + ω_r denotes the right boundary of the current block. We term this reduction in a token’s effective contextual window the Boundary-Induced Context Truncation phenomenon. Although these tokens receive insufficient context, they must be decoded before proceeding to the next block under the block-based paradigm. Consequently, this leads to low-certainty decoding at the end of each block and ultimately degrades the generation performance of DLMs. Based on the above analysis, the primary causes of BICT are rigid block boundaries and the strict left-to-right blockwise decoding order. To address this problem, we must:

(1) Remove the restrictions imposed by rigid block boundaries;

(2) Decode these tokens at the appropriate time and under appropriate conditions.

As illustrated in Figure 1, the proposed Deferred Commitment Decoding (DCD) algorithm maintains a sliding window and defers the decoding of low-certainty tokens. It follows two design principles to achieve the above goals:

Design (1): The decoding window slides left to right with constraints. The sliding window defines the range of tokens eligible for decoding. It abandons fixed boundaries across consecutive decoding steps; instead, it moves from left to right as decoding progresses:

$$L^{(t)} = \min\{\, i \mid i \in M^{(t)} \,\}, \qquad (6)$$

$$R^{(t)} = \max\Big\{\, r \;\Big|\; r - L^{(t)} + 1 \le s_{\max} \ \text{and}\ \big|M^{(t)} \cap [L^{(t)}, r]\big| \le s_{\text{init}} \,\Big\}. \qquad (7)$$

Equation 6 shows that the left endpoint of the window is anchored to the leftmost masked token. Equation 7 indicates that the sliding window expands its right boundary as much as possible, subject to two constraints: (1) the total length of the sliding window does not exceed s_max, and (2) the number of masked tokens within the window does not exceed s_init. In particular, the sliding window is initialized with length s_init at the beginning of the generation slot. These constraints maintain a moderate yet flexible window length, which precisely captures relevant contextual information and enables more efficient KV-cache integration.
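As a concrete reading of Equations 6 and 7, the following Python sketch computes the window endpoints from the current masked positions; the exact handling of the slot end and of ties is an assumption on our part.

    def update_window(masked_positions, slot_end, s_init, s_max):
        """Sliding-window update sketched from Equations 6-7.

        L is anchored to the leftmost masked position (Eq. 6).  R expands as far
        as possible subject to two constraints (Eq. 7): the window length stays
        within s_max and the window contains at most s_init masked tokens.
        """
        if not masked_positions:
            return None, None                 # nothing left to decode in this slot
        masked = set(masked_positions)
        L = min(masked)
        R, n_masked = L - 1, 0
        for pos in range(L, slot_end):
            if pos - L + 1 > s_max:           # constraint (1): bounded window length
                break
            n_masked += pos in masked
            if n_masked > s_init:             # constraint (2): bounded masked count
                break
            R = pos
        return L, R

    # The leftmost mask anchors L; R stops just before the (s_init + 1)-th mask.
    print(update_window({10, 12, 13, 40, 130}, slot_end=512, s_init=3, s_max=128))  # -> (10, 39)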

Design (2): Tokens are deferred from decoding until sufficiently certain given nearby context. Low prediction certainty, measured by the confidence or entropy of masked tokens, serves as a key indicator for identifying BICT-affected tokens with insufficient context. In block-based decoding, such tokens may be forcibly decoded to complete a block. In contrast, the proposed DCD algorithm handles them more gracefully: masked tokens are decoded only when their certainty exceeds the threshold τ or ranks among the top in the window:

$$S^{(t)} = \big\{\, i \in M^{(t)} \cap [L^{(t)}, R^{(t)}] \;\big|\; C(i) \ge \tau \,\big\} \;\cup\; \operatorname*{arg\,max}_{i \in M^{(t)} \cap [L^{(t)}, R^{(t)}]} C(i), \qquad (8)$$

where the certainty metric C(i) of the i-th token is computed using confidence or negative entropy. This approach significantly reduces low-certainty decoding events at later stages, thereby improving overall performance.
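The selection rule can be sketched in Python as follows; both certainty metrics mentioned above are shown, and the fallback to the single most certain token is our reading of "ranks among the top in the window".

    import torch

    def certainty(probs, use_entropy=False):
        """Per-position certainty C(i): top-1 confidence, or negative entropy
        when use_entropy is True (the two metrics used in the paper)."""
        if use_entropy:
            return (probs * probs.clamp_min(1e-12).log()).sum(dim=-1)  # negative entropy
        return probs.max(dim=-1).values                                # confidence

    def select_positions(masked_in_window, C, tau):
        """Sketch of Equation 8: commit every masked position in the window whose
        certainty reaches tau; if none does, fall back to the single most certain
        position so that each step still makes progress."""
        above = [i for i in masked_in_window if float(C[i]) >= tau]
        if above:
            return above
        return [max(masked_in_window, key=lambda i: float(C[i]))]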

[Algorithm (DCD decoding loop): at each step, if Equation 9 holds, update r and R^(t-1) based on Equations 10 and 7; select decoding positions S^(t) using Equation 8; update x^(t-1) with S^(t) using Equation 3; then update L^(t-1) and R^(t-1) with x^(t-1)_[l:r] using Equations 6 and 7.]

How does the DCD algorithm differ from AdaBlock? The AdaBlock method [Lu et al., 2025] adaptively determines block sizes based on delimiter semantics in the generated tokens, improving generation coherence. However, once determined, the block sizes remain fixed; as a result, AdaBlock may still suffer from BICT and thus leaves room for further improvement.

The core design of DCD fits well with full-attention DLMs because their generation slots span the entire sequence, and DCD completely replaces block-based decoding. However, for semi-causal DLMs, the large block size is predefined, and vanilla DCD operates only at the intra-block level by replacing fixed-length sub-block decoding with small sliding windows. Therefore, we propose Dynamic Block Extension (DBE), a patch to DCD for semi-causal DLMs trained with multiple block sizes. Specifically, when a low-certainty token is about to be committed and the window’s sliding is blocked by the right boundary of the large block (i.e., the BICT problem arises due to rigid block boundaries), the following condition holds:

$$C(i) < \tau_{\text{low}} \quad\text{and}\quad R^{(t)} - L^{(t)} < s_{\max} \quad\text{and}\quad R^{(t)} = b_{\text{cur}}, \qquad (9)$$

where b_cur denotes the right boundary of the current large block. When this occurs, DBE aborts the current decoding step and expands the block with an upper limit:

$$b_{\text{cur}} \leftarrow \min\big(b_{\text{cur}} + e_{\text{step}},\; b_{\text{init}} + e_{\max}\big), \qquad (10)$$

where b_init is the block’s original right boundary. Afterward, the sliding window is recalculated according to Equation 7, and a new decoding step begins to continue the generation loop. Although DBE relies on the DLM’s ability to generalize to variable block sizes, this patch can further enhance performance for models appropriately trained for such flexibility.
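A rough sketch of the DBE trigger and extension rule follows, under our reading of Equations 9 and 10; the exact form of the boundary test and of the extension cap are assumptions.

    def dbe_should_extend(c_i, tau_low, L, R, s_max, block_right):
        """DBE trigger (our reading of Equation 9): a low-certainty token is about
        to be committed, the window still has room to grow, yet its right edge is
        pinned to the right boundary of the current large block."""
        return (c_i < tau_low) and (R - L < s_max) and (R >= block_right)

    def dbe_extend_block(block_right, block_right_init, e_step, e_max):
        """Block extension (our reading of Equation 10): grow the block by e_step
        tokens, but never more than e_max tokens beyond its initial boundary."""
        return min(block_right + e_step, block_right_init + e_max)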

To accelerate DLM inference, we integrate prefix and dual caching into the DCD algorithm, following Fast-dLLM [Wu et al., 2025b]. Inspired by dKV-Cache-Greedy [Ma et al., 2025], the active interval without caching, denoted W^(t), is slightly extended beyond the tokens decoded in the current and previous steps.

We then define the prefix of the generation slot as the tokens preceding W^(t), and the suffix as the tokens following W^(t). The prefix cache temporarily stores the prefix, while the dual cache stores both the prefix and suffix. Additionally, to ensure a fair comparison with block-based cache refreshing, we rebuild the cache after B′ masked tokens have been decoded since the last cache refresh.
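A minimal sketch of this caching bookkeeping is given below; the exact definition of the active interval W^(t) is an assumption, and the refresh rule simply counts decoded masked tokens against B′.

    def cache_partition(slot_start, slot_end, active_lo, active_hi):
        """Split a generation slot around the active (non-cached) interval W^(t):
        the prefix cache holds tokens before W^(t); the dual cache additionally
        holds the suffix after W^(t).  Ranges are half-open [start, end)."""
        prefix = (slot_start, active_lo)
        suffix = (active_hi + 1, slot_end)
        return prefix, suffix

    class CacheRefresher:
        """Rebuild the KV cache after b_prime masked tokens have been decoded
        since the last refresh, mirroring block-based cache refreshing."""
        def __init__(self, b_prime):
            self.b_prime = b_prime
            self.decoded_since_refresh = 0

        def step(self, n_newly_decoded):
            self.decoded_since_refresh += n_newly_decoded
            if self.decoded_since_refresh >= self.b_prime:
                self.decoded_since_refresh = 0
                return True    # caller should rebuild the prefix/dual caches now
            return False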

6 Experiments

Hyperparameters. All experiments use block size B = 32, sub-block size b = 8, and batch size 1. To align with the configurations of baseline works, decoding certainty is measured by negative entropy for NBDiff [Tian et al., 2025] and by confidence for the remaining DLMs [Wu et al., 2025b; Wu et al., 2025a], with a unified threshold τ = 0.9. For full-attention DLMs, we set L = 512, s_init = 16, s_max = 128, B′ = 32, and r = 2. For the semi-causal DLM, we set s_init = 8 while keeping all other parameters unchanged. DBE is not enabled in the main results and is discussed in Section 6.5, with e_step = 4, e_max = 16, and τ_low = 0.4 for confidence or τ_low = -0.5 for negative entropy.
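For reference, the reported settings can be collected into a single configuration object; the field names below are illustrative and do not correspond to the authors' code.

    from dataclasses import dataclass

    @dataclass
    class DCDConfig:
        """Hyperparameters as reported in the experimental setup."""
        gen_length: int = 512         # L, length of the generation slot
        s_init: int = 16              # initial window size (8 for the semi-causal DLM)
        s_max: int = 128              # maximum window size
        b_prime: int = 32             # B', masked tokens decoded between cache refreshes
        r: int = 2                    # r as reported; we read it as the margin of the active interval
        tau: float = 0.9              # unified certainty threshold
        # DBE settings (Section 6.5 only):
        e_step: int = 4
        e_max: int = 16
        tau_low_conf: float = 0.4     # tau_low when certainty is confidence
        tau_low_negent: float = -0.5  # tau_low when certainty is negative entropy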

Table 1 reports the main experimental results across multiple diffusion language models, tasks, and KV-cache configurations. Overall, Deferred Commitment Decoding (DCD) consistently outperforms block-based and sub-block-based decoding across most settings, demonstrating strong robustness across tasks and model architectures. On average, DCD improves evaluation metrics by +1.73% while reducing decoding time by 4.4% compared to block-based (or sub-block-based) baselines for the same model, task, and cache configuration. The average improvement of each model over its baseline is highlighted in colored text in Table 1. DCD’s gains over (sub-)block-based baselines are observed across diverse tasks, including mathematical reasoning, code generation, and instruction following, and across both full-attention and semi-causal DLMs. Among all models, LLaDA-8B-Instruct, Dream-v0-Instruct-7B, and Dream-v0-Base-7B exhibit significant improvements in accuracy and moderate speedups in time consumption, showing that our DCD algorithm works effectively at the whole-sequence level. NBDiff benefits the most from DCD, achieving the largest average improvement of 5.22%, as it is trained with multiple block sizes and thus aligns well with sliding windows. Running the IFEval benchmark with NBDiff yields the most significant improvement, a 16.5% increase in accuracy. In contrast, Fast-dLLM-v2-7B shows the smallest improvement of 0.62%, because it is trained with limited flexibility in decoding ranges. Nevertheless, DCD still outperforms sub-block-based decoding, which aligns with our theoretical analysis.

Table 1: Experimental results. For each experiment, we report its overall metric (pass@1, accuracy, etc.). We also report the total seconds for running the 5 benchmarks in a single line. The best result for each model and task is bolded and the second-best is underlined. The (sub-)block-based decoding serves as DCD’s baseline and the other algorithms serve as comparison works.

Compared with [et al., 2025], DCD achieves better performance (average metric improvements of +0.28% for LLaDA-8B-Instruct and +1.20% for Dream-v0-Base-7B) and substantial speedups (average decoding-time reductions of 19% and 71%, respectively), demonstrating the effectiveness of the deferred commitment mechanism. Compared with dKV-Cache-Greedy [Ma et al., 2025], DCD substantially outperforms it in accuracy, despite employing a similar KV-cache strategy. Compared with CCD [Chen et al., 2025], which uses a custom cache to store historical data, the best results of DCD surpass it by 1.90% on average. More importantly, DCD outperforms Prophet [Li et al., 2025] by a large margin, because the latter algorithm, in contrast to DCD, performs early decoding, which substantially degrades DLMs’ inference performance.

We note that in a small number of cases, DCD performs slightly worse than (sub-)block-based decoding and other methods. We attribute these regressions to the inherent stochasticity of training-free decoding and the intrinsic difficulty of certain tokens, which may remain ambiguous even with extended context.

Figure 3 provides direct evidence that DCD mitigates Boundary-Induced Context Truncation. We visualize the distribution of decoding confidence on the GSM8K benchmark for LLaDA-8B-Instruct, Dream-v0-Base-7B, and Fast-dLLM-v2-7B. GSM8K is selected because it is the largest benchmark and exhibits the most stable improvements under DCD.

Across all models, DCD substantially reduces the frequency of extremely low-certainty decoding steps compared to block-based or sub-block-based decoding. Such low-certainty events directly reflect the BICT phenomenon that DCD is designed to address. This reduction provides a clear explanation for the observed accuracy improvements, particularly on reasoning-intensive tasks.

Figure 4 presents ablation studies on LLaDA-8B-Instruct evaluated on MATH500 with dual cache enabled, analyzing the impact of three key hyperparameters in DCD: the maximum window size s max , the initial window size s init , and the certainty threshold τ .

Effect of s max . The maximum window size determines the upper bound of contextual expansion. When s max = 32, the window is constrained to the baseline block size, limiting DCD’s ability to mitigate BICT. When s max = 512 (equivalent to removing the upper bound given L = 512), accuracy degrades due to diluted contextual relevance. Decoding time does not exhibit a consistent monotonic trend with respect to s max , as the window rarely expands to its maximum in practice.

Effect of s init . The initial window size controls early decoding behavior. Setting s init = 8 reduces the available context and may degrade performance, while s init = 32 may lead to excessively long windows, introducing premature commitments near the right boundary. The weak negative correlation between s init and decoding time may result from increased parallelism enabled by larger windows.

Effect of τ . The certainty threshold regulates how aggressively tokens are deferred. Accuracy improves as τ increases from lower values but degrades when τ = 1 (i.e., top-1 confidence decoding), as this setting becomes fully deterministic and loses flexibility. In contrast to window parameters, τ exhibits a clearer positive correlation with decoding time.

Overall, accuracy exhibits a clear unimodal trend as s max , s init , and τ increase in this setting. Based on these ablation results, we select appropriate hyperparameters and apply them consistently across all experiments.

Table 2 reports the performance of DBE-patched DCD across five benchmarks for semi-causal DLMs. Despite minimal computational overhead, NBDiff consistently benefits from DBE across various tasks, achieving an average improvement of 1.14%, as expected. This is because NBDiff’s training data includes generation slots of varying block sizes. In contrast, Fast-dLLM-v2 is trained with a fixed block size of B = 32 and thus fails to benefit from DBE, instead suffering a 1.30% drop in metrics. Overall, we suggest that semi-causal DLMs should be trained with variable block sizes to mitigate the BICT problem and enable the DBE mechanism.

In a code-generation case study, the baseline’s error occurs at decoding step 21, as illustrated in the left part of Figure 2. The ⟨MASK⟩ at a critical position (the rightmost token of line 5) is adjacent to the first block boundary and suffers from truncated context. As a result, it is incorrectly decoded as “tup” with low confidence.

The correct code generated by the DCD method is:

    def extract_singly(test_tup):
        …
        return res

The critical decoding step 23 is illustrated in the right part of Figure 2. The DCD algorithm successfully resolves this issue by deferring the decoding of the critical ⟨MASK⟩ until the sliding window incorporates sufficient contextual information, such as “res.append”. Consequently, the model correctly decodes the second occurrence of “res” and successfully passes all test cases.

For LLaDA-8B-Instruct, Dream-v0-Base-7B and Fast-dLLM-v2-7B, we use lm-eval-harness 0.4.8 and the code-cleaning suite in the Fast-dLLM codebase. Specifically:

• For HumanEval, we use 0-shot pass@1 as the evaluation metric. For code cleaning, we concatenate the prompt and the generated output, remove any Markdown "```python … ```" code fences, and extract the function based on Python syntax trees (a sketch of this cleaning step follows this list). No prompt engineering is applied in this setting.

• For MBPP, we use 3-shot pass@1 as the evaluation metric. The code-cleaning logic is the same as that for HumanEval. Prompt engineering and the few-shot mechanism are automatically handled by the lm-eval-harness library.

• For MATH500, we use 0-shot accuracy as the evaluation metric, with simple chain-of-thought reasoning prompts as the default. The cleaning logic extracts the boxed answer and simplifies the mathematical expression.

The default MATH500 prompt is:

You are a math expert. You will be given a question to solve. Solve it step by step. Wrap the final answer in a \boxed{}. Respond in the following format: Your reasoning here \boxed{…} {{text}}

• For GSM8K, we use 5-shot accuracy as the evaluation metric, corresponding to the “exact_match,flexible-extract” value reported in the lm-eval-harness result files. All other settings follow the default configuration.

• For IFEval, we use 0-shot accuracy as the evaluation metric, corresponding to the “prompt_level_strict_acc,none” value reported in the lm-eval-harness result files. All other settings follow the default configuration.
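The cleaning step described in the HumanEval bullet above (strip code fences, then extract the function via the Python AST) might look roughly like the following; the entry_point argument and the fallback behaviour are our assumptions rather than the actual Fast-dLLM suite.

    import ast
    import re

    def clean_completion(prompt, completion, entry_point):
        """Join the prompt and the generated text, strip Markdown code fences, and
        keep only the target function definition using Python's syntax tree."""
        text = re.sub(r"```(?:python)?", "", prompt + completion)
        tree = None
        lines = text.splitlines()
        # Trailing chatter after the function often breaks parsing, so retry on
        # progressively shorter prefixes until the source parses.
        for end in range(len(lines), 0, -1):
            try:
                tree = ast.parse("\n".join(lines[:end]))
                break
            except SyntaxError:
                continue
        if tree is None:
            return text
        for node in tree.body:
            if isinstance(node, ast.FunctionDef) and node.name == entry_point:
                return ast.unparse(node)   # available in Python 3.9+
        return text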

Notably, we do not use any stop words (e.g., “[DONE]” in the default MBPP configuration) in any experiments. This choice may lead to degraded performance for Dream-v0-Base-7B on these tasks, as unaligned models often struggle to terminate generation appropriately and may produce extraneous content after completing the task. However, this setting is applied consistently across all experiments in this paper, ensuring fair comparisons.

For NBDiff, we use zero-shot evaluation with simple or no prompt engineering, as detailed below. The scoring logic is entirely based on https://github.com/open-compass/opencompass. These configurations are identical to those in the original paper.

• HumanEval:

Complete the following python code: {{text}}

Question: {{text}} Please reason step by step, and put your final answer within \boxed{}.

• IFEval: No additional prompt.

To better understand the DCD method, we record the time consumption of each benchmark experiment. According to Table 3, DCD completes the benchmarks in comparable, and even slightly less, time than the baseline method, demonstrating its efficiency relative to traditional approaches. For dKV-Cache-Greedy, CCD, and Prophet, we copy their reported statistics from the original papers; their time-consumption data are unavailable.

To better understand the BICT phenomenon, we collect the number of low-certainty (confidence < 0.3) decoding steps in each experiment. The results show that the DCD algorithm mitigates the BICT phenomenon by significantly reducing the number of low-certainty decoding steps, thereby improving performance.

The wall-clock time reported earlier is highly dependent on hardware-specific environments and exhibits limited generalization. Therefore, we report two additional metrics to assess the algorithmic efficiency of each experiment (a small accounting sketch follows the list):

• Average decoding steps. This metric measures the average number of decoding steps required to complete generation for each prompt. It corresponds directly to the average number of forward propagations and constitutes the primary source of time consumption in DLM inference.

• Average forward length. This metric measures the average number of tokens directly fed into the DLM for forward propagation. In some scenarios, certain tokens are cached and thus excluded from this count. Consequently, this metric is positively correlated with both time consumption and memory usage per decoding step.
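The two metrics can be tracked with a small accounting helper like the one below (a sketch, not the authors' evaluation code):

    class EfficiencyMeter:
        """Track average decoding steps per prompt and average forward length
        (tokens actually fed to the model, i.e. excluding cached positions)."""
        def __init__(self):
            self.steps = 0
            self.forward_tokens = 0
            self.prompts = 0

        def record_step(self, n_forward_tokens):
            self.steps += 1
            self.forward_tokens += n_forward_tokens

        def finish_prompt(self):
            self.prompts += 1

        def summary(self):
            return {
                "avg_decoding_steps": self.steps / max(self.prompts, 1),
                "avg_forward_length": self.forward_tokens / max(self.steps, 1),
            }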

As shown in Table 5 and Table 6, the average decoding steps of DCD are slightly lower (-5.0% on average) than those of the (sub-)block-based baselines, indicating a modest efficiency gain from its decoding strategy. Meanwhile, the average forward length under DCD is comparable (+6.0% on average) to that of the baselines across all models and tasks. Together, these results suggest that DCD incurs no additional computational overhead; in fact, its total inference time is on par with or slightly less than that of the (sub-)block-based approaches, consistent with the wall-clock time measurements.

We do not evaluate multiple-choice QA benchmarks [Hendrycks et al., 2020; Rein et al., 2024], as they primarily measure token-level log-probabilities rather than decoding quality.

