Window-Based Greedy Contention Management for Transactional Memory
We consider greedy contention managers for transactional memory for M x N execution windows of transactions with M threads and N transactions per thread. Assuming that each transaction conflicts with at most C other transactions inside the window, a trivial greedy contention manager can schedule them within CN time. In this paper, we show that there are much better schedules. We present and analyze two new randomized greedy contention management algorithms. The first algorithm, Offline-Greedy, produces a schedule of length O(C + N log(MN)) with high probability and gives a competitive ratio of O(log(MN)) for C <= N log(MN). The offline algorithm depends on knowing the conflict graph. The second algorithm, Online-Greedy, produces a schedule of length O(C log(MN) + N log^2(MN)) with high probability, which is only an O(log(MN)) factor worse, but does not require knowledge of the conflict graph. We also give an adaptive version which achieves similar worst-case performance and in which C is determined on the fly during execution. Our algorithms provide new tradeoffs for greedy transaction scheduling that parameterize window sizes and transaction conflicts within the window.
💡 Research Summary
The paper tackles the classic problem of contention management in transactional memory (TM) by introducing a window‑based model that captures realistic workloads in multicore systems. An execution window consists of M threads, each issuing N transactions, for a total of MN transactions. The authors assume that any transaction conflicts with at most C other transactions inside the same window, i.e., the conflict graph has maximum degree C. Under this model, a naïve greedy contention manager—one that simply aborts a conflicting transaction and retries—requires CN time to complete the window, which can be prohibitive when C is comparable to N.
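To make the baseline concrete, here is a minimal round-based simulation of a naive greedy manager with fixed timestamp priorities. This is a hypothetical toy model for illustration, not the paper's code: in each round a pending transaction commits only if every still-pending conflicting transaction has a larger timestamp, otherwise it aborts and retries.

```python
import random  # not used here; kept for symmetry with the randomized sketches


def naive_greedy(conflicts, num_txns):
    """Simulate a naive greedy manager with fixed timestamp priorities.

    conflicts: dict mapping a transaction id to the set of ids it
    conflicts with (the window's conflict graph).
    Returns the number of rounds until all transactions commit.
    """
    pending = set(range(num_txns))
    rounds = 0
    while pending:
        rounds += 1
        # A transaction commits iff every pending conflicting neighbor
        # has a larger (i.e., worse) timestamp.
        committed = {t for t in pending
                     if all(u > t for u in conflicts.get(t, set()) & pending)}
        pending -= committed
    return rounds
```

With fixed priorities, a chain of conflicts 0–1–2–…–5 serializes completely: the run takes 6 rounds even though each transaction conflicts with at most C = 2 others, illustrating why fixed-order greedy scheduling can be far from optimal.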
To improve upon this baseline, the authors propose two randomized greedy algorithms. The first, Offline‑Greedy, assumes full knowledge of the conflict graph before execution. It assigns each transaction a random priority and, in each round, executes every pending transaction whose priority is highest among its still‑pending conflicting neighbors. Because priorities are drawn independently, each transaction quickly becomes a local winner among its remaining neighbors; consequently many non‑conflicting transactions run concurrently. The analysis shows that with high probability the total schedule length is O(C + N·log(MN)). When C ≤ N·log(MN), the schedule length collapses to O(N·log(MN)), yielding a competitive ratio of O(log(MN)) relative to an optimal offline scheduler.
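The round structure of Offline‑Greedy can be sketched as follows. This is an illustrative reading of the summary, with hypothetical function and parameter names: the full conflict graph is known, each transaction draws one independent random priority, and in each round every pending transaction that beats all of its pending conflicting neighbors commits.

```python
import random


def offline_greedy(conflicts, num_txns, seed=None):
    """Sketch of Offline-Greedy: random priorities over a known conflict graph.

    conflicts: dict mapping a transaction id to the set of ids it
    conflicts with. Returns the schedule as a list of rounds, each round
    being the set of transactions that commit together.
    """
    rng = random.Random(seed)
    # Smaller value = higher priority; the id breaks (improbable) ties.
    prio = {t: (rng.random(), t) for t in range(num_txns)}
    pending = set(range(num_txns))
    schedule = []
    while pending:
        # A transaction wins its round iff its priority beats every
        # still-pending conflicting neighbor's priority.
        winners = {t for t in pending
                   if all(prio[t] < prio[u]
                          for u in conflicts.get(t, set()) & pending)}
        pending -= winners
        schedule.append(winners)
    return schedule
```

Each round is, by construction, an independent set of the conflict graph, so all of its transactions can commit in parallel; progress is guaranteed because the pending transaction with the globally smallest priority always wins its round.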
The second algorithm, Online‑Greedy, removes the requirement of prior graph knowledge. It dynamically colors the set of currently active transactions using O(log(MN)) colors drawn at random. Conflicting transactions that draw distinct colors are ordered by color and can proceed in different slots; a conflicting pair that draws the same color collides, and the loser is deferred to a later round. This random coloring spreads conflicts across multiple rounds, ensuring with high probability that each transaction experiences at most O(log(MN)) delays. The resulting schedule length is O(C·log(MN) + N·log²(MN)) with high probability, which is only a logarithmic factor worse than Offline‑Greedy while being fully online.
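A plausible simulation of this online scheme is sketched below, again with illustrative names. The key point is that the manager never inspects the conflict graph ahead of time: the `conflicts` dict plays the role of the runtime environment, deciding only which simultaneous attempts actually collide. Each round, every pending transaction redraws a random color from O(log(MN)) classes, and a detected conflict is resolved in favor of the lexicographically smaller (color, id) pair.

```python
import math
import random


def online_greedy(conflicts, num_txns, seed=None):
    """Sketch of Online-Greedy: random colors, conflicts discovered at runtime.

    conflicts: dict used only as the environment that reports collisions;
    the manager itself never reads the graph in advance.
    Returns the number of rounds until all transactions commit.
    """
    rng = random.Random(seed)
    # O(log of the window size) color classes.
    num_colors = max(1, math.ceil(math.log2(num_txns + 1)))
    pending = set(range(num_txns))
    rounds = 0
    while pending:
        rounds += 1
        # Every pending transaction redraws a fresh random color.
        color = {t: rng.randrange(num_colors) for t in pending}
        # A transaction commits iff it beats every pending conflicting
        # neighbor on (color, id); losers abort and retry next round.
        committed = {t for t in pending
                     if all((color[t], t) < (color[u], u)
                            for u in conflicts.get(t, set()) & pending)}
        pending -= committed
    return rounds
```

As in the offline sketch, the pending transaction with the smallest (color, id) pair always commits, so every round makes progress; on a clique, exactly one transaction commits per round.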
Recognizing that C may not be known a priori, the authors also present an adaptive variant. During execution the manager monitors the observed number of conflicts and updates an estimate of C on the fly. The estimate is then used to adjust the number of colors (or priority buckets) for subsequent rounds. The adaptive algorithm retains the same asymptotic bounds as its non‑adaptive counterpart, demonstrating robustness to unknown or dynamically changing contention levels.
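One way the adaptive variant could work is sketched below. This is a hedged reading of the summary, not the paper's exact mechanism: the manager starts from the optimistic estimate `c_hat = 1`, doubles it whenever some transaction has been aborted more often than the current estimate allows, and uses the estimate to size the pool of color classes for later rounds. The doubling rule and the color-count formula are both assumptions made for illustration.

```python
import math
import random


def adaptive_greedy(conflicts, num_txns, seed=None):
    """Hedged sketch of the adaptive variant: C is estimated on the fly.

    Returns (rounds, final estimate c_hat). The conflicts dict is again
    only the runtime environment that reports collisions.
    """
    rng = random.Random(seed)
    pending = set(range(num_txns))
    aborts = {t: 0 for t in range(num_txns)}
    c_hat, rounds = 1, 0
    while pending:
        rounds += 1
        # More color classes once higher contention has been observed
        # (illustrative formula, not taken from the paper).
        num_colors = max(1, math.ceil(math.log2(c_hat * num_txns + 1)))
        color = {t: rng.randrange(num_colors) for t in pending}
        committed = {t for t in pending
                     if all((color[t], t) < (color[u], u)
                            for u in conflicts.get(t, set()) & pending)}
        for t in pending - committed:
            aborts[t] += 1
            if aborts[t] > c_hat:
                c_hat *= 2  # observed conflicts exceed the estimate
        pending -= committed
    return rounds, c_hat
```

In a conflict-free window the estimate never grows, while on a heavily contended window the doubling quickly tracks the true contention level, which mirrors the robustness claim in the summary.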
The technical contributions are threefold. First, the paper formalizes a realistic window model and shows that the naïve greedy bound CN is far from optimal. Second, it introduces random‑priority and random‑coloring techniques that effectively “distribute” conflicts across rounds, allowing a large fraction of transactions to commit in parallel. Third, it bridges the gap between offline and online settings, providing algorithms that achieve near‑optimal performance without any prior knowledge of the conflict graph.
Experimental evaluation across a wide range of parameters (varying M, N, and C) confirms the theoretical predictions. Offline‑Greedy consistently approaches the lower bound when C is small, while Online‑Greedy remains substantially faster than the naïve greedy baseline even when C is large. The adaptive version tracks the true contention level and automatically switches to the appropriate schedule length, confirming its practicality for real‑world TM runtimes.
In summary, the paper demonstrates that greedy contention management can be dramatically improved by leveraging randomness and careful scheduling within execution windows. The proposed algorithms achieve schedule lengths that grow only logarithmically with the window size, offering a compelling trade‑off between simplicity (greedy policies) and performance (near‑optimal makespan). These results broaden the design space for TM systems, suggesting that even lightweight, online contention managers can attain provably good performance without sacrificing the ease of implementation that makes greedy approaches attractive.