Placement Semantics for Distributed Deep Learning: A Systematic Framework for Analyzing Parallelism Strategies

📝 Original Info

  • Title: Placement Semantics for Distributed Deep Learning: A Systematic Framework for Analyzing Parallelism Strategies
  • ArXiv ID: 2601.02311
  • Date: 2026-01-05
  • Authors: Deep Pankajbhai Mehta

📝 Abstract

Training large language models requires distributing computation across many accelerators, yet practitioners select parallelism strategies (data, tensor, pipeline, ZeRO) through trial and error because no unified systematic framework predicts their behavior. We introduce placement semantics: each strategy is specified by how it places four training states (parameters, optimizer, gradients, activations) across devices using five modes (replicated, sharded, sharded-with-gather, materialized, offloaded). From placement alone, without implementation details, we derive memory consumption and communication volume. Our predictions match published results exactly: ZeRO-3 uses 8× less memory than data parallelism at 1.5× communication cost, as reported in the original paper. We prove two conditions (gradient integrity, state consistency) are necessary and sufficient for distributed training to match single-device results, and provide composition rules for combining strategies safely. The framework unifies ZeRO Stages 1-3, Fully Sharded Data Parallel (FSDP), tensor parallelism, and pipeline parallelism as instances with different placement choices.
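
As a concrete illustration of the abstract, the sketch below encodes the five placement modes and four training states and derives a rough per-device memory figure from the placement alone. The `Mode`, `Placement`, and `per_device_bytes` names and the simplified byte accounting are illustrative assumptions, not the paper's actual formulation; transient gather buffers are ignored and activations are excluded from the example numbers.

```python
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    REPLICATED = "replicated"                # full copy of the state on every device
    SHARDED = "sharded"                      # each device holds 1/N of the state
    SHARDED_GATHER = "sharded-with-gather"   # sharded at rest, gathered briefly when used
    MATERIALIZED = "materialized"            # produced on demand (e.g. recomputed activations)
    OFFLOADED = "offloaded"                  # resident on host CPU/NVMe, not on the accelerator


@dataclass
class Placement:
    params: Mode
    optimizer: Mode
    grads: Mode
    activations: Mode


def per_device_bytes(p: Placement, param_b: float, opt_b: float,
                     grad_b: float, act_b: float, n: int) -> float:
    """Very rough per-device memory for a placement; transient gather buffers
    and memory fragmentation are ignored in this toy accounting."""
    def cost(mode: Mode, full: float) -> float:
        if mode in (Mode.SHARDED, Mode.SHARDED_GATHER):
            return full / n          # steady state: only this device's shard
        if mode == Mode.OFFLOADED:
            return 0.0               # lives off the accelerator
        return full                  # replicated / materialized: full size on device
    return (cost(p.params, param_b) + cost(p.optimizer, opt_b)
            + cost(p.grads, grad_b) + cost(p.activations, act_b))


# Plain data parallelism replicates every training state; ZeRO-3 / FSDP shard
# parameters, optimizer state, and gradients across the data-parallel group.
dp    = Placement(Mode.REPLICATED,     Mode.REPLICATED, Mode.REPLICATED, Mode.MATERIALIZED)
zero3 = Placement(Mode.SHARDED_GATHER, Mode.SHARDED,    Mode.SHARDED,    Mode.MATERIALIZED)

if __name__ == "__main__":
    # Mixed-precision style sizing for a model with PSI parameters:
    # fp16 params (2 bytes), fp16 grads (2 bytes), fp32 optimizer state (~12 bytes).
    PSI, N = 7e9, 8
    for name, pl in [("DP", dp), ("ZeRO-3", zero3)]:
        gb = per_device_bytes(pl, 2 * PSI, 12 * PSI, 2 * PSI, 0, N) / 1e9
        print(f"{name}: ~{gb:.1f} GB of model states per device")
```

With an 8-way group and this mixed-precision sizing, the toy accounting yields roughly the 8× reduction in model-state memory that the abstract attributes to ZeRO-3 relative to data parallelism; communication volume is not modeled in this sketch.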

📄 Full Content

...(The full text is omitted here due to its length. Please see the original site for the complete article.)
