A Small-Scale System for Autoregressive Program Synthesis Enabling Controlled Experimentation
What research can be pursued with small models trained to complete true programs? Program synthesis is typically studied via large language models (LLMs), which introduces several difficulties: knowing what is in or out of distribution, understanding the effects of fine-tuning and of tokenization, and meeting the high compute and storage demands of experiments. We present Cadmus, a system comprising an integer virtual machine (VM), a dataset of true programs spanning diverse tasks, and an autoregressive transformer model trained for under $200 of compute. The system can be used to study program completion, out-of-distribution representations, inductive reasoning, and instruction following in a setting where researchers have affordable, fine-grained control over the training distribution and can inspect and instrument the models. Small models working on complex reasoning tasks enable instrumentation and investigations that may be prohibitively expensive on larger models. To demonstrate that these tasks are complex enough to be of interest, we show that Cadmus models outperform GPT-5 (achieving 100% accuracy versus GPT-5's 95%) even on the simple task of completing correct integer-arithmetic programs in our domain-specific language (DSL), while providing full transparency into the dataset's relationship to the problem. We also show that GPT-5 brings unknown priors into its reasoning when solving the same tasks, a confounding factor that precludes the use of large-scale LLMs for investigations where the relationship between the training set and the task must be fully understood.
💡 Research Summary
The paper introduces Cadmus, a low-cost, small-scale system for autoregressive program synthesis that enables tightly controlled experimentation. Cadmus consists of three tightly integrated components:

1. An integer-only, stack-based virtual machine (VM) with a fixed 65-token instruction set, where each instruction is a single character (e.g., digits, ‘+’, ‘-’, ‘*’, ‘/’, ‘%’, ‘!’, ‘#’, ‘L’, ‘?’). This design eliminates tokenization complexities and guarantees that any token sequence is a syntactically valid program, one that either evaluates to true values or to a designated false value (zero or NAN).
2. A curated dataset of “true programs” – programs that produce at least one true output when executed. The dataset is generated from a set of templates covering basic arithmetic, comparisons, subroutine calls, sequence labeling, and more, yielding 80 million samples (≈10–15 M per sub-task). Templates gradually increase in difficulty, allowing the model to learn program characteristics step by step.
3. An 18-layer decoder-only transformer (Cadmus-280M-80M-v1) with 280 M parameters, 1280-dimensional embeddings, 20 attention heads, and a 3600-dimensional MLP. Training uses Adam (lr = 1e-4, cosine decay) for 300 k steps with batch size 1024 on eight H100 GPUs, costing less than $200 in compute.
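To make the VM design concrete, here is a minimal sketch of a single-character, integer-only stack machine. The opcode semantics below (and choices such as underflow behavior) are illustrative assumptions, not the paper's actual 65-instruction specification, which is released separately:

```python
# Illustrative sketch of a single-character, integer-only stack VM.
# Opcode semantics here are assumptions for demonstration; the paper's
# released 65-instruction specification differs in detail.
NAN = "NAN"  # designated false value, alongside zero

def run(program: str) -> list:
    stack = []

    def pop():
        return stack.pop() if stack else 0  # assumed: underflow yields 0

    for op in program:
        if op.isdigit():
            stack.append(int(op))                    # digits push their value
        elif op == "+":
            b, a = pop(), pop(); stack.append(a + b)
        elif op == "-":
            b, a = pop(), pop(); stack.append(a - b)
        elif op == "*":
            b, a = pop(), pop(); stack.append(a * b)
        elif op == "/":
            b, a = pop(), pop()
            stack.append(a // b if b != 0 else NAN)  # division by zero -> false value
        # unrecognized characters are no-ops, so every string is a valid program
    return stack

print(run("34+2*"))  # computes (3 + 4) * 2 -> [14]
```

The key design property this illustrates is that no token sequence can cause a parse error, so an autoregressive model can never emit a syntactically invalid program, only a semantically false one.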
Experimental evaluation focuses on two questions. First, can a modest model outperform a state-of-the-art large language model on program completion? On the Cadmus DSL, the 280 M model achieves 100% accuracy, while GPT-5 reaches 95% under the same conditions. When the instruction symbols are deliberately remapped (the “alternate form”), GPT-5’s performance collapses to roughly 0.5% accuracy, whereas Cadmus remains unaffected because its token-to-operation mapping is fixed by design. This demonstrates that GPT-5 relies on priors learned from massive natural-language and code corpora, which act as hidden biases that can confound controlled studies.
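The alternate-form evaluation can be sketched as a pure symbol remapping: the VM semantics stay fixed and only the surface characters change. The specific mapping below is invented for illustration and is not the paper's actual remapping:

```python
# Hypothetical "alternate form" remapping. Program semantics are untouched;
# only surface symbols change, so a model whose token-to-operation mapping
# is fixed by construction is unaffected, while a model leaning on priors
# about what '+' usually means is not.
REMAP = {"+": "q", "-": "w", "*": "e", "/": "r"}
INVERSE = {v: k for k, v in REMAP.items()}

def to_alternate(program: str) -> str:
    return "".join(REMAP.get(op, op) for op in program)

def from_alternate(program: str) -> str:
    return "".join(INVERSE.get(op, op) for op in program)

canonical = "34+2*"
alternate = to_alternate(canonical)
print(alternate)  # "34q2e"
assert from_alternate(alternate) == canonical
```

Because the remapping is a bijection on opcodes, any evaluation set can be translated to the alternate surface form and scored against the same ground truth.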
Second, how does the model internally represent computed values? The authors probe the final transformer layer with logistic-regression classifiers, which predict intermediate numeric values with 70–90% accuracy during token generation. Accuracy dips while the second operand is being computed and recovers after the comparison operation, suggesting that the model temporarily “forgets” the first operand yet retains enough contextual information to reconstruct it later. Out-of-distribution (OOD) tests – on values never seen during training – show a sharp accuracy drop, indicating that Cadmus learns the distribution of operations rather than exact arithmetic rules.
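The probing setup amounts to training a linear read-out on hidden states. The sketch below is a simplified stand-in: the toy "hidden states" are synthetic, the target is reduced to a binary question (value > 4) rather than the paper's per-value classifiers, and the probe is a from-scratch logistic regression:

```python
import math
import random

random.seed(0)
DIM = 8

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def make_example():
    # Toy stand-in for a final-layer hidden state: a random vector whose
    # first coordinate weakly encodes the intermediate value. The probe
    # can only succeed if the state actually encodes that value.
    value = random.randint(0, 9)
    h = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    h[0] += 0.8 * value
    return h, 1 if value > 4 else 0  # binary simplification of the probe target

train_set = [make_example() for _ in range(2000)]
held_out = [make_example() for _ in range(500)]

# Minimal logistic-regression probe trained with plain SGD.
w, b, lr = [0.0] * DIM, 0.0, 0.05
for _ in range(50):
    for h, y in train_set:
        p = sigmoid(sum(wi * hi for wi, hi in zip(w, h)) + b)
        g = p - y  # gradient of the log-loss with respect to the logit
        w = [wi - lr * g * hi for wi, hi in zip(w, h)]
        b -= lr * g

accuracy = sum(
    (sigmoid(sum(wi * hi for wi, hi in zip(w, h)) + b) > 0.5) == (y == 1)
    for h, y in held_out
) / len(held_out)
print(round(accuracy, 2))
```

The same recipe extends to the paper's setting by replacing the synthetic vectors with recorded final-layer activations at each generation step and training one classifier per probed value.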
The paper highlights several advantages of Cadmus: (i) complete control over data generation and tokenization, enabling reproducible experiments; (ii) extremely low compute cost, making it accessible to many research groups; and (iii) transparency of the model’s internal state, facilitating studies of program induction, discrete diffusion, curriculum learning, and OOD generalization. Limitations include (a) reduced accuracy (below 96%) on more compositional tasks, showing that generalization is still imperfect; (b) the VM’s narrow focus on integer arithmetic, which limits applicability to real-world programming languages; and (c) the lack of large-scale pretraining, which means the model cannot yet handle complex algorithmic reasoning without architectural extensions.
In conclusion, Cadmus embodies the “small model, big control” paradigm. By providing a cheap, fully observable platform for program synthesis, it opens a path for systematic scientific inquiry into the mechanics of code generation, something that is difficult to achieve with opaque, massive LLMs. The authors release the VM verifier, the template generators, the full 65-instruction specification, the trained model, and the training code to encourage further research even under limited resources.