Large-Scale Terminal Agentic Trajectory Generation from Dockerized Environments

Notice: This research summary and analysis were automatically generated using AI. For authoritative details, please refer to the original arXiv paper.

Training agentic models for terminal-based tasks critically depends on high-quality terminal trajectories that capture realistic long-horizon interactions across diverse domains. However, constructing such data at scale remains challenging due to two key requirements: **Executability**, since each instance requires a suitable and often distinct Docker environment; and **Verifiability**, because heterogeneous task outputs preclude unified, standardized verification. To address these challenges, we propose **TerminalTraj**, a scalable pipeline that (i) filters high-quality repositories to construct Dockerized execution environments, (ii) generates Docker-aligned task instances, and (iii) synthesizes agent trajectories with executable validation code. Using TerminalTraj, we curate 32K Docker images and generate 50,733 verified terminal trajectories across eight domains. Models trained on this data with the Qwen2.5-Coder backbone achieve consistent performance improvements on TerminalBench (TB), with gains of up to 20% on TB1.0 and 10% on TB2.0 over their respective backbones. Notably, **TerminalTraj-32B** achieves strong performance among models with fewer than 100B parameters, reaching 35.30% on TB1.0 and 22.00% on TB2.0, and demonstrates improved test-time scaling behavior. All code and data are available at https://github.com/Wusiwei0410/TerminalTraj.


💡 Research Summary

The paper introduces TerminalTraj, a scalable pipeline for generating high‑quality, executable, and verifiable terminal trajectories from Docker‑enabled GitHub repositories. The authors identify two fundamental challenges in building data for terminal‑based agents: (1) Executability, because each task must run in a suitable Docker environment, and (2) Verifiability, because task outputs are heterogeneous and cannot be judged by a single metric. Existing approaches either rely on rule‑based or LLM‑simulated interactions that are not grounded in real execution, or they filter repositories by simple heuristics (stars, commits) which limits environment diversity.

TerminalTraj addresses these gaps through three tightly coupled stages:

  1. Data Sources Collection – The authors crawl 899,741 GitHub repositories, focusing on those that contain Dockerfiles or explicit build scripts. They also gather auxiliary assets (images, CSVs, videos) and domain‑specific documentation to enrich task contexts. To avoid noisy or incomplete projects, they train a reward model (ScoreModel) that predicts a completeness/executability score for each file. The repository‑level quality score is the average of its files’ scores; repositories scoring below 0.2 are discarded. This model‑based filtering dramatically reduces Docker build failures while preserving language and domain coverage.

  2. Docker Image Curation – For each high‑scoring repository, the pipeline either uses the existing Dockerfile or automatically injects missing dependencies (especially for domain‑specific tools). This results in 32,325 Docker images across eight programming languages (Python, Java, C++, C, Go, JavaScript, PHP, HTML). The overall Docker build success rate is about 17 %, reflecting the stringent quality threshold.

  3. Instance Generation – Within each repository, the system extracts paired .sh (shell) and .md (markdown) files. Using a powerful LLM (Qwen3‑Coder‑480B), it prompts the model to (a) synthesize a user‑facing query that would logically require executing the script, and (b) generate executable validation code in the form of a pytest suite. The validation suite checks concrete side effects (file creation, configuration changes, process output) rather than matching a static answer string, thereby providing state‑based verification.

The pipeline produces 1,030,695 raw task instances; after executing the validation suites, only 50,733 trajectories (≈4.9 %) pass, yielding a high‑precision dataset. The authors then fine‑tune Qwen2.5‑Coder models on this data. Compared to the baseline (training on Nex‑N1 generated data), the fine‑tuned models achieve up to 20 % absolute improvement on TerminalBench 1.0 and 10 % on TerminalBench 2.0. Notably, the 32‑billion‑parameter model (TerminalTraj‑32B) reaches 35.30 % on TB 1.0 and 22.00 % on TB 2.0, outperforming all sub‑100 B models and approaching the performance of the 480‑B Qwen3‑Coder.

A further analysis of scaling behavior shows that models trained with execution‑grounded trajectories exhibit a steeper pass@k curve: TerminalTraj‑7B surpasses the baseline Qwen2.5‑Coder‑7B, and TerminalTraj‑32B attains 63 % at pass@16, exceeding the 480‑B baseline. This suggests that realistic execution feedback provides a more efficient learning signal, enabling better conversion of inference compute into performance gains.
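
Pass@k curves like those discussed are commonly computed with the standard unbiased estimator: given n samples per task of which c succeed, pass@k = 1 − C(n−c, k)/C(n, k). Whether the paper uses exactly this estimator is not stated here; the sketch below shows the conventional computation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: the probability that at least one of k
    draws (without replacement) from n samples with c successes passes."""
    if n - c < k:  # every size-k draw must contain a success
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples per task, 4 of them successful
# pass_at_k(16, 4, 1) → 0.25
```

A steeper pass@k curve means additional samples at inference time convert into solved tasks at a higher rate, which is the test-time scaling behavior the analysis attributes to execution-grounded training data.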

Key contributions and insights:

  • Model‑based repository scoring enables continuous, automated expansion of diverse Docker environments, overcoming the bottleneck of heuristic filtering.
  • Executable validation code ensures that each trajectory is truly solved, eliminating hallucinations and providing strong supervision for long‑horizon reasoning.
  • Execution‑grounded fine‑tuning markedly improves both absolute performance and test‑time scaling, demonstrating the value of realistic interaction data for terminal agents.
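
The model-based scoring rule from stage 1 (repository score = mean of per-file scores; discard repositories below 0.2) can be sketched as follows, with the file-scoring model abstracted behind a callable (the name `score_file` is hypothetical):

```python
from statistics import mean
from typing import Callable, Dict, List

# Threshold from the paper: repositories scoring below 0.2 are discarded.
THRESHOLD = 0.2

def repository_score(file_scores: List[float]) -> float:
    """Repository-level quality is the average of its files' scores."""
    return mean(file_scores) if file_scores else 0.0

def filter_repositories(
    repos: Dict[str, List[str]],
    score_file: Callable[[str], float],
) -> List[str]:
    """Keep repositories whose mean per-file score clears the threshold.

    `score_file` stands in for the trained ScoreModel, which predicts a
    completeness/executability score for a single file.
    """
    kept = []
    for name, files in repos.items():
        if repository_score([score_file(f) for f in files]) >= THRESHOLD:
            kept.append(name)
    return kept
```

Because the filter is a learned model rather than a star/commit heuristic, new repositories can be scored and admitted continuously as they are crawled.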

The authors discuss future directions such as lowering the quality threshold to increase environment coverage, incorporating multi‑container or cloud‑native scenarios, and improving the generation of validation code via meta‑learning or human‑LLM collaboration. Overall, TerminalTraj offers a practical, reproducible pipeline that can supply the community with large‑scale, execution‑grounded terminal data, paving the way for more capable and reliable AI agents in real‑world system administration, DevOps, and software engineering tasks.

