SWE-World: Building Software Engineering Agents in Docker-Free Environments
Recent advances in large language models (LLMs) have enabled software engineering agents to tackle complex code modification tasks. Most existing approaches rely on execution feedback from containerized environments, which require dependency-complete setup and physical execution of programs and tests. While effective, this paradigm is resource-intensive and difficult to maintain, substantially complicating agent training and limiting scalability. We propose SWE-World, a Docker-free framework that replaces physical execution environments with a learned surrogate for training and evaluating software engineering agents. SWE-World leverages LLM-based models trained on real agent-environment interaction data to predict intermediate execution outcomes and final test feedback, enabling agents to learn without interacting with physical containerized environments. This design preserves the standard agent-environment interaction loop while eliminating the need for costly environment construction and maintenance during agent optimization and evaluation. Furthermore, because SWE-World can simulate the final evaluation outcomes of candidate trajectories without real submission, it enables selecting the best solution among multiple test-time attempts, thereby facilitating effective test-time scaling (TTS) in software engineering tasks. Experiments on SWE-bench Verified demonstrate that SWE-World raises Qwen2.5-Coder-32B from 6.2% to 52.0% with Docker-free SFT, to 55.0% with Docker-free RL, and to 68.2% with further TTS. The code is available at https://github.com/RUCAIBox/SWE-World.
💡 Research Summary
Paper Overview
The paper introduces SWE‑World, a novel framework that eliminates the need for Docker‑based execution environments when training and evaluating software‑engineering (SWE) agents. Traditional SWE agents rely on containerized, dependency‑complete workspaces to obtain both intermediate execution feedback (e.g., reproducing a failure) and final test results. While effective, this approach incurs heavy costs in data collection, infrastructure management, and test‑time computation, limiting scalability across large‑scale repositories and reinforcement‑learning (RL) pipelines.
Key Idea
SWE‑World separates the agent‑environment interaction loop into two distinct layers:
- Lightweight sandbox – a deterministic file‑system layer that handles all file‑navigation and editing commands (e.g., `ls`, `grep`, `vim`). These operations are inexpensive and can be executed directly without any container.
- LLM‑based surrogate – two learned models trained on real agent‑Docker interaction logs:
  - Transition Model (SWT) predicts the outcome of execution‑oriented commands (`execute_bash`, `run_tests`, etc.), providing step‑level feedback such as stdout, stderr, and success flags.
  - Reward Model (SWR) acts as a virtual test runner at episode termination, taking the final patch as input and outputting a structured test report together with a binary reward (1 if all required unit tests pass, 0 otherwise).
By preserving the standard agent‑environment API, SWE‑World allows agents to operate exactly as they would in a Docker environment, but the costly container instantiation is replaced by fast neural inference.
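To make the two-layer design concrete, here is a minimal sketch of how such an environment might route actions: file operations go to the deterministic sandbox, execution commands to the transition model, and the final submission to the reward model. All class names, method signatures, and the stub components are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass

# File-navigation/editing commands handled by the sandbox (illustrative set).
FILE_OPS = {"ls", "grep", "cat", "vim"}

@dataclass
class Observation:
    stdout: str
    stderr: str = ""
    success: bool = True

class SWEWorldEnv:
    """Routes each action to the sandbox, transition model (SWT), or reward model (SWR)."""
    def __init__(self, sandbox, transition_model, reward_model):
        self.sandbox = sandbox        # deterministic file-system layer
        self.swt = transition_model   # predicts execution outcomes
        self.swr = reward_model       # virtual test runner

    def step(self, action):
        cmd = action.split()[0]
        if cmd == "submit":
            # Episode termination: score the final patch without running real tests.
            report, reward = self.swr.evaluate(self.sandbox.diff())
            return Observation(stdout=report), reward, True
        if cmd in FILE_OPS:
            # Cheap and deterministic: no container and no model call needed.
            return self.sandbox.run(action), 0.0, False
        # Execution-oriented commands (execute_bash, run_tests, ...) are
        # predicted by the transition model instead of physically executed.
        return self.swt.predict(self.sandbox.snapshot(), action), 0.0, False

# Minimal stand-ins to show the control flow (the real components are an
# actual file system and two fine-tuned LLMs):
class StubSandbox:
    def run(self, action):  return Observation(stdout=f"(files for {action!r})")
    def snapshot(self):     return {}
    def diff(self):         return "--- a/f.py\n+++ b/f.py"

class StubSWT:
    def predict(self, state, action):
        return Observation(stdout="All 12 tests passed.", success=True)

class StubSWR:
    def evaluate(self, patch):
        return "PASSED: test_issue_fix", 1.0
```

Because the `step` interface mirrors a real containerized environment, the agent's policy code needs no changes when swapping Docker out for the surrogate.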
Methodology
The authors first collect a large corpus of agent‑Docker interaction trajectories from existing SWE‑Gym‑style pipelines. These trajectories contain the agent’s thoughts, actions, and Docker‑generated observations. The data is split to train the transition model (predicting intermediate observations) and the reward model (predicting final test outcomes). During training, the sandbox updates the repository state deterministically, while any execution‑related action is routed to the transition model. When the agent issues a submit action, the reward model evaluates the patch virtually, delivering the same signal that would have been obtained by running the unit‑test suite inside Docker.
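The split described above can be sketched as follows: each logged trajectory yields step-level examples for the transition model (history + execution action → Docker observation) and one episode-level example for the reward model (final patch → test report and binary reward). The log schema and field names here are assumptions for illustration, not the paper's actual data format.

```python
# Execution-oriented commands routed to the transition model (illustrative set).
EXEC_CMDS = ("execute_bash", "run_tests", "python", "pytest")

def is_execution_action(action: str) -> bool:
    return action.split()[0] in EXEC_CMDS

def build_training_examples(trajectory):
    """Split one agent-Docker trajectory into SWT and SWR training examples."""
    swt_examples = []
    history = []  # full interaction context seen so far
    for step in trajectory["steps"]:
        action, obs = step["action"], step["observation"]
        if is_execution_action(action):
            # SWT learns: (interaction history, action) -> Docker observation.
            swt_examples.append(
                {"context": list(history), "action": action, "target": obs}
            )
        history.append((action, obs))
    # SWR learns: (issue, final patch) -> (test report, pass/fail reward).
    swr_example = {
        "issue": trajectory["issue"],
        "patch": trajectory["final_patch"],
        "target_report": trajectory["test_report"],
        "reward": 1.0 if trajectory["all_tests_passed"] else 0.0,
    }
    return swt_examples, swr_example
```

Note that file-navigation steps contribute only to the context, since their outcomes are reproduced deterministically by the sandbox and need not be learned.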
Experiments
Experiments are conducted on SWE‑bench Verified, a benchmark of real‑world GitHub issues with dependency‑complete test suites. Two LLM back‑ends are evaluated:
- Qwen2.5‑Coder‑32B (large)
- Qwen3‑4B‑Instruct (smaller)
The authors compare three training regimes:
- Docker‑free Supervised Fine‑Tuning (SFT) – agents are fine‑tuned on trajectories generated by the surrogate environment.
- Docker‑free Reinforcement Learning (RL) – agents are further optimized using PPO‑style RL where rewards come from the surrogate reward model.
- Test‑Time Scaling (TTS) – at inference time, the agent generates multiple candidate patches; the surrogate reward model scores each, and the best‑scoring patch is selected.
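The TTS regime amounts to best-of-n selection with the surrogate reward model as the scorer. A minimal sketch, assuming hypothetical `agent.rollout` and `reward_model.evaluate` interfaces (these names are not the paper's actual API):

```python
def best_of_n(agent, reward_model, issue, n=8):
    """Generate n candidate patches and return the one the surrogate
    reward model scores highest -- no real test execution needed."""
    scored = []
    for seed in range(n):
        patch = agent.rollout(issue, seed=seed)       # one candidate trajectory
        _report, score = reward_model.evaluate(patch)  # virtual test run
        scored.append((score, patch))
    # Submit the candidate the virtual test runner rates most likely to pass.
    return max(scored, key=lambda pair: pair[0])[1]
```

Because scoring is a single model inference rather than a full container run, the selection step stays cheap even as n grows.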
Results:
- Qwen2.5‑Coder‑32B baseline resolve rate (no training): 6.2%.
- After Docker‑free SFT: 52.0%.
- After additional Docker‑free RL: 55.0%.
- With TTS (8 candidates): 68.2%.
- Qwen3‑4B‑Instruct improves from 25.6% to 30.0% with RL, demonstrating that the approach benefits models of various sizes.
These numbers are comparable to, and in some cases surpass, results obtained with full Docker‑based pipelines, while requiring orders of magnitude less compute and storage for environment management.
Scalability Benefits
By removing Docker images from the data pipeline, SWE‑World can ingest a far larger set of GitHub repositories, including those that previously failed to build due to brittle dependency specifications. The storage overhead drops dramatically because only repository source code and test files are needed; no per‑sample container image is stored or transferred. Training large numbers of agents (especially RL, which traditionally spawns many containers per episode) becomes feasible on modest academic clusters.
Limitations & Future Work
The surrogate models are trained on existing Docker feedback, so their accuracy depends on the diversity and quality of the collected logs. Edge cases involving complex native extensions, GPU‑accelerated code, or nondeterministic tests may still be challenging to predict. The authors suggest extending the surrogate with multimodal inputs (e.g., build logs, dependency graphs) and exploring hybrid approaches where a small fraction of high‑value samples are still executed in real containers for periodic calibration.
Conclusions
SWE‑World demonstrates that a learned, Docker‑free environment can effectively replace heavyweight container execution for both supervised and reinforcement learning of software‑engineering agents. The framework retains the full agent‑environment interaction semantics while drastically lowering infrastructure costs, enabling large‑scale data collection, training, and test‑time scaling. The empirical gains on SWE‑bench Verified validate the approach, and the paper opens a promising direction for future research on LLM‑based world modeling in software development automation.