An Architecture for Remote Container Builds and Artifact Delivery Using a Controller-Light Jenkins CI/CD Pipeline
Conventional Jenkins installations often execute resource-intensive builds directly on the controller, which overloads system resources and lowers reliability. In the controller-light CI/CD framework presented in this paper, Jenkins runs as a containerized controller with persistent volumes and delegates heavy build and packaging tasks to a remote Docker host. The controller container maintains secure SSH connections to remote compute nodes while focusing solely on orchestration and reporting. The system includes containerized build environments, immutable artifact packaging, atomic deployments with time-stamped backups, and automated notifications. Experimental evaluation shows faster build throughput, lower CPU and RAM consumption on the controller, and reduced artifact-delivery latency. For small and medium-sized DevOps teams seeking scalable automation without added orchestration complexity, this approach offers a repeatable, low-maintenance solution.
💡 Research Summary
The paper presents a “controller‑light” Jenkins CI/CD architecture that isolates the Jenkins controller from resource‑intensive build and packaging tasks. The controller runs inside a Docker container with persistent volumes for configuration, plugins, and build history, and communicates with a remote build host over SSH. All compilation, testing, and artifact creation are performed inside short‑lived Docker containers on the remote host, using pre‑configured images that contain the necessary toolchains (e.g., OpenJDK + Maven for backend, Node.js + npm for frontend). After a successful build, the artifacts are assembled into a timestamped bundle that includes branch and commit metadata, and a checksum manifest is generated for integrity verification.
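The packaging step described above can be sketched as a small shell function; this is an illustrative reconstruction under stated assumptions, not the paper's exact script — the function name `package_build`, the `BUILD_INFO` metadata file, and the `SHA256SUMS` manifest name are all hypothetical choices:

```shell
# Hedged sketch: bundle a build output directory into a timestamped,
# checksummed tarball carrying branch/commit metadata. All names are
# illustrative assumptions, not taken from the paper.
package_build() {
  build_dir=$1
  branch=$2
  commit=$3
  stamp=$(date +%Y%m%d-%H%M%S)
  bundle="artifact-${branch}-${stamp}"

  mkdir -p "$bundle"
  cp -R "$build_dir"/. "$bundle"/

  # Embed branch/commit metadata alongside the artifacts
  printf 'branch=%s\ncommit=%s\nbuilt=%s\n' "$branch" "$commit" "$stamp" \
    > "$bundle/BUILD_INFO"

  # Checksum manifest for integrity verification on the deploy side
  ( cd "$bundle" && find . -type f ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS )

  tar -czf "${bundle}.tar.gz" "$bundle"
  echo "${bundle}.tar.gz"
}
```

On the deploy side, `sha256sum -c SHA256SUMS` inside the unpacked bundle verifies that nothing was corrupted in transit, which matches the integrity-verification role the manifest plays in the paper.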
The deployment stage follows an atomic promotion strategy: the current service directory is backed up, the new artifact is unpacked into a fresh directory, and a symlink or pointer is switched to the new version. If health checks fail, the system automatically rolls back to the previous backup, ensuring minimal downtime. Notifications containing commit details, download links, and diagnostic information are sent via email, and comprehensive logs from both the controller and remote host are retained for audit purposes.
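The symlink-switch promotion described above can be sketched as follows; paths, function names, and the `.previous` rollback marker are illustrative assumptions (the paper's health-check hook is omitted here for brevity):

```shell
# Hedged sketch of atomic promotion: unpack the bundle into a fresh
# release directory, record the currently live release, then repoint
# a symlink. Uses GNU coreutils/tar; all paths are assumptions.
atomic_deploy() {
  bundle=$1
  deploy_root=$(cd "$2" && pwd)   # absolute, so the symlink resolves anywhere
  release="$deploy_root/releases/$(basename "$bundle" .tar.gz)"

  mkdir -p "$release"
  tar -xzf "$bundle" -C "$release" --strip-components=1

  # Record the currently live release (if any) so rollback can restore it
  readlink "$deploy_root/current" > "$deploy_root/.previous" 2>/dev/null || true

  # Repoint the symlink in one command; traffic never sees a half-copied tree
  ln -sfn "$release" "$deploy_root/current"
}

rollback() {
  deploy_root=$(cd "$1" && pwd)
  prev=$(cat "$deploy_root/.previous")
  [ -n "$prev" ] && ln -sfn "$prev" "$deploy_root/current"
}
```

In the paper's design, a failed health check after `atomic_deploy` would trigger `rollback` automatically, so the service is only ever served from a fully unpacked release directory.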
Security is enforced through Docker‑secret‑mounted SSH keys, limited command execution on the remote host, and network segmentation that isolates the controller plane from the compute plane. This reduces the attack surface and prevents lateral movement.
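One standard way to realize the "limited command execution" control is an `authorized_keys` forced-command entry on the build host; the wrapper path and key comment below are illustrative assumptions, not the paper's configuration:

```
# Illustrative authorized_keys entry on the remote build host: the
# controller's key may only invoke a fixed build wrapper, and agent,
# port, and X11 forwarding are disabled to limit lateral movement.
command="/usr/local/bin/run-build.sh",no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... jenkins-controller
```

The matching private key would be mounted into the controller container as a Docker secret (e.g., via a `secrets:` stanza in Compose) rather than baked into the image, so it never appears in image layers or build logs.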
Experimental evaluation compares the proposed architecture against a traditional Jenkins setup where the controller and build agents share the same host. Metrics collected include CPU and memory usage (via Docker stats), disk workspace consumption, per‑stage execution times, and overall pipeline duration. Results show a 45‑52 % reduction in controller CPU utilization, over 40 % reduction in memory usage, and a roughly 30 % decrease in total pipeline time (from 3 minutes 4 seconds to about 2 minutes 10 seconds). The remote execution also eliminates workspace bloat, as only final artifacts are transferred back to the controller. Notably, frontend npm builds that previously hung due to memory contention inside the controller container completed reliably when run in isolated remote containers.
The contributions of the work are fourfold: (1) a clear separation of orchestration and compute responsibilities, making the Jenkins controller lightweight; (2) a remote, ephemerally‑containerized build model that guarantees reproducibility and environment isolation; (3) immutable, timestamp‑based artifact packaging combined with atomic deployment and automated rollback; and (4) empirical evidence of significant resource savings and performance gains.
Limitations include reliance on a single remote build host, which may become a single point of failure; manual management of Docker image versions and dependency caches; and the need for SSH key distribution and network configuration. Future research directions suggested are multi‑host load balancing, shared image layer caching, integration with GitOps‑style declarative infrastructure, and extending the model to support Kubernetes‑based orchestration for larger scale deployments.
Overall, the paper offers a practical, low‑complexity solution for small‑ to medium‑sized DevOps teams seeking to improve CI/CD scalability, reliability, and maintainability without adopting heavyweight orchestration platforms.