Engineering Attack Vectors and Detecting Anomalies in Additive Manufacturing
Additive manufacturing (AM) is rapidly integrating into critical sectors such as aerospace, automotive, and healthcare. However, this cyber-physical convergence introduces new attack surfaces, especially at the interface between computer-aided design (CAD) and machine execution layers. In this work, we investigate targeted cyberattacks on two widely used fused deposition modeling (FDM) systems: Creality’s flagship K1 Max and the Ender 3. Our threat model is a multi-layered Man-in-the-Middle (MitM) intrusion, in which the adversary intercepts and manipulates G-code files during upload from the user interface to the printer firmware. The MitM intrusion chain enables several stealthy sabotage scenarios. These attacks remain undetectable by conventional slicer software or runtime interfaces, resulting in structurally defective yet externally plausible printed parts. To counter these stealthy threats, we propose an unsupervised Intrusion Detection System (IDS) that analyzes structured machine logs generated during live printing. Our defense mechanism uses a frozen Transformer-based encoder (a BERT variant) to extract semantic representations of system behavior, followed by a contrastively trained projection head that learns anomaly-sensitive embeddings. Finally, a clustering-based detector and a self-attention autoencoder classify executions as benign or compromised. Experimental results demonstrate that our approach effectively distinguishes between benign and compromised executions.
💡 Research Summary
The paper addresses the emerging cybersecurity challenges in additive manufacturing (AM), focusing on the vulnerable data path between computer‑aided design (CAD) software and the printer’s firmware. The authors select two widely used fused deposition modeling (FDM) printers—Creality’s K1 Max and the Ender 3—because they combine low‑cost hardware, open‑source firmware, and networked interfaces (USB and Wi‑Fi), making them attractive targets for attackers.
Threat model
A multi‑layer Man‑in‑the‑Middle (MitM) adversary intercepts the G‑code file as it is uploaded from the slicer (e.g., Cura) to the printer. The attacker can silently modify the G‑code without triggering any alerts in the slicer’s preview or the printer’s runtime UI. Three concrete sabotage scenarios are demonstrated:
- Structural weakening – subtle changes to layer height and extrusion rate create an uneven infill, dramatically reducing part strength while preserving external geometry.
- Thermo‑mechanical sabotage – selective spikes in nozzle temperature and travel speed cause material degradation, leading to internal voids or micro‑cracks.
- Positional tampering – abrupt head movements near the end of the print introduce hidden misalignments that are invisible in the final visual inspection.
All attacks are executed at the G‑code instruction level, making them invisible to conventional detection tools. Physical testing shows that printed parts appear normal to the naked eye but fail mechanical testing, highlighting the risk for safety‑critical sectors such as aerospace, automotive, and healthcare.
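To make the instruction-level nature of these attacks concrete, the sketch below shows how a MitM adversary could implement the structural-weakening scenario by uniformly scaling down the extrusion amount (the `E` parameter of `G1` moves) while leaving the X/Y/Z toolpath untouched. This is an illustrative reconstruction, not the authors' actual attack code; the function name and the 15 % under-extrusion factor are assumptions for demonstration.

```python
import re

def weaken_extrusion(gcode_lines, factor=0.85):
    """Scale down the extrusion amount (E parameter) of G1 moves.

    Illustrative sketch of the structural-weakening attack: because only
    E values change, the external geometry (X/Y/Z path) is preserved and
    the tampered file still looks normal in a slicer preview.
    """
    tampered = []
    for line in gcode_lines:
        if line.startswith("G1") and " E" in line:
            line = re.sub(
                r"E(-?\d+\.?\d*)",
                lambda m: f"E{float(m.group(1)) * factor:.5f}",
                line,
            )
        tampered.append(line)
    return tampered

original = ["G1 X10.0 Y10.0 E0.40000", "G0 X0 Y0"]
print(weaken_extrusion(original))
# → ['G1 X10.0 Y10.0 E0.34000', 'G0 X0 Y0']
```

Because each bead is thinned by the same factor, the part's outer shell can still fuse acceptably while the infill bonds weakly, matching the "externally plausible but structurally defective" outcome described above.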
Proposed defense
To counter these stealthy attacks, the authors design an unsupervised intrusion detection system (IDS) that leverages structured machine logs generated during printing. Each log entry contains fields such as command ID, timestamp, nozzle temperature, motor steps, and extrusion amount. The detection pipeline consists of four stages:
- Tokenization and embedding – logs are tokenized and fed into a frozen Transformer encoder (a BERT‑style model) that captures the semantic and temporal relationships among commands without fine‑tuning the large language model.
- Contrastive projection – a projection head is trained with contrastive learning, pulling embeddings of normal logs together while pushing apart embeddings of tampered logs. This creates an anomaly‑sensitive latent space.
- Clustering – the contrastive embeddings are clustered (e.g., K‑means). The distance of a new log’s embedding to the nearest cluster centroid, combined with intra‑cluster variance, yields an anomaly score.
- Self‑attention autoencoder – a second validation layer reconstructs the embedding using a self‑attention autoencoder. High reconstruction error further flags suspicious behavior, providing a double‑check mechanism.
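The clustering stage of the pipeline can be sketched as follows. This is a minimal stand-in, not the paper's implementation: random vectors substitute for the contrastive embeddings produced by the frozen BERT-style encoder and projection head, K-means is reimplemented as a short Lloyd's loop, and the anomaly score is simply the distance to the nearest centroid of the normal clusters (the paper additionally combines this with intra-cluster variance and the autoencoder's reconstruction error).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for contrastive embeddings of normal print logs (in the paper,
# these come from the frozen BERT-style encoder + projection head).
normal = rng.normal(0.0, 1.0, size=(500, 32))

def kmeans(points, k=8, iters=20, seed=0):
    """Minimal Lloyd's algorithm; returns the cluster centroids."""
    r = np.random.default_rng(seed)
    centroids = points[r.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids (keep the old one if a cluster empties).
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

centroids = kmeans(normal)

def anomaly_score(embedding, centroids):
    """Stage-3 score: distance to the nearest centroid of the normal clusters."""
    return np.linalg.norm(centroids - embedding, axis=1).min()

benign = rng.normal(0.0, 1.0, size=32)    # drawn from the normal manifold
tampered = rng.normal(5.0, 1.0, size=32)  # simulated distribution shift
print(anomaly_score(benign, centroids) < anomaly_score(tampered, centroids))
```

The key design point is that only benign logs are needed to fit the centroids, which is what makes the detector unsupervised: tampered executions are flagged because they fall far from the learned normal manifold, not because labeled attack data was seen during training.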
Experimental evaluation
The authors collect over 10 000 normal print logs and 1 200 tampered logs (400 per attack scenario) across multiple materials (PLA, PETG) and printing parameters. Using accuracy, precision, recall, and F1‑score as metrics, the proposed system achieves an average accuracy of 96.8 %, precision of 95.2 %, recall of 94.5 %, and F1‑score of 94.8 %. This outperforms a baseline LSTM‑based IDS (≈85 % accuracy) by more than 10 percentage points. Notably, the system remains highly sensitive to the subtle structural‑weakening attacks, detecting 93 % of such cases.
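As a quick consistency check on the reported numbers, the F1-score is the harmonic mean of precision and recall, and the reported values do line up:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported averages from the evaluation (in percent).
print(round(f1(95.2, 94.5), 1))  # → 94.8, matching the reported F1-score
```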
Implications and future work
The study demonstrates that MitM manipulation of G‑code is a realistic and potent attack vector for FDM printers, and that conventional slicer or runtime monitoring cannot reliably detect it. By exploiting the rich, structured logs that printers already emit, a Transformer‑based unsupervised IDS can learn the normal operational manifold and flag deviations without requiring labeled attack data. The authors acknowledge limitations: the current model is tuned to Creality’s log format, and broader generalization to other firmware ecosystems remains to be validated. Future research directions include real‑time alerting and automated recovery actions, cross‑printer model transfer learning, and multimodal anomaly detection that fuses visual, acoustic, and electrical signatures.
Overall, the paper provides a comprehensive threat analysis for the CAD‑to‑printer pipeline, introduces a novel, high‑performing detection framework, and offers practical guidance for securing additive manufacturing systems against stealthy cyber‑physical sabotage.