Real time fault detection in 3D printers using Convolutional Neural Networks and acoustic signals

Reading time: 5 minutes
...

📝 Original Info

  • Title: Real time fault detection in 3D printers using Convolutional Neural Networks and acoustic signals
  • ArXiv ID: 2602.16118
  • Date: 2026-02-18
  • Authors: **Not provided (no author information is given in the source)**

📝 Abstract

The reliability and quality of 3D printing processes are critically dependent on the timely detection of mechanical faults. Traditional monitoring methods often rely on visual inspection and hardware sensors, which can be both costly and limited in scope. This paper explores a scalable, contactless approach that uses real-time audio signal analysis to detect mechanical faults in 3D printers. By capturing and classifying acoustic emissions during the printing process, we aim to identify common faults such as nozzle clogging, filament breakage, pulley skipping, and other mechanical issues. Utilizing convolutional neural networks (CNNs), we implement algorithms capable of real-time audio classification to detect these faults promptly. Our methodology involves conducting a series of controlled experiments to gather audio data, followed by the application of advanced machine learning models for fault detection. Additionally, we review existing literature on audio-based fault detection in manufacturing and 3D printing to contextualize our research within the broader field. Preliminary results demonstrate that audio signals, when analyzed with machine learning techniques, provide a reliable and cost-effective means of enhancing real-time fault detection.
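
As a rough illustration of the capture-and-classify loop described in the abstract, the sketch below shows one way such monitoring could be structured in Python. It assumes a hypothetical `classify_window` function backed by an already-trained CNN and a microphone accessible through the `sounddevice` package; the window length, sample rate, and fault labels are illustrative, not values from the paper.

```python
# Sketch of a real-time acoustic monitoring loop (illustrative only).
# Assumes a hypothetical classify_window(audio, sample_rate) -> (label, confidence)
# backed by a trained CNN; sounddevice provides microphone capture.
import sounddevice as sd

SAMPLE_RATE = 44_100      # Hz, typical for audible printer noise
WINDOW_SECONDS = 2.0      # length of each analysis window
FAULT_LABELS = {"normal", "nozzle_clog", "filament_break", "pulley_skip"}

def monitor(classify_window, threshold=0.8):
    """Continuously record short windows and flag likely faults."""
    while True:
        frames = int(WINDOW_SECONDS * SAMPLE_RATE)
        audio = sd.rec(frames, samplerate=SAMPLE_RATE, channels=1)
        sd.wait()  # block until the window has been captured
        label, confidence = classify_window(audio[:, 0], SAMPLE_RATE)
        if label != "normal" and confidence >= threshold:
            print(f"Possible fault: {label} (confidence {confidence:.2f})")
```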

💡 Deep Analysis

📄 Full Content

3D printing, also known as additive manufacturing, is a process of creating three-dimensional objects layer by layer from digital files. This technology has evolved significantly since its inception in the 1980s, transforming from a tool for rapid prototyping to a versatile manufacturing method with applications across various industries. [3] Fault detection in 3D printing is an essential process to identify and monitor errors during additive manufacturing, ensuring the production of high-quality, defect-free components [4]. This is particularly crucial in industries like aerospace and automotive, where precision and reliability are paramount [5]. Effective fault detection not only reduces material waste, maintenance costs, and production time but also enhances safety and reliability in critical applications.

Traditional methods, such as visual inspection and hardware contact-based sensors, have several drawbacks when it comes to fault detection.

Visual inspection by humans, while capable of detecting errors, cannot provide continuous monitoring or real-time correction. This approach is time-consuming, subjective, and prone to human error, especially for complex or small-scale defects [6].

Hardware sensors are likewise limited in their ability to detect faults, as they primarily identify large-scale error modalities while failing to capture smaller or more subtle defects. These sensors typically require direct integration with the 3D printer to obtain precise readings, which can be cumbersome and may disrupt the printer’s normal operation. Additionally, the setup and maintenance of these sensors involve tedious procedures, increasing the overall complexity of implementation. Another major drawback is the high cost of the sensors and their accompanying amplifiers, which restricts their widespread adoption, particularly in budget-conscious manufacturing environments. Moreover, these conventional methods often lack the comprehensive data richness required for real-time monitoring and feedback, reducing their efficiency in dynamic and complex manufacturing processes.

Similarly, camera-based approaches, despite being data-rich and highly versatile, come with their own set of challenges. A single-camera setup may provide only limited visibility of the printing process, potentially missing defects that occur outside its field of view. On the other hand, multi-camera systems, while capable of offering broader coverage, introduce additional hurdles such as increased cost, implementation complexity, and sensitivity to environmental factors like lighting conditions, which can significantly affect detection accuracy [6].

Given the shortcomings of traditional fault detection techniques, researchers [16] are actively investigating real-time audio signal analysis as a more efficient and adaptable alternative. This method offers several distinct advantages, one of the most significant being that it is contactless: unlike intrusive sensor-based techniques, audio sensors do not need to be physically attached to the machine. Audio-based monitoring therefore eliminates the need for physical modifications to the printer, making it a non-invasive, cost-effective solution that enhances system flexibility. Acoustic sensors can capture subtle variations in print patterns by analyzing distinctive sound signatures and extracting relevant features [8].
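
As one concrete example of such "sound signature" features, a log-mel spectrogram is a common representation for acoustic classification. The snippet below computes one with the `librosa` library; the file name and parameter values are illustrative assumptions, not settings from the paper.

```python
# Turn a recorded printer clip into a log-mel spectrogram "sound signature".
# File name and parameter values are illustrative, not taken from the paper.
import librosa
import numpy as np

def log_mel_spectrogram(path, sr=22_050, n_mels=128, n_fft=2048, hop_length=512):
    """Load an audio clip and return its log-scaled mel spectrogram."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, frames)

features = log_mel_spectrogram("printer_clip.wav")
print(features.shape)
```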

Recent advancements include AI-driven approaches, such as convolutional neural networks (CNNs) [5], which classify printing faults automatically, and multi-sensor data acquisition systems that utilize sound, vibration, and current to capture real-time process data. Additionally, improved control and calibration software has refined fault prevention and detection, enabling manufacturers to optimize printing processes and achieve better outcomes. Furthermore, advancements in high-efficiency real-time data acquisition have significantly improved the feasibility of this approach. By leveraging Linux-based systems, researchers can minimize latency and ensure seamless data processing, enabling near-instantaneous fault detection.

Audio analysis not only provides deeper insights into acoustic signal behavior but also facilitates the identification of various failure modes throughout the entire additive manufacturing process. Studies have demonstrated the effectiveness of this technique by collecting, preprocessing, and analyzing audio data streams from 3D-printed samples. Researchers have successfully extracted both time-domain and frequency-domain features under varying layer thicknesses and employed sophisticated preprocessing methods such as Harmonic-Percussive Source Separation (HPSS) to isolate and analyze specific audio components. To further strengthen the application of audio-based fault detection, various machine learning techniques have been explored for analyzing acoustic emission (AE) data in 3D printing, as highlighted by Olowe M et al.
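
To make the HPSS preprocessing and CNN classification steps concrete, here is a minimal sketch that separates harmonic and percussive components with `librosa.effects.hpss` and feeds their stacked mel spectrograms to a small PyTorch CNN. The architecture, two-channel input layout, and four-class output are assumptions chosen for illustration, not the models used in the cited studies.

```python
# Illustrative HPSS preprocessing + small CNN classifier (PyTorch).
# Architecture, input layout, and class count are assumptions, not the paper's model.
import librosa
import numpy as np
import torch
import torch.nn as nn

def hpss_mel(y, sr, n_mels=128):
    """Split a signal into harmonic/percussive parts and stack their mel spectrograms."""
    harmonic, percussive = librosa.effects.hpss(y)
    mels = [
        librosa.power_to_db(librosa.feature.melspectrogram(y=part, sr=sr, n_mels=n_mels))
        for part in (harmonic, percussive)
    ]
    return torch.tensor(np.stack(mels), dtype=torch.float32)  # (2, n_mels, frames)

class FaultCNN(nn.Module):
    """Small CNN over two-channel (harmonic + percussive) mel spectrograms."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 2, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))
```

In practice such a network would first be trained on labeled recordings of normal and faulty prints, and could then serve as the classifier inside a monitoring loop like the one sketched after the abstract.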
