A.R.I.S.: Automated Recycling Identification System for E-Waste Classification Using Deep Learning
Traditional electronic recycling processes suffer from significant resource loss due to inadequate material separation and identification capabilities. We present A.R.I.S. (Automated Recycling Identification System), a low-cost, portable sorter for shredded e-waste that addresses this efficiency gap. The system employs a YOLOx model to classify metals, plastics, and circuit boards in real time, achieving low inference latency with high detection accuracy. Experimental evaluation yielded 90% overall precision, 82.2% mean average precision (mAP), and 84% sortation purity. By integrating deep learning with established sorting methods, A.R.I.S. enhances material recovery efficiency and lowers barriers to advanced recycling adoption. This work complements broader initiatives in extending product life cycles, supporting trade-in and recycling programs, and reducing environmental impact across the supply chain.
💡 Research Summary
The paper introduces A.R.I.S. (Automated Recycling Identification System), a low‑cost, portable sorting platform that uses deep‑learning‑based computer vision to classify shredded electronic waste (e‑waste) into three primary material categories: metals, plastics, and circuit boards. The authors argue that conventional e‑waste recycling suffers from substantial resource loss because mechanical and sensor‑based separation methods can only distinguish broad material groups and often require expensive, highly calibrated equipment that is inaccessible to smaller recyclers. To bridge this gap, A.R.I.S. integrates a state‑of‑the‑art YOLOx object detector with a programmable logic controller (PLC) and a pneumatic paddle sorter, delivering real‑time, high‑throughput material diversion.
Dataset and Annotation
A proprietary dataset was built from shredded desktop and portable computers sourced from manufacturing facilities. After manual removal of batteries and pre‑sorting of components, the material was shredded using a 1‑inch screen, yielding particles small enough to be captured by three synchronized RGB cameras. The cameras (Basler acA1920‑155uc) produce a stitched image of 5760 × 1200 pixels, which the authors split into three 1920 × 1200 segments, each resized to 640 × 640 for model input. The final corpus contains 6 000 images with 15 500 annotated instances (≈5 000 metals, 5 500 circuit boards, 5 000 plastics). Annotation followed the YOLO format and employed a semi‑automated pipeline: an initial manually‑labeled subset trained a provisional model, which then auto‑annotated the remaining images; human reviewers corrected labeling errors and missed detections, iterating until the full set was labeled. Data augmentation (brightness, rotation, noise) was applied to improve robustness against lighting and pose variations.
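The split-and-resize step described above can be sketched as follows. This is a minimal illustration using NumPy with a nearest-neighbour resize standing in for whatever interpolation the production pipeline uses; the 5760 × 1200 → three 640 × 640 segment geometry is from the paper, the function name is ours.

```python
import numpy as np

def split_and_resize(stitched: np.ndarray, target: int = 640) -> np.ndarray:
    """Split a 5760x1200 stitched frame into three 1920x1200 camera
    segments and resize each to target x target for model input.
    Nearest-neighbour indexing stands in for the production resizer."""
    h, w, _ = stitched.shape              # expect (1200, 5760, 3)
    seg_w = w // 3                        # 1920 px per camera view
    segments = []
    for i in range(3):
        seg = stitched[:, i * seg_w:(i + 1) * seg_w]
        rows = np.arange(target) * h // target       # sample 640 of 1200 rows
        cols = np.arange(target) * seg_w // target   # sample 640 of 1920 cols
        segments.append(seg[rows][:, cols])
    return np.stack(segments)             # (3, 640, 640, 3) batch

batch = split_and_resize(np.zeros((1200, 5760, 3), dtype=np.uint8))
print(batch.shape)  # (3, 640, 640, 3)
```

Batching the three segments this way lets one forward pass cover the full belt width.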
Hardware Architecture
The physical system comprises:
- Imaging – three Basler cameras mounted 550 mm above a 64‑inch conveyor belt, each delivering up to 155 FPS. Uniform illumination is provided by a 1.6 m LED light bar with diffusers positioned at a 30° angle.
- Material Delivery – a vibratory feeder distributes shredded particles in a single layer onto a green‑colored belt moving at ~1.2 m/s, ensuring each fragment is fully visible.
- Sorting Mechanism – a repurposed agricultural pneumatic paddle sorter with 64 paddles spaced 1 inch apart. Each paddle actuates in 20 ms and returns in another 20 ms, allowing a maximum of 25 actuations per second.
- Control – a Siemens S7‑1200 PLC serves as an OPC‑UA server, handling real‑time I/O for the paddles and receiving inference results from a Mac mini (edge compute). A FIFO queue and finite‑state machine guarantee deterministic timing.
- Computation – the Mac mini runs the YOLOx model via CoreML, interfacing with the cameras through Thunderbolt and communicating with the PLC over Ethernet.
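The FIFO-queue scheduling that coordinates detections with paddle actuation can be sketched roughly as below. Only the belt speed (~1.2 m/s), the 1-inch paddle pitch, and the FIFO idea are from the paper; the camera-to-paddle distance, function name, and coordinate convention are illustrative assumptions, not the authors' PLC logic.

```python
from collections import deque

BELT_SPEED_MM_S = 1200      # ~1.2 m/s belt speed (from the paper)
CAMERA_TO_PADDLE_MM = 900   # ASSUMED camera-to-paddle travel distance
PADDLE_PITCH_MM = 25.4      # 64 paddles spaced 1 inch apart

def schedule(detections, t_capture_s):
    """Map detections, given as (x_mm across the belt, y_mm along the
    belt within the frame), to (paddle index, fire time in seconds).
    Results are enqueued FIFO for the PLC's state machine to consume."""
    queue = deque()
    for x_mm, y_mm in detections:
        paddle = int(x_mm // PADDLE_PITCH_MM)        # paddle covering x
        travel_mm = CAMERA_TO_PADDLE_MM - y_mm       # distance left to travel
        fire_at = t_capture_s + travel_mm / BELT_SPEED_MM_S
        queue.append((paddle, round(fire_at, 4)))
    return queue

print(schedule([(30.0, 0.0)], 0.0))  # deque([(1, 0.75)])
```

With a 20 ms out-stroke and 20 ms return, each paddle has a 40 ms duty cycle, which matches the stated ceiling of 25 actuations per second.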
Model and Inference Pipeline
YOLOx was selected for its anchor‑free design and decoupled classification/regression heads, which are well‑suited for irregularly shaped e‑waste fragments. The model was initialized with COCO pretrained weights and fine‑tuned on the custom dataset. During inference, the stitched frame is partitioned, each segment resized, and all three segments are batched together for a single forward pass.
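After the batched forward pass, per-segment detections must be mapped back into stitched-frame pixel coordinates so they line up with physical belt positions. A minimal sketch, assuming boxes come out in 640 × 640 model space as (x, y, w, h); the three-segment layout and resize geometry are from the paper, the decoding convention is an assumption:

```python
SEG_W, SEG_H, MODEL_IN = 1920, 1200, 640
SX = SEG_W / MODEL_IN   # 3.0: horizontal model-to-segment scale
SY = SEG_H / MODEL_IN   # 1.875: vertical model-to-segment scale

def merge_detections(per_segment_boxes):
    """Rescale per-segment boxes (x, y, w, h in 640x640 model space)
    and shift them by each segment's horizontal offset, yielding
    coordinates in the full 5760x1200 stitched frame."""
    merged = []
    for seg_idx, boxes in enumerate(per_segment_boxes):
        for x, y, w, h in boxes:
            merged.append((x * SX + seg_idx * SEG_W,  # shift into stitched frame
                           y * SY, w * SX, h * SY))
    return merged

# A box at the origin of the middle segment lands at x = 1920 px.
print(merge_detections([[], [(0, 0, 10, 10)], []]))
```

Stitched-frame coordinates can then feed the paddle scheduler directly, since each paddle corresponds to a fixed horizontal band of the belt.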