Text to Robotic Assembly of Multi-Component Objects using 3D Generative AI and Vision Language Models

Reading time: 1 minute

📝 Original Info

  • Title: Text to Robotic Assembly of Multi-Component Objects using 3D Generative AI and Vision Language Models
  • ArXiv ID: 2511.02162
  • Date: 2025-11-04
  • Authors: Not provided (author information is not listed in the paper)

📝 Abstract

Advances in 3D generative AI have enabled the creation of physical objects from text prompts, but challenges remain in creating objects involving multiple component types. We present a pipeline that integrates 3D generative AI with vision-language models (VLMs) to enable the robotic assembly of multi-component objects from natural language. Our method leverages VLMs for zero-shot, multi-modal reasoning about geometry and functionality to decompose AI-generated meshes into multi-component 3D models using predefined structural and panel components. We demonstrate that a VLM is capable of determining which mesh regions need panel components in addition to structural components, based on the object's geometry and functionality. Evaluation across test objects shows that users preferred the VLM-generated assignments 90.6% of the time, compared to 59.4% for rule-based and 2.5% for random assignment. Lastly, the system allows users to refine component assignments through conversational feedback, enabling greater human control and agency in making physical objects with generative AI and robotics.
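The assignment step described in the abstract lends itself to a compact illustration. The sketch below shows one way such a zero-shot VLM query could be structured: a rendered view of the AI-generated mesh is sent to a vision-capable chat model together with the object's text prompt, and the model returns a per-region decision between structural-only and panel-plus-structural construction. The OpenAI-style client, the `gpt-4o` model name, the prompt wording, and the assumption that the mesh arrives pre-segmented into numbered regions are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of zero-shot VLM component assignment.
# Assumes: an OpenAI-style vision chat API, a pre-rendered view of the
# generated mesh, and a mesh already segmented into numbered regions.
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode a rendered view of the generated mesh."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def assign_components(render_path: str, region_ids: list[int],
                      object_prompt: str) -> dict[int, str]:
    """Ask the VLM which mesh regions need panel components in addition
    to structural ones, reasoning over geometry and function."""
    instructions = (
        f"The object is: '{object_prompt}'. Its mesh is split into numbered "
        f"regions {region_ids}, labeled in the attached render. For each "
        "region, decide whether it can be built from structural components "
        "alone, or also needs a panel component (e.g., a flat enclosing or "
        "load-bearing surface). Answer as a JSON object mapping each region "
        "id to 'structural' or 'panel'."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": instructions},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/png;base64,{encode_image(render_path)}"}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    answer = json.loads(response.choices[0].message.content)
    return {int(region): kind for region, kind in answer.items()}


# Example: a generated side table segmented into four legs and a top.
# assign_components("table_render.png", [0, 1, 2, 3, 4], "a small side table")
# -> {0: "structural", 1: "structural", 2: "structural", 3: "structural", 4: "panel"}
```

Under this framing, the conversational refinement described in the abstract reduces to appending a follow-up user message (e.g., "region 4 should be structural only") to the same message history and re-parsing the updated JSON.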


Reference

This content is AI-processed based on open access ArXiv data.
