Ceci N'est Pas un Drone: Investigating the Impact of Design Representation on Design Decision Making When Using GenAI

Notice: This research summary and analysis were automatically generated using AI technology. For complete accuracy, please refer to the [Original Paper Viewer] below or the original arXiv source.

With generative AI-powered design tools, designers and engineers can efficiently generate large numbers of design ideas. However, efficient exploration of these ideas requires designers to select a smaller group of potential solutions for further development. Therefore, the ability to judge and evaluate designs is critical for the successful use of generative design tools. Different design representation modalities can potentially affect designers’ judgments. This work investigates how different design modalities, including visual rendering, numerical performance data, and a combination of both, affect designers’ design selections from AI-generated design concepts for Uncrewed Aerial Vehicles. We found that different design modalities do affect designers’ choices. Unexpectedly, we found that providing only numerical design performance data can lead to the best ability to select optimal designs. We also found that participants prefer visually conventional designs with axis-symmetry. The findings of this work provide insights into the interaction between human users and generative design systems.


💡 Research Summary

This paper investigates how the modality of design representation influences designers’ ability to select optimal concepts generated by generative AI (GenAI) in the context of Uncrewed Aerial Vehicles (UAVs). The authors focus on three presentation conditions: (1) visual renderings only, (2) numerical performance data only, and (3) a combination of both. Two within‑subjects experiments were conducted with distinct populations—drone hobbyists (experts) and STEM students (novices)—each evaluating the same set of thirty AI‑generated drone concepts. In the visual‑only condition participants saw high‑resolution 3D renderings; in the numerical‑only condition they received a table of key metrics (flight time, payload, energy efficiency, etc.); the mixed condition presented both simultaneously. Participants were asked to select the top five designs within a limited time, and the selected designs were compared against an objective optimality score derived from simulation.
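The comparison between participants' picks and the simulation-derived optimality score can be sketched as a top-k overlap measure. This is a minimal illustration only: the scoring scheme, data, and overlap-based metric are assumptions for clarity, not the paper's actual protocol.

```python
# Hedged sketch: measuring top-5 selection accuracy against an
# objective optimality score. All data and the overlap-based metric
# below are illustrative assumptions, not the paper's actual method.

def selection_accuracy(selected_ids, optimality_scores, k=5):
    """Fraction of a participant's k picks that fall in the true top k."""
    ranked = sorted(optimality_scores, key=optimality_scores.get, reverse=True)
    true_top_k = set(ranked[:k])
    return len(set(selected_ids) & true_top_k) / k

# 30 candidate designs with simulated optimality scores (fabricated here).
scores = {f"design_{i}": 100 - 3 * i for i in range(30)}

# A hypothetical participant's five selections.
picks = ["design_0", "design_1", "design_2", "design_3", "design_11"]
print(selection_accuracy(picks, scores, k=5))  # 0.8
```

Under this metric, four of the five hypothetical picks land in the true top five, giving an accuracy of 0.8.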

The results overturn the common assumption that visual information aids decision making. Designs evaluated with numerical data alone yielded the highest selection accuracy (≈78% for experts, ≈73% for students), outperforming both the mixed condition (≈62%/58%) and the visual‑only condition. While visual renderings increased subjective satisfaction, they introduced a bias toward conventional, axis‑symmetric drones, causing participants to overlook higher‑performing but “weird” or asymmetrical configurations. Moreover, an overload effect was observed: as the number of candidate designs exceeded about twenty, accuracy dropped sharply across all conditions.

From these findings the authors derive several practical implications. First, decision‑support interfaces for generative design should prioritize quantitative performance metrics and treat visual renderings as supplemental, not primary, information. Second, visual cues should be presented in a way that does not dominate attention—e.g., using subtle visualizations or side‑by‑side comparisons—while performance data should be clearly visualized (color coding, charts) to reduce cognitive load. Third, designers should be shielded from excessive candidate sets through pre‑filtering, clustering, or ranking mechanisms to mitigate the overload effect.
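The third recommendation, shielding designers from oversized candidate sets, can be sketched as a simple score-based pre-filter. This is a hypothetical illustration: the field names and data are invented, and the cap of 20 merely echoes the overload threshold reported above.

```python
# Hedged sketch of the pre-filtering recommendation: cap the candidate
# set shown to a designer at the top-k by a quantitative metric before
# any visual review. Field names and data are illustrative assumptions.

def prefilter(candidates, score_key, k=20):
    """Return the k highest-scoring candidates to avoid choice overload."""
    return sorted(candidates, key=lambda c: c[score_key], reverse=True)[:k]

# 30 hypothetical drone candidates with a fabricated performance metric.
candidates = [{"id": i, "flight_time_min": 10 + (i * 7) % 25} for i in range(30)]

shortlist = prefilter(candidates, "flight_time_min", k=20)
print(len(shortlist))  # 20
```

Clustering or ranking interfaces would replace the plain `sorted` call here, but the principle is the same: the designer only ever sees a tractable, performance-screened subset.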

The paper acknowledges limitations: the study is confined to UAVs, the visual rendering quality (lighting, material) was not systematically varied, and the participant pool, though diverse, remains modest. Future work is suggested to explore other engineering domains, richer multimodal presentations (interactive simulations, animations), and to model individual differences in expertise and visual preference. Overall, the study contributes empirical evidence that, contrary to intuition, numerical performance data alone can lead to more optimal design selections when using AI‑generated concepts, and that visual representations may introduce bias that hampers the discovery of innovative, high‑performing solutions.

