ShowUI-$π$: The Dexterous Hand of GUIs

Reading time: 2 minutes
...

📝 Original Paper Info

- Title: ShowUI-$π$: Flow-based Generative Models as GUI Dexterous Hands
- ArXiv ID: 2512.24965
- Date: 2025-12-31
- Authors: Siyuan Hu, Kevin Qinghong Lin, Mike Zheng Shou

📝 Abstract

Building intelligent agents capable of dexterous manipulation is essential for achieving human-like automation in both robotics and digital environments. However, existing GUI agents rely on discrete click predictions (x,y), which prohibits free-form, closed-loop trajectories (e.g. dragging a progress bar) that require continuous, on-the-fly perception and adjustment. In this work, we develop ShowUI-$π$, the first flow-based generative model as a GUI dexterous hand, featuring the following designs: (i) Unified Discrete-Continuous Actions, integrating discrete clicks and continuous drags within a shared model, enabling flexible adaptation across diverse interaction modes; (ii) Flow-based Action Generation for drag modeling, which predicts incremental cursor adjustments from continuous visual observations via a lightweight action expert, ensuring smooth and stable trajectories; (iii) Drag Training Data and Benchmark, where we manually collect and synthesize 20K drag trajectories across five domains (e.g. PowerPoint, Adobe Premiere Pro), and introduce ScreenDrag, a benchmark with comprehensive online and offline evaluation protocols for assessing GUI agents' drag capabilities. Our experiments show that proprietary GUI agents still struggle on ScreenDrag (e.g. Operator scores 13.27, and the best, Gemini-2.5-CUA, reaches 22.18). In contrast, ShowUI-$π$ achieves 26.98 with only 450M parameters, underscoring both the difficulty of the task and the effectiveness of our approach. We hope this work advances GUI agents toward human-like dexterous control in the digital world. The code is available at https://github.com/showlab/showui-pi.
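
As a rough illustration of design (i), a unified discrete-continuous action space can be pictured as a single record type that carries either a click target or a sequence of incremental drag deltas. The sketch below is hypothetical (the names `GUIAction` and `apply` are not from the paper) and only conveys the interface idea, not the paper's actual action encoding.

```python
from dataclasses import dataclass
from typing import Literal, Optional, Tuple, List

# Hypothetical action schema: one record type covers both discrete clicks
# and continuous drags, so a single policy can emit either mode.
@dataclass
class GUIAction:
    kind: Literal["click", "drag"]
    # Discrete click: a single normalized (x, y) target.
    point: Optional[Tuple[float, float]] = None
    # Continuous drag: a sequence of incremental cursor deltas (dx, dy)
    # applied step by step while the screen is re-observed.
    deltas: Optional[List[Tuple[float, float]]] = None

def apply(action: GUIAction, cursor: Tuple[float, float]) -> Tuple[float, float]:
    """Replay an action on a virtual cursor (illustrative only)."""
    if action.kind == "click":
        return action.point
    x, y = cursor
    for dx, dy in action.deltas:
        x, y = x + dx, y + dy   # a closed-loop agent would re-observe the screen here
    return (x, y)

# Example: a short drag expressed as three incremental adjustments.
drag = GUIAction(kind="drag", deltas=[(0.01, 0.0), (0.02, 0.0), (0.01, 0.0)])
print(apply(drag, cursor=(0.30, 0.55)))
```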

💡 Summary & Analysis

1. **Beyond discrete clicks**: Existing GUI agents predict discrete (x, y) click targets, which prevents free-form, closed-loop actions such as dragging a progress bar, where the cursor must be adjusted continuously based on on-the-fly perception.
2. **Unified actions with flow-based drag generation**: ShowUI-$π$ handles discrete clicks and continuous drags in a single model; a lightweight flow-based action expert predicts incremental cursor adjustments from continuous visual observations, producing smooth, stable trajectories (see the sketch below).
3. **Drag data and the ScreenDrag benchmark**: The authors collect and synthesize 20K drag trajectories across five domains (e.g. PowerPoint, Adobe Premiere Pro) and introduce ScreenDrag, with both online and offline evaluation protocols.
4. **Strong results at small scale**: Proprietary agents struggle on ScreenDrag (Operator scores 13.27; the best, Gemini-2.5-CUA, reaches 22.18), while ShowUI-$π$ achieves 26.98 with only 450M parameters.
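
To make the flow-based drag step in point 2 concrete, here is a minimal, hypothetical sketch of how a flow-matching action expert could transport Gaussian noise into a cursor delta conditioned on a screen embedding. This is not the paper's implementation: the module, its dimensions, and the 8-step Euler integration are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Minimal sketch of flow-based action generation for one drag step, assuming a
# flow-matching formulation: a small "action expert" predicts a velocity field
# that carries noise toward a cursor delta (dx, dy), conditioned on the current
# visual observation. Module names and sizes are hypothetical.
class ActionExpert(nn.Module):
    def __init__(self, obs_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2 + 1, hidden), nn.GELU(),
            nn.Linear(hidden, 2),   # velocity over the 2-D delta
        )

    def forward(self, obs: torch.Tensor, a_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, a_t, t], dim=-1))

@torch.no_grad()
def sample_delta(expert: ActionExpert, obs: torch.Tensor, steps: int = 8) -> torch.Tensor:
    """Euler-integrate the learned velocity field from noise to a cursor delta."""
    a = torch.randn(obs.shape[0], 2)            # start from Gaussian noise
    for i in range(steps):
        t = torch.full((obs.shape[0], 1), i / steps)
        a = a + expert(obs, a, t) / steps       # a_{t+dt} = a_t + v(a_t, t) * dt
    return a                                    # predicted (dx, dy) for this step

# In a closed loop, the agent would re-encode the screen after each delta and
# call sample_delta again, yielding a smooth, continuously adjusted trajectory.
obs = torch.randn(1, 256)
print(sample_delta(ActionExpert(), obs))
```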



A Note of Gratitude

The copyright of this content belongs to the respective researchers. We deeply appreciate their hard work and contribution to the advancement of human civilization.
