Mojo: MLIR-Based Performance-Portable HPC Science Kernels on GPUs for the Python Ecosystem
📝 Original Info
- Title: Mojo: MLIR-Based Performance-Portable HPC Science Kernels on GPUs for the Python Ecosystem
- ArXiv ID: 2509.21039
- Date: 2025-09-25
- Authors: Not listed in the provided metadata.
📝 Abstract
We explore the performance and portability of the novel Mojo language for scientific computing workloads on GPUs. As the first language based on LLVM's Multi-Level Intermediate Representation (MLIR) compiler infrastructure, Mojo aims to close performance and productivity gaps by combining Python's interoperability with CUDA-like syntax for compile-time portable GPU programming. We target four scientific workloads: a seven-point stencil (memory-bound), BabelStream (memory-bound), miniBUDE (compute-bound), and Hartree-Fock (compute-bound with atomic operations); and compare their performance against vendor baselines on NVIDIA H100 and AMD MI300A GPUs. We show that Mojo's performance is competitive with CUDA and HIP for memory-bound kernels, whereas gaps exist on AMD GPUs for atomic operations and for fast-math compute-bound kernels on both AMD and NVIDIA GPUs. Although the learning curve and programming requirements are still fairly low-level, Mojo can close significant gaps in the fragmented Python ecosystem at the convergence of scientific computing and AI.
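To make the memory-bound workload concrete: a seven-point stencil updates each interior grid point from itself and its six face neighbors in 3-D, so each update reads seven values but performs little arithmetic, which is why throughput is limited by memory bandwidth. The sketch below is an illustrative NumPy version of one Jacobi-style sweep, not the paper's Mojo or CUDA implementation; the coefficients `c0` and `c1` are hypothetical placeholders.

```python
import numpy as np

def stencil7(u, c0=0.5, c1=1.0 / 12.0):
    """One Jacobi-style sweep of a 3-D seven-point stencil.

    Interior points become a weighted sum of the center value (c0)
    and the six face neighbors (c1 each); boundary values are kept.
    Coefficients here are illustrative, not taken from the paper.
    """
    v = u.copy()  # preserve boundary values
    v[1:-1, 1:-1, 1:-1] = (
        c0 * u[1:-1, 1:-1, 1:-1]
        + c1 * (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]    # x neighbors
                + u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]  # y neighbors
                + u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:]) # z neighbors
    )
    return v
```

With the default weights summing to one (0.5 + 6/12), a constant field is a fixed point of the sweep, which is a quick correctness check. A GPU version in Mojo, CUDA, or HIP maps one thread per interior point and performs the same seven reads and one write.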
Reference
This content is AI-processed based on open access ArXiv data.