Merge and Conquer: Evolutionarily Optimizing AI for 2048
Reading time: 2 minutes
...
📝 Original Info
- Title: Merge and Conquer: Evolutionarily Optimizing AI for 2048
- ArXiv ID: 2510.20205
- Date: 2025-10-23
- Authors: Not available (no author information provided in the paper)
📝 Abstract
Optimizing artificial intelligence (AI) for dynamic environments remains a fundamental challenge in machine learning research. In this paper, we examine evolutionary training methods for optimizing AI to solve the game 2048, a 2D sliding puzzle. 2048, with its mix of strategic gameplay and stochastic elements, presents an ideal playground for studying decision-making, long-term planning, and dynamic adaptation. We implemented two distinct systems: a two-agent metaprompting system where a "thinker" large language model (LLM) agent refines gameplay strategies for an "executor" LLM agent, and a single-agent system based on refining a value function for a limited Monte Carlo Tree Search. We also experimented with rollback features to avoid performance degradation. Our results demonstrate the potential of evolutionary refinement techniques in improving AI performance in non-deterministic environments. The single-agent system achieved substantial improvements, with an average increase of 473.2 points per cycle, and with clear upward trends (correlation $\rho = 0.607$) across training cycles. The LLM's understanding of the game grew as well, shown in its development of increasingly advanced strategies. Conversely, the two-agent system did not garner much improvement, highlighting the inherent limits of meta-prompting.
💡 Deep Analysis
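The evolutionary refinement loop with rollback described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `play_2048` episode scorer is a hypothetical stand-in (a real evaluation would run 2048 with a limited Monte Carlo Tree Search guided by the value function), and the weight vector, mutation scale, and cycle counts are all assumed for the example.

```python
import random

def play_2048(value_fn, seed):
    # Hypothetical stand-in for a full 2048 episode: scores a weight
    # vector on a randomized run. A real evaluation would play the game
    # using a limited Monte Carlo Tree Search guided by value_fn.
    rng = random.Random(seed)
    return sum(w * rng.uniform(0.5, 1.5) for w in value_fn)

def avg_score(value_fn, episodes):
    # Average over fixed seeds so candidate and incumbent are compared
    # on the same set of stochastic episodes.
    return sum(play_2048(value_fn, s) for s in range(episodes)) / episodes

def mutate(value_fn, rng):
    # Perturb one weight of the value function (assumed mutation scheme).
    i = rng.randrange(len(value_fn))
    out = list(value_fn)
    out[i] += rng.gauss(0, 0.1)
    return out

def evolve(cycles=50, episodes=20, seed=0):
    rng = random.Random(seed)
    best = [1.0, 1.0, 1.0]  # hypothetical value-function weights
    best_score = avg_score(best, episodes)
    for _ in range(cycles):
        cand = mutate(best, rng)
        score = avg_score(cand, episodes)
        if score > best_score:
            best, best_score = cand, score  # keep the improvement
        # else: rollback — discard the candidate, keep the prior best,
        # avoiding the performance degradation the paper mentions
    return best, best_score
```

The rollback step is what prevents regression: a mutated value function is adopted only when its average score over the evaluation episodes beats the incumbent, so performance per cycle is monotonically non-decreasing.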
📄 Full Content
This content is AI-processed based on open access ArXiv data.