Playing Mastermind With Constant-Size Memory

Reading time: 5 minutes
...

📝 Original Info

  • Title: Playing Mastermind With Constant-Size Memory
  • ArXiv ID: 1110.3619
  • Date: 2023-05-18
  • Authors: Benjamin Doerr, Carola Winzen

📝 Abstract

We analyze the classic board game of Mastermind with $n$ holes and a constant number of colors. A result of Chv\'atal (Combinatorica 3 (1983), 325-329) states that the codebreaker can find the secret code with $\Theta(n / \log n)$ questions. We show that this bound remains valid if the codebreaker may only store a constant number of guesses and answers. In addition to an intrinsic interest in this question, our result also disproves a conjecture of Droste, Jansen, and Wegener (Theory of Computing Systems 39 (2006), 525-544) on the memory-restricted black-box complexity of the OneMax function class.


📄 Full Content

The original Mastermind game is a board game for two players invented in the seventies by Meirowitz. It has pegs of six different colors. The goal of the codebreaker, for brevity called Paul here, is to find a color combination made up by the codemaker (called Carole in the following). He does so by guessing color combinations and receiving information on how close each guess is to Carole's secret code. Paul's aim is to use as few guesses as possible.

For a more precise description, let us call the colors 1 to 6 and write [n] := {1, ..., n} for any n ∈ N. Carole's secret code is a length-4 string of colors, that is, a z ∈ [6]^4. In each iteration, Paul guesses a string x ∈ [6]^4 and Carole replies with a pair (eq(z, x), π(z, x)) of numbers. The first number, eq(z, x), usually indicated via black answer-pegs, is the number of positions in which Paul's and Carole's strings coincide. The second number, π(z, x), usually indicated via white answer-pegs, is the number of additional pegs having the right color but being in the wrong position. Formally, eq(z, x) := |{i ∈ [4] | z_i = x_i}| and π(z, x) := max_{ρ ∈ S_4} |{i ∈ [4] | z_i = x_{ρ(i)}}| − eq(z, x), where S_4 denotes the set of all permutations of [4]. Paul "wins" the game if he guesses Carole's string, that is, if Carole's answer is (4, 0).
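For concreteness, Carole's two answer numbers can be computed directly from these definitions. The following Python sketch (the function name and example values are our own, not from the paper) returns the black-peg count eq(z, x) and the white-peg count π(z, x); it uses the fact that the maximum over all permutations ρ equals the total number of per-color matches.

```python
from collections import Counter

def mastermind_answer(z, x):
    """Return (eq, pi): black and white answer-pegs for guess x against secret z.

    eq(z, x) = number of positions in which z and x coincide.
    pi(z, x) = max_{rho in S_n} |{i : z_i = x_rho(i)}| - eq(z, x); the maximum
               equals the total number of per-color matches, i.e. the sum over
               all colors c of min(#occurrences of c in z, #occurrences of c in x).
    """
    assert len(z) == len(x)
    eq = sum(zi == xi for zi, xi in zip(z, x))
    color_matches = sum((Counter(z) & Counter(x)).values())
    return eq, color_matches - eq

# Example: secret (1, 2, 3, 4), guess (1, 3, 2, 6) -> one black peg, two white pegs.
print(mastermind_answer((1, 2, 3, 4), (1, 3, 2, 6)))  # (1, 2)
```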

We are interested in strategies for Paul that guarantee that he finds the secret code with few questions. We thus adopt a worst-case view with respect to Carole's secret code. This is equivalent to assuming that Carole may change her hidden string at any time as long as it remains consistent with all previous answers (a devil's strategy).
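The devil's strategy itself is easy to simulate: Carole keeps track of all codes that are still consistent with her previous answers and, for each new guess, picks a reply that keeps many candidates alive. The sketch below is our own illustration of this idea (one natural adversarial choice; the paper does not prescribe a particular rule), with the answer computation repeated so the snippet is self-contained.

```python
from collections import Counter
from itertools import product

def answer(z, x):
    """Black/white answer-pegs of guess x against code z, as defined above."""
    eq = sum(zi == xi for zi, xi in zip(z, x))
    return eq, sum((Counter(z) & Counter(x)).values()) - eq

def devil_reply(all_codes, history, guess):
    """Adversarial Carole: answer so that as many codes as possible stay consistent."""
    consistent = [z for z in all_codes
                  if all(answer(z, g) == a for g, a in history)]
    replies = Counter(answer(z, guess) for z in consistent)
    return max(replies, key=replies.get)  # reply keeping the largest consistent set

# Tiny example with 3 positions and 3 colors: Carole never commits to a secret.
codes = list(product(range(1, 4), repeat=3))
history = []
reply = devil_reply(codes, history, (1, 2, 3))
history.append(((1, 2, 3), reply))
print(reply)
```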

Previous Results. The mathematics and computer science literature contains a plethora of results on the Mastermind problem. For the original game with 6 colors and 4 positions, Knuth [Knu77] showed that Paul can always find the secret code within five guesses. For the general game with n positions and a constant number of colors, Chvátal [Chv83] showed that Θ(n / log n) guesses are necessary and sufficient. Our main result, Theorem 1, states that O(n / log n) guesses still suffice when Paul may only store a constant number of guesses and answers; this remains true if we allow Carole to play a devil's strategy and if Carole only reveals the number of fully correct pegs eq(x, z) ("black answer-peg version of Mastermind").

The bound in Theorem 1 is asymptotically tight: a lower bound of Ω(n / log n) holds even without memory restrictions. This follows easily from an information-theoretic argument, cf. [ER63] or [Chv83]. Our result disproves a conjecture of Droste, Jansen, and Wegener [DJW06], who believed that a lower bound of Ω(n log n) should hold for the 2-color black answer-peg Mastermind problem with memory restriction one.

The proof of Theorem 1 is quite technical. For a clearer presentation of the ideas, we first consider the size-two memory-restricted model, cf. Section 3. The proof of Theorem 1 is given in Section 4. Before going into the proofs, in the following section we sketch the connection between Mastermind games and black-box complexities.

In this section, we describe the connection between the Mastermind game and black-box complexity. The reader only interested in the Mastermind result may skip this section without loss.

Roughly speaking, the black-box complexity of a set of functions is the number of function evaluations needed to find the optimum of an unknown member from that set. Since problem-unspecific search heuristics such as randomized hill-climbers, evolutionary algorithms, simulated annealing etc. do optimize by repeatedly generating new search points and evaluating their objective values ("fitness"), the black-box complexity is a lower bound on the efficiency of such general-purpose heuristics [DJW06].

Black-Box Complexity. Let S be a finite set. A (randomized) algorithm following the scheme of Algorithm 1 is called a black-box optimization algorithm for functions S → R.

Algorithm 1: Scheme of a black-box algorithm for optimizing f : S → R

  Initialization: Sample x^(0) according to some probability distribution p^(0) on S and query f(x^(0)).
  For t = 1, 2, 3, ... do:
    Depending on (x^(0), f(x^(0))), ..., (x^(t-1), f(x^(t-1))), choose a probability distribution p^(t) on S and sample x^(t) according to p^(t); query f(x^(t)).
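A minimal Python rendering of this scheme (names like black_box_scheme and sample_next are our own placeholders, not from the paper) makes explicit that, in the unrestricted model, the choice of the next query may depend on the entire history of queries and answers:

```python
def black_box_scheme(f, sample_next, num_queries):
    """Generic black-box query loop in the spirit of Algorithm 1.

    f           : objective function S -> R, accessible only through queries.
    sample_next : callable receiving the history
                  [(x^(0), f(x^(0))), ..., (x^(t-1), f(x^(t-1)))] and returning
                  the next search point x^(t); it models the (possibly
                  randomized) choice of the distribution p^(t).
    num_queries : the scheme itself runs forever; we truncate it for simulation.
    """
    history = []
    for _ in range(num_queries):
        x = sample_next(history)    # sample x^(t) according to p^(t)
        history.append((x, f(x)))   # query f(x^(t)) and record the outcome
    return history
```

Pure random search, for instance, is the special case in which sample_next ignores the history and returns a uniformly random element of S.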

For such an algorithm A and a function f : S → R, let T(A, f) ∈ R ∪ {∞} be the expected number of fitness evaluations until A queries for the first time some x ∈ arg max f. We call T(A, f) the runtime of A for f. For a class F of functions S → R, the A-black-box complexity of F is T(A, F) := sup_{f ∈ F} T(A, f), the worst-case runtime of A on F. For a class 𝒜 of black-box algorithms for functions S → R, the 𝒜-black-box complexity of F is T(𝒜, F) := inf_{A ∈ 𝒜} T(A, F). If 𝒜 is the class of all black-box algorithms, we also call T(𝒜, F) the unrestricted black-box complexity of F.
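To make the runtime definition concrete, one can estimate T(A, f) empirically. The sketch below (again our own illustration, with hypothetical names) measures how many queries pure random search needs on a OneMax-style function f(x) = number of one-bits in x, the function class mentioned in the abstract; the optimum is the all-ones string.

```python
import random

def queries_until_optimum(f, sample_next, optimum_value, cap=10**7):
    """Number of fitness evaluations until an optimum is queried for the first time."""
    history = []
    for t in range(1, cap + 1):
        x = sample_next(history)
        fx = f(x)
        history.append((x, fx))
        if fx == optimum_value:
            return t
    return float("inf")

n = 10
onemax = lambda x: sum(x)  # f(x) = number of one-bits
random_search = lambda history: tuple(random.randint(0, 1) for _ in range(n))

runs = [queries_until_optimum(onemax, random_search, n) for _ in range(100)]
print(sum(runs) / len(runs))  # empirical runtime estimate; expected value is 2^n = 1024 here
```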

As said, the unrestricted black-box complexity is a lower bound for the efficiency of randomized search heuristics optimizing F. Unfortunately, this lower bound is often not very useful. For example, Droste, Jansen, and Wegener [DJW06] observe that the NP-complete MaxClique problem on graphs with n vertices has a black-box complexity of only O(n^2).

Black-Box Algorithms with Bounded Memory. As a possible solution to this issue, Droste, Jansen, and Wegener [DJW06] propose to restrict the memory of the black-box algorithm: an algorithm with a memory of size k may store at most k previously queried search points together with their objective values, and both the choice of the next query and the decision which points to keep may depend only on this stored information.
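Under the abstract's description of the model (the algorithm may only store a constant number of guesses and answers), the query loop above changes only in what state is carried between iterations. The following sketch is our own hedged rendering of a size-k memory restriction, not code from the paper; with k = 1 it corresponds to the memory-restriction-one setting of the disproved conjecture mentioned earlier.

```python
def memory_restricted_scheme(f, sample_next, select_memory, k, num_queries):
    """Black-box query loop whose only state is a memory of at most k (point, value) pairs.

    sample_next(memory)   : chooses the next query from the stored pairs alone.
    select_memory(memory) : after each query, decides which of the (at most k+1)
                            stored pairs to keep; slicing to k enforces the bound.
    """
    memory = []
    for _ in range(num_queries):
        x = sample_next(memory)             # the next guess sees only the memory
        memory.append((x, f(x)))            # query f and add the new pair temporarily
        memory = select_memory(memory)[:k]  # keep at most k pairs for the next round
    return memory
```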


Reference

This content is AI-processed based on open access ArXiv data.
