The (strong) Bruhat order on permutations is a partial order defined as follows: two permutations are comparable if one can be obtained from the other by a sequence of transpositions that each increase the number of inversions by $1$. Given two random permutations, what is the probability that they are comparable in the Bruhat order? This problem was first considered in a 2006 work of Hammett and Pittel, which showed an exponential lower bound and a polynomial upper bound. The lower bound was very recently improved to the subexponential bound $\exp(-n^{1/2 + o(1)})$ by Boretsky, Cornejo, Hodges, Horn, Lesnevich, and McAllister. Hammett and Pittel predicted that the probability should decrease polynomially. We show that the probability in fact decreases faster than any polynomial and is of order $\exp(-\Theta(\log^2 n))$.
The (strong) Bruhat order is a partial order on the symmetric group: two permutations are comparable if one can be obtained from the other by a sequence of transpositions that each increase the number of inversions by $1$. An easy-to-check criterion for comparability is the so-called $(0,1)$-matrix criterion: given permutations $\pi$ and $\tau$, let $M_\pi$ and $M_\tau$ be the corresponding permutation matrices. Then the Bruhat order $\le$ is characterized via [2, Thm. 2.1.5]
$$\pi \le \tau \iff \sum_{i \le a,\, j \le b} M_\tau(i, j) \le \sum_{i \le a,\, j \le b} M_\pi(i, j) \quad \text{for all } a, b \le n. \tag{1}$$
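As a quick illustration of (1), take $n = 3$ with $\pi = 213$ and $\tau = 312$ in one-line notation, and write $P_\sigma(a, b) = \sum_{i \le a,\, j \le b} M_\sigma(i, j)$ (auxiliary notation for the partial sums appearing in (1)). With rows indexed by $a$ and columns by $b$,
$$\big(P_\pi(a, b)\big)_{a, b = 1}^{3} = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{pmatrix}, \qquad \big(P_\tau(a, b)\big)_{a, b = 1}^{3} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 2 \\ 1 & 2 & 3 \end{pmatrix},$$
and since $P_\tau(a, b) \le P_\pi(a, b)$ entrywise, the criterion gives $213 \le 312$. Indeed, $312$ is obtained from $213$ by transposing the entries in positions $1$ and $3$, which raises the number of inversions from $1$ to $2$.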
The Bruhat order is a central object in algebraic combinatorics and plays a leading role in the study of Schubert varieties. This line of work began with Ehresmann's 1934 work [6], which included a characterization of the Bruhat order in terms of the increasing rearrangements of $\pi$ and $\tau$ that is equivalent to (1). There are many further equivalent descriptions of the Bruhat order [1,4,5,8,12], including criteria of a combinatorial flavor similar to (1) as well as more algebraic ones.
In this note, we study the probability that two random permutations are comparable. In particular, if we take $\pi$ and $\tau$ to be independent and uniformly chosen from the symmetric group $S_n$, what is the probability that $\pi \le \tau$? This problem was introduced in a 2006 work of Hammett and Pittel [9], which showed $c(0.708)^n \le \mathbb{P}(\pi \le \tau) \le C n^{-2}$ for constants $C, c > 0$. Very recently$^1$, Boretsky-Cornejo-Hodges-Horn-Lesnevich-McAllister [3] improved the lower bound to the form $\exp(-c \sqrt{n} \log^{3/2} n)$. In their work, Hammett and Pittel stated that “Empirical estimates…suggest that $\mathbb{P}(\pi \le \tau)$ is of order $n^{-(2+\delta)}$ for $\delta$ close to $0.5$.” Our main theorem shows that in fact the probability decreases faster than any polynomial, at the rate $\exp(-\Theta(\log^2 n))$:
Theorem 1. Let $\pi, \tau \in S_n$ be independently and uniformly chosen at random. There are constants $C, c > 0$ so that for $n$ sufficiently large we have
$$\exp\!\left(-C \log^2 n\right) \le \mathbb{P}(\pi \le \tau) \le \exp\!\left(-c \log^2 n\right).$$

$^1$ The work [3] appeared on arXiv while the present work was nearing completion. The main focus of [3] is showing that if one considers a different order on $S_n$, known as the weak Bruhat order $\le_W$, then $\mathbb{P}(\pi \le_W \tau) = \exp(-(1/2 + o(1)) n \log n)$.
The main idea is to define
$$Z(a, b) = \sum_{i \le a,\, j \le b} \big( M_\pi(i, j) - M_\tau(i, j) \big);$$
then (1) may be rewritten as
$$Z(a, b) \ge 0 \quad \text{for all } a, b \le n. \tag{2}$$
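As a purely illustrative sketch (our own, not taken from the paper; the function names and the Monte Carlo experiment are ours), the following Python snippet checks (2) directly from two one-line permutations and uses it to estimate $\mathbb{P}(\pi \le \tau)$ for small $n$.

```python
import random

def bruhat_leq(pi, tau):
    """Check whether pi <= tau in the (strong) Bruhat order via criterion (2).

    pi and tau are 0-indexed one-line notations, e.g. [1, 0, 2] represents 213.
    Requires Z(a, b) = #{i <= a : pi(i) <= b} - #{i <= a : tau(i) <= b} >= 0 for all a, b.
    """
    n = len(pi)
    count_pi = [0] * n   # count_pi[v] = 1 once value v has appeared among the first a entries of pi
    count_tau = [0] * n
    for a in range(n):
        count_pi[pi[a]] += 1
        count_tau[tau[a]] += 1
        partial_pi = partial_tau = 0
        for b in range(n):
            partial_pi += count_pi[b]     # #{i <= a : pi(i) <= b}
            partial_tau += count_tau[b]   # #{i <= a : tau(i) <= b}
            if partial_pi - partial_tau < 0:  # Z(a+1, b+1) < 0, so (2) fails
                return False
    return True

def estimate_comparable_probability(n, trials=20_000, seed=0):
    """Monte Carlo estimate of P(pi <= tau) for independent uniform pi, tau in S_n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pi = rng.sample(range(n), n)    # uniform random permutation
        tau = rng.sample(range(n), n)
        if bruhat_leq(pi, tau):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for n in (4, 6, 8, 10):
        print(n, estimate_comparable_probability(n))
```

The nested loop computes the partial sums in (1) incrementally, so a single comparability check costs $O(n^2)$ time.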
We treat (2) as a two-dimensional persistence event. As a useful point of comparison, a classical persistence event asks for the probability that a simple random walk is non-negative for its first $n$ steps. A simple random walk has only one time index, while (2) has two, and so there is no combinatorial approach as simple as in the case of a simple random walk. However, two-dimensional persistence problems for Gaussian processes, such as the Brownian sheet, have received a fair amount of attention. Our approaches to the upper and lower bounds are inspired by these methods from probability theory, and in fact our proof of the upper bound uses some of the machinery for analyzing Gaussian processes to bound our problem (2) in terms of an analogous Gaussian problem.
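For a quantitative point of comparison (a classical fact recalled here, not part of the present argument): for the simple random walk $S_k = X_1 + \cdots + X_k$ with i.i.d. uniform signs $X_i \in \{\pm 1\}$ one has
$$\mathbb{P}\big(S_1 \ge 0, S_2 \ge 0, \dots, S_{2n} \ge 0\big) = \binom{2n}{n} 2^{-2n} \sim \frac{1}{\sqrt{\pi n}},$$
so the one-dimensional persistence probability decays only polynomially, in contrast to the $\exp(-\Theta(\log^2 n))$ behavior in Theorem 1.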
We prove the upper and lower bounds separately in Sections 3 and 4, as Theorem 6 and Theorem 13 respectively. We begin with a sketch of our arguments along with a more detailed conjecture about the behavior of $\mathbb{P}(\pi \le \tau)$ (see (6)).
1.1. Proof outline. To first see heuristically that $\exp(-\Theta(\log^2 n))$ is the correct order of the probability of the event in (2), note that $Z(a, b)$ is a mean-zero random variable with variance $\approx ab/n$. Further, while $Z(a, b)$ is not strictly a sum of independent random variables, one may show that $Z(a, b)$ obeys a central limit theorem and so is close to a Gaussian random variable of variance $\approx ab/n$. As a consequence, $\mathbb{P}(Z(a, b) \ge 0) \approx 1/2$ for each fixed $(a, b)$ (once $ab/n \gg 1$). When $(a_1, b_1)$ and $(a_2, b_2)$ are close together, the random variables $Z(a_1, b_1)$ and $Z(a_2, b_2)$ are quite correlated. The idea is to find a set of $(a, b)$ so that all pairwise correlations are uniformly bounded away from $1$. In particular, if we consider $S_0 = \{\rho^i : i \in \{0, 1, 2, \ldots\}\} \cap [n]$ for some fixed integer $\rho \ge 2$, then for all pairs $(a_1, b_1), (a_2, b_2) \in S_0^2$ with $(a_1, b_1) \ne (a_2, b_2)$ we have
$$\mathrm{Corr}\big(Z(a_1, b_1), Z(a_2, b_2)\big) \le 1 - \delta \tag{3}$$
for some $\delta > 0$ depending on $\rho$. Here one might imagine that across all $(a, b) \in S_0^2$, the random variables $(Z(a, b))$ are sufficiently independent so that
$$\mathbb{P}\big(Z(a, b) \ge 0 \text{ for all } (a, b) \in S_0^2\big) \approx 2^{-|S_0|^2} = \exp(-\Theta(\log^2 n)), \tag{4}$$
since $|S_0| = \Theta(\log n)$.
While such a strong approximate independence statement does not precisely hold, this heuristic provides the correct shape of the proof of both the upper bound and the lower bound, whose technical details are quite different.
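To justify the variance heuristic above (a short side computation; the hypergeometric observation is standard and is used here only as a sanity check), write $P_\sigma(a, b) = \sum_{i \le a,\, j \le b} M_\sigma(i, j) = \#\{i \le a : \sigma(i) \le b\}$. For a uniform $\sigma \in S_n$ this count is hypergeometric, so
$$\mathbb{E}\, P_\sigma(a, b) = \frac{ab}{n}, \qquad \mathrm{Var}\, P_\sigma(a, b) = \frac{ab(n-a)(n-b)}{n^2(n-1)}.$$
Since $Z(a, b) = P_\pi(a, b) - P_\tau(a, b)$ with $\pi$ and $\tau$ independent, we get $\mathbb{E}\, Z(a, b) = 0$ and $\mathrm{Var}\, Z(a, b) = 2\, \mathrm{Var}\, P_\pi(a, b)$, which is of order $ab/n$ for, say, $a, b \le n/2$.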
For the upper bound, it turns out that if $(Z(a, b))_{(a, b) \in S_0^2}$ is a Gaussian vector, then an assumption similar to (3) does imply an upper bound similar to (4), with a loss that is exponential in $|S_0|^2$. This is due to a theorem of Li and Shao [13], which we restate in Theorem 12. However, it is a highly nontrivial task to approximate $(Z(a, b))$