Some bounds on the size of codes
We present some upper bounds on the size of non-linear codes and their restriction to systematic codes and linear codes. These bounds are independent of other known theoretical bounds, e.g. the Griesmer bound, the Johnson bound or the Plotkin bound, and one of these is actually an improvement of a bound by Litsyn and Laihonen. Our experiments show that in some cases (the majority of cases for some q) our bounds provide the best value, compared to all other theoretical bounds.
💡 Research Summary
The paper addresses the classic problem of bounding the maximum size A_q(n,d) of a code with length n, alphabet size q, and minimum Hamming distance d. While traditional bounds such as Griesmer (for linear codes), Johnson (based on distance‑to‑length ratios), and Plotkin (average‑distance arguments) have been the main tools, they each rely on specific structural assumptions and are not universally tight. The authors propose three families of upper bounds that are independent of these classical results: one for arbitrary non‑linear codes, one for systematic codes (where information and parity symbols are separated), and one for linear codes.
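To make the classical baselines concrete, here is a small sketch of two of them: the Singleton bound, and the Griesmer bound solved for the largest admissible dimension k of a linear [n, k, d]_q code. These functions are illustrative helpers, not code from the paper.

```python
from math import ceil

def singleton_bound(q, n, d):
    """Singleton bound: A_q(n, d) <= q^(n - d + 1)."""
    return q ** (n - d + 1)

def griesmer_max_k(q, n, d):
    """Largest k satisfying the Griesmer bound
    n >= sum_{i=0}^{k-1} ceil(d / q^i) for a linear [n, k, d]_q code."""
    k, total = 0, 0
    while total + ceil(d / q ** k) <= n:
        total += ceil(d / q ** k)
        k += 1
    return k

print(singleton_bound(2, 7, 3))   # 32
print(griesmer_max_k(2, 7, 3))    # 4: the binary [7, 4, 3] Hamming code meets it
```

For q = 2, n = 7, d = 3 the Griesmer sum is 3 + 2 + 1 + 1 = 7 ≤ 7 at k = 4, so at most 2^4 = 16 codewords for a linear code, which the Hamming code attains.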
For non‑linear codes the authors refine the sphere‑packing idea by partitioning the codebook into distance‑preserving subsets and counting non‑overlapping “balls” of radius ⌊(d−1)/2⌋ around the codewords. This yields a combinatorial inequality that often improves on the naïve sphere‑packing bound, especially when q is small and d is relatively large.
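The naïve sphere‑packing (Hamming) baseline that this refinement improves on can be sketched as follows; the function names are mine, not the paper's.

```python
from math import comb

def hamming_ball_volume(q, n, r):
    """Number of words of F_q^n within Hamming distance r of a fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def sphere_packing_bound(q, n, d):
    """Hamming bound: balls of radius t = (d-1)//2 around distinct
    codewords are pairwise disjoint, so |C| * V(t) <= q^n."""
    t = (d - 1) // 2
    return q ** n // hamming_ball_volume(q, n, t)

print(sphere_packing_bound(2, 7, 3))   # 16
print(sphere_packing_bound(2, 23, 7))  # 4096: met by the perfect binary Golay code
```

Perfect codes such as the [7, 4, 3] Hamming code and the [23, 12, 7] Golay code meet this bound with equality; elsewhere it leaves the slack that refinements like the one above try to close.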
In the systematic case the analysis exploits the explicit separation of k information symbols and n − k parity symbols. By modeling the parity constraints as a system of linear equations over GF(q) and studying the solution space, the authors derive a tighter bound that reflects the reduced freedom of systematic structures. This bound is shown to dominate the general non‑linear bound for many parameter regimes.
For linear codes the paper revisits the Litsyn‑Laihonen bound and introduces an improvement that incorporates the asymmetry of the weight distribution. Using MacWilliams identities, the authors constrain the coefficients of the weight enumerator polynomial, leading to a stricter upper limit on the number of codewords when a significant portion of the weight spectrum is concentrated in a narrow interval.
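For reference, the MacWilliams identity invoked here relates the weight enumerator of a linear code C over GF(q) to that of its dual code:

```latex
W_{C^\perp}(x, y) \;=\; \frac{1}{|C|}\, W_C\bigl(x + (q-1)\,y,\; x - y\bigr),
\qquad
W_C(x, y) \;=\; \sum_{i=0}^{n} A_i\, x^{\,n-i} y^{\,i},
```

where A_i is the number of codewords of C of Hamming weight i. Since every A_i of the dual code must be a non‑negative integer, the identity yields linear constraints on the weight enumerator coefficients, which is the mechanism the summary describes.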
Extensive numerical experiments are presented for a wide range of (q, n, d) triples. The results demonstrate that, for small alphabet sizes (q = 2, 3, 4) and moderate lengths (approximately 30–80), the new bounds frequently outperform Griesmer, Johnson, Plotkin, and the original Litsyn‑Laihonen bound. In several instances the classical bounds become vacuous, whereas the proposed inequalities remain non‑trivial and close to the best known constructions.
Overall, the work contributes a set of versatile, analytically derived upper limits that complement existing theory. By being independent of traditional bounds and, in certain regimes, strictly tighter, these results provide code designers with more accurate guidance for selecting parameters and suggest new directions for constructing codes that approach the newly identified limits.