On lines and Joints

Reading time: 6 minutes

📝 Original Info

  • Title: On lines and Joints
  • ArXiv ID: 0906.0558
  • Date: 2009-06-02
  • Authors: Haim Kaplan, Micha Sharir, Eugenii Shustin

📝 Abstract

Let $L$ be a set of $n$ lines in $\reals^d$, for $d\ge 3$. A {\em joint} of $L$ is a point incident to at least $d$ lines of $L$, not all in a common hyperplane. Using a very simple algebraic proof technique, we show that the maximum possible number of joints of $L$ is $\Theta(n^{d/(d-1)})$. For $d=3$, this is a considerable simplification of the original algebraic proof of Guth and Katz~\cite{GK}, and of the follow-up simpler proof of Elekes et al.~\cite{EKS}.

📄 Full Content

Let $L$ be a set of $n$ lines in $\reals^d$, for $d \ge 3$. A joint of $L$ is a point incident to at least $d$ lines of $L$, not all in a common hyperplane.
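For concreteness, the definition is easy to check directly in $\reals^3$: a point lying on at least three lines is a joint precisely when some triple of the incident lines has linearly independent directions, i.e., a nonzero $3\times 3$ determinant. A minimal sketch (the helpers `det3` and `is_joint` are our own illustration, not from the paper):

```python
from itertools import combinations

def det3(u, v, w):
    """3x3 determinant of three direction vectors."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
          - u[1] * (v[0] * w[2] - v[2] * w[0])
          + u[2] * (v[0] * w[1] - v[1] * w[0]))

def is_joint(directions):
    """A point on these lines is a joint (d = 3) iff some triple of the
    incident line directions spans R^3, i.e. the triple is not coplanar."""
    return any(det3(u, v, w) != 0 for u, v, w in combinations(directions, 3))

# The three coordinate axes meet at the origin and span R^3:
print(is_joint([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True: a joint
# Three concurrent lines inside the xy-plane do not form a joint:
print(is_joint([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))   # False
```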

A simple construction, using the axis-parallel lines in a $k \times k \times \cdots \times k$ grid, for $k = \Theta(n^{1/(d-1)})$, has $dk^{d-1} = \Theta(n)$ lines and $k^d = \Theta(n^{d/(d-1)})$ joints.
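The grid counts can be verified by direct enumeration. In the sketch below (the representation of a line as an axis plus the fixed remaining coordinates is our own), each grid point lies on exactly $d$ axis-parallel lines whose directions are the standard basis vectors, which span $\reals^d$, so every grid point is a joint:

```python
from itertools import product

def grid_lines(d, k):
    """Axis-parallel lines of the {0,...,k-1}^d grid: one line per axis
    direction and per choice of the remaining d-1 coordinates."""
    return [(axis, rest)
            for axis in range(d)
            for rest in product(range(k), repeat=d - 1)]

def grid_joints(d, k):
    """Every grid point lies on d axis-parallel lines (one per axis) whose
    directions are the standard basis vectors, hence span R^d -- so all
    k^d grid points are joints."""
    return list(product(range(k), repeat=d))

d, k = 3, 4
assert len(grid_lines(d, k)) == d * k ** (d - 1)   # 48 lines
assert len(grid_joints(d, k)) == k ** d            # 64 joints
```

With $n = dk^{d-1}$ lines this gives $k^d = \Theta(n^{d/(d-1)})$ joints, matching the lower bound in the text.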

In this paper we prove that this is a general upper bound. That is:

Theorem 1. The maximum possible number of joints in a set of $n$ lines in $\reals^d$ is $\Theta(n^{d/(d-1)})$.

Background. The problem of bounding the number of joints, for the 3-dimensional case, has been around for almost 20 years [3,6,9] (see also [2, Chapter 7.1, Problem 4]), and, until very recently, the best known upper bound, established by Sharir and Feldman [6], was $O(n^{1.6232})$. The proof techniques were rather complicated, involving a battery of tools from combinatorial geometry, including forbidden subgraphs in extremal graph theory, space decomposition techniques, and some basic results in the geometry of lines in space (e.g., Plücker coordinates).

Wolff [10] observed a connection between the problem of counting joints and the Kakeya problem. Bennett et al. [1] exploited this connection and proved an upper bound on the number of so-called $\theta$-transverse joints in $\reals^3$, namely, joints incident to at least one triple of lines for which the volume of the parallelepiped generated by the three unit vectors along these lines is at least $\theta$. This bound is $O(n^{3/2+\varepsilon}/\theta^{1/2+\varepsilon})$, for any $\varepsilon > 0$, where the constant of proportionality depends on $\varepsilon$.

It has long been conjectured that the correct upper bound on the number of joints (in three dimensions) is $O(n^{3/2})$, matching the lower bound just noted. In a rather dramatic recent development, Guth and Katz [8] have settled the conjecture in the affirmative, showing that the number of joints (in three dimensions) is indeed $O(n^{3/2})$. Their proof technique is completely different, and uses fairly simple tools from algebraic geometry. In a follow-up paper by Elekes et al. [5], the proof has been further simplified, and extended (a) to obtain bounds on the number of incidences between $n$ lines and (some of) their joints, and (b) to handle also flat points, which are points incident to at least three lines, all coplanar.

As far as we know, the problem has not yet been studied for d > 3.

In this paper we give a very simple and short proof of Theorem 1; that is, we obtain a tight bound for the maximum possible number of joints in any dimension. The proof uses an algebraic approach similar to that of the other proofs, but is much simpler, shorter and direct.

We note that this paper does not subsume the previous paper [5], because the new proof technique cannot handle the problem of counting incidences between lines and joints, nor can it handle flat points. Nevertheless, it is our hope that these extensions would also be amenable to similarly simpler proof techniques.

Analysis. We will need the following well-known result from algebraic geometry; see proofs for the 3-dimensional case in [5,8]. We include the easy general proof for the sake of completeness.

Proposition 2. Given a set $S$ of $m$ points in $\reals^d$, there exists a nontrivial $d$-variate polynomial that vanishes at every point of $S$, of degree at most the smallest integer $b$ satisfying $\binom{b+d}{d} \ge m+1$. (Proof: the polynomials of degree at most $b$ form a vector space of dimension $\binom{b+d}{d}$, and requiring a polynomial to vanish at the $m$ given points imposes $m$ homogeneous linear conditions on its coefficients; when $\binom{b+d}{d} \ge m+1$, this underdetermined system has a nontrivial solution.)

Let $J$ denote the set of joints of $L$, and put $m = |J|$. Suppose to the contrary that $m > An^{d/(d-1)}$, for some constant parameter $A$, depending on $d$, which we will fix shortly.

Pruning. We first apply the following iterative pruning process to $L$. As long as there exists a line $\ell \in L$ incident to fewer than $m/(2n)$ points of $J$, we remove $\ell$ from $L$, remove its incident points from $J$, and repeat this step with respect to the reduced sets of lines and points (keeping the threshold $m/(2n)$ fixed). In this process we delete at most $m/2$ points. We are thus left with a subset of the original lines, each incident to at least $m/(2n)$ surviving points, and each surviving point is a joint in the set of surviving lines, that is, it is incident to at least $d$ surviving lines, not all in a common hyperplane. For simplicity, continue to denote these sets as $L$ and $J$.
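The pruning step is straightforward to implement. A minimal sketch, with the incidence structure represented abstractly as a map from lines to their point sets (this data structure is our own, not from the paper):

```python
def prune(incidences, threshold):
    """Iteratively remove every line incident to fewer than `threshold`
    surviving points, together with its incident points, until every
    surviving line carries at least `threshold` surviving points.

    `incidences` maps each line to the set of points of J lying on it;
    `threshold` plays the role of m/(2n) and stays fixed throughout."""
    lines = {ell: set(pts) for ell, pts in incidences.items()}
    changed = True
    while changed:
        changed = False
        for ell in list(lines):
            if len(lines[ell]) < threshold:
                dead = lines.pop(ell)      # delete the line...
                for pts in lines.values():
                    pts -= dead            # ...and its incident points
                changed = True
    return lines

# Tiny example: line 'b' carries only one point, so it goes,
# taking point 1 (and no other survivor) with it.
print(prune({'a': {1, 2, 3}, 'b': {1}, 'c': {2, 3}}, 2))
```

Each deletion removes fewer than $m/(2n)$ points and at most $n$ lines are ever deleted, which is why at most $m/2$ points are lost in total, as stated above.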

Vanishing. Applying Proposition 2, we obtain a nontrivial $d$-variate polynomial $p$ which vanishes at all the (at most) $m$ points of $J$, whose degree is at most the smallest integer $b$ satisfying $\binom{b+d}{d} \ge m+1$; since $\binom{b+d}{d} \ge b^d/d!$, the degree is at most
$$b \le \left(d!\,(m+1)\right)^{1/d} + 1 \le 2\,(d!\,m)^{1/d}.$$
We choose $A$ so that the number of points on each (surviving) line is greater than $b$. That is, we require that $m/(2n) > 2(d!\,m)^{1/d}$, or that $m > \left(2^{2d}\, d!\right)^{1/(d-1)} n^{d/(d-1)}$, which will hold if we choose $A = \left(2^{2d}\, d!\right)^{1/(d-1)}$. (Asymptotically, the right-hand side is only slightly larger than $d$.)
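Both quantitative choices are easy to check numerically. The sketch below finds the smallest degree $b$ with $\binom{b+d}{d} \ge m+1$, confirms the bound $b \le 2(d!\,m)^{1/d}$, and evaluates the constant $A$; note that the closed form for $A$ is our reconstruction from the requirement $m/(2n) > 2(d!\,m)^{1/d}$, since the source text here is garbled:

```python
from math import comb, factorial

def min_degree(d, m):
    """Smallest b with C(b+d, d) >= m + 1 (the degree from Proposition 2)."""
    b = 0
    while comb(b + d, d) < m + 1:
        b += 1
    return b

d, m = 4, 10**6
b = min_degree(d, m)
# Since C(b+d, d) >= b^d / d!, a degree of roughly (d! m)^(1/d) suffices:
assert b <= 2 * (factorial(d) * m) ** (1 / d)

# The constant A making m/(2n) > 2 (d! m)^(1/d) hold for m > A n^{d/(d-1)}:
A = (2 ** (2 * d) * factorial(d)) ** (1 / (d - 1))
print(b, round(A, 2))
```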

With this choice of $A$, the polynomial $p$ vanishes on at least $m/(2n) > b$ points on each line in $L$. Hence, $p$ vanishes identically on every line of $L$.

Differentiating. Fix a point $a \in J$, and let $\ell$ be a line of $L$ incident to $a$. Parametrize points on $\ell$ as $a + tv$, where $v$ is a (unit) vector in the direction of $\ell$. We have, for $t$ sufficiently small,
$$p(a + tv) = p(a) + t\,\nabla p(a) \cdot v + O(t^2).$$

Since $p \equiv 0$ on $\ell$, we must have $\nabla p(a) \cdot v = 0$. This holds for every line of $L$ incident to $a$. But since $a$ is a joint, the directions of these lines span the entire $d$-space, so $\nabla p(a)$, being orthogonal to all of them, must be the zero vector. That is, all the first-order derivatives of $p$ vanish at $a$.
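A concrete instance of this step (our own illustration, not from the paper): in $\reals^3$, take $a$ to be the origin, with the three coordinate axes as the lines of $L$ through it.

```latex
% p vanishes on the x-, y-, and z-axes through the joint a = 0, so its
% directional derivatives along e_1, e_2, e_3 all vanish there:
\nabla p(0)\cdot e_1 = p_x(0) = 0,\qquad
\nabla p(0)\cdot e_2 = p_y(0) = 0,\qquad
\nabla p(0)\cdot e_3 = p_z(0) = 0.
% Since e_1, e_2, e_3 span R^3, the gradient itself must vanish:
\nabla p(0) = \bigl(p_x(0),\, p_y(0),\, p_z(0)\bigr) = 0.
% For example, p(x,y,z) = xyz vanishes on all three axes, and indeed
% \nabla p = (yz,\, xz,\, xy) is zero at the origin.
```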


…(Full text truncated)…

Reference

This content is AI-processed based on ArXiv data.
