Eye Contact Between Pedestrians and Drivers

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

When asked, a majority of people believe that, as pedestrians, they make eye contact with the driver of an approaching vehicle when making their crossing decisions. This work presents evidence that this widely held belief is false. We do so by showing that, in the majority of cases where conflict is possible, pedestrians begin crossing long before they are able to see the driver through the windshield. In other words, we circumvent the very difficult question of whether pedestrians choose to make eye contact with drivers by showing that, whether they think they do or not, they can’t. Specifically, we show that over 90% of people in representative lighting conditions cannot determine the gaze of the driver at 15 m or see the driver at all at 30 m. This means, for example, that given the common city speed limit of 25 mph, more than 99% of pedestrians would have begun crossing before being able to see either the driver or the driver’s gaze. In other words, from the perspective of the pedestrian, in most situations involving an approaching vehicle, the crossing decision is based solely on the kinematics of the vehicle, without the pedestrian needing to confirm eye contact by explicitly detecting the eyes of the driver.


💡 Research Summary

Background and Motivation
A common belief—reinforced by traffic safety campaigns and driver‑pedestrian interaction literature—is that pedestrians look for eye contact with an approaching driver before deciding to cross. This belief underpins many design recommendations for autonomous vehicles (AVs), which assume that a driver’s gaze is a critical non‑verbal cue. Yet, no large‑scale empirical evidence has quantified whether pedestrians can actually see a driver’s face or eyes at realistic distances and lighting conditions. The authors set out to test this belief directly, with implications for human‑AV interaction design.

Related Work
Prior studies have reported that eye contact can increase driver yielding rates and that pedestrians often “seek” eye contact to confirm they have been seen. However, these works rely on self‑reports, small‑scale Wizard‑of‑Oz experiments, or laboratory simulations that do not isolate visual feasibility. Some recent research suggests vehicle kinematics (speed, deceleration) may be the dominant cue, but no work has systematically measured the visual limits of seeing a driver through a windshield.

Methodology
The authors designed two online experiments using high‑resolution (4K) photographs of a Lincoln MKZ taken from a pedestrian’s perspective. Images covered nine lighting conditions (full sun, sun at zenith, sunset, shadows, glare, tree shadow, night, etc.) and six distances (5 m, 10 m, 15 m, 20 m, 25 m, 30 m). For each distance‑lighting pair, three driver states were captured: driver looking forward, driver looking toward the camera (simulating eye contact), and driver absent (empty seat).
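The full factorial design above can be sketched as a quick enumeration. This is illustrative only: the summary names seven of the nine lighting conditions, so the last two entries below are placeholders, not labels from the paper.

```python
from itertools import product

# Lighting conditions listed in the summary; the final two names are
# placeholders standing in for the unnamed conditions ("etc.").
lighting = ["full sun", "sun at zenith", "sunset", "shadows", "glare",
            "tree shadow", "night", "condition_8", "condition_9"]
distances_m = [5, 10, 15, 20, 25, 30]
driver_states = ["looking forward", "looking at camera", "absent"]

# Every lighting x distance x driver-state combination is one photograph.
stimuli = list(product(lighting, distances_m, driver_states))
print(len(stimuli))  # 9 * 6 * 3 = 162
```

Each participant saw only one lighting condition, so an individual session covered 6 distances × 3 driver states = 18 of these combinations.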

Participants were recruited via Amazon Mechanical Turk under strict quality filters (≥1 000 approved HITs, ≥98 % acceptance rate). A total of 360 participants (180 per experiment, balanced gender, ages 19‑74) completed the tasks. Each participant was randomly assigned to one lighting condition and then viewed all six distances in that condition.

Experiment 1 (DRIVER) asked: “Can you see if there is a driver in the car?” with three response options: “No, I can’t see,” “Yes, there is no driver,” and “Yes, there is a driver.”

Experiment 2 (EYES) asked: “Can you see where the driver’s eyes are looking?” with options: “No, I can’t see,” “Yes, they are looking forward,” and “Yes, they are looking at the camera.”

Before the visual tasks, participants answered two survey questions: (Q1) whether they normally try to make eye contact when crossing, and (Q2) whether they believe they can see through a windshield. Q2 was repeated after the experiments to detect belief changes. Response times shorter than 400 ms were excluded as inattentive.
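The 400 ms attention cutoff described above amounts to a simple filter over per-trial response times. A minimal sketch, with illustrative field names that are not from the paper's data release:

```python
# Hypothetical trial records: (participant_id, response_time_ms, answer)
trials = [
    (101, 350, "No, I can't see"),          # below 400 ms -> excluded
    (101, 1240, "Yes, there is a driver"),
    (102, 980, "No, I can't see"),
    (103, 120, "Yes, they are looking forward"),  # excluded
]

RT_CUTOFF_MS = 400  # responses faster than this are treated as inattentive

valid_trials = [t for t in trials if t[1] >= RT_CUTOFF_MS]
print(len(valid_trials))  # 2 of the 4 sample trials survive the filter
```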

Results

  • Survey: 66 % claimed they try to make eye contact (Q1). 79 % believed they could see a driver through the windshield (Q2) before the experiment; after the tasks this belief dropped to 53 %, with 104 participants changing their answer.
  • Experiment 1: Overall, 71 % of responses were “I can’t see.” When participants claimed they could see, their accuracy was 80 % (i.e., they correctly identified driver presence/absence). The “I can’t see” rate rose sharply with distance: ~30 % at 5 m, ~60 % at 10 m, and >90 % at 25 m–30 m across all lighting conditions.
  • Experiment 2: 87 % of responses were “I can’t see” the driver’s gaze. Accuracy among the “I can see” responses was only 60 %. The inability to discern gaze increased from ~40 % at 5 m to >90 % at 15 m and beyond.
  • Lighting Effects: While night and heavy glare slightly worsened performance, the dominant factor was distance; even under optimal sunlight, participants could not reliably see the driver beyond 10 m.

Interpretation
The authors translate these perceptual limits into a traffic‑safety context. At a typical urban speed limit of 25 mph (≈11 m s⁻¹), a vehicle 30 m away reaches the pedestrian in about 2.7 seconds—roughly the minimum time most pedestrians need to decide to cross. Consequently, the decision is made before the vehicle is close enough for the driver’s face or eyes to be visible. The data therefore refute the notion that eye contact is a primary cue in unsignalized crossing situations.
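The time-to-arrival arithmetic above works out as follows (a worked check of the summary's numbers, not code from the paper):

```python
# Convert the urban speed limit to SI units and compute how long a
# vehicle 30 m away takes to reach the pedestrian.
MPH_TO_MS = 0.44704          # exact miles-per-hour -> metres-per-second factor

speed_ms = 25 * MPH_TO_MS    # ~11.18 m/s, i.e. the "~11 m/s" in the text
tta_s = 30 / speed_ms        # time to cover 30 m at constant speed

print(round(speed_ms, 2), round(tta_s, 2))  # 11.18 2.68  (~2.7 s)
```

At 15 m, where gaze is still indiscernible for over 90% of observers, the margin halves to roughly 1.3 s, which is why the crossing decision must already have been made on kinematic cues alone.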

Implications for Autonomous Vehicles
Since pedestrians rely on kinematic cues (speed, deceleration) rather than visual confirmation of a driver’s gaze, AVs can safely interact with pedestrians without a human driver present. Designing AV control policies that produce clear, predictable acceleration/deceleration profiles may be more effective than attempting to simulate eye contact (e.g., via external displays). Moreover, assuming that pedestrians expect eye contact could lead to over‑engineered human‑machine interfaces that add unnecessary complexity and cost.

Limitations and Future Work
The study uses static 2‑D images, which cannot capture motion parallax, dynamic lighting changes, or peripheral visual cues present in real‑world crossing. Mechanical Turk participants viewed images on computer screens, not through a real windshield, so ecological validity is limited. Future research should employ virtual‑reality or augmented‑reality setups, on‑road field trials, and eye‑tracking to measure actual gaze behavior. Additionally, investigating how attention, distraction, and individual differences (e.g., visual acuity, age) modulate the ability to detect drivers would deepen the model of pedestrian decision‑making.

Conclusion
Across a broad set of realistic lighting and distance conditions, more than 90 % of participants could not determine a driver’s gaze at 15 m, and over 99 % could not see the driver at all at 30 m. This demonstrates that the widely held belief in pedestrian‑driver eye contact is largely a post‑hoc rationalization; in practice, pedestrians make crossing decisions based on vehicle kinematics alone. The findings provide strong empirical support for AV design strategies that prioritize clear motion cues over simulated eye contact, and they highlight the importance of grounding human‑AV interaction models in measurable perceptual limits rather than anecdotal assumptions.

