Metasurface-encoded optical neural network wavefront sensing for high-speed adaptive optics

Reading time: 5 minutes

📝 Original Info

  • Title: Metasurface-encoded optical neural network wavefront sensing for high-speed adaptive optics
  • ArXiv ID: 2602.16535
  • Date: 2026-02-18
  • Authors: Not listed (author information is not stated in the paper body and could not be confirmed)

📝 Abstract

Free-space optical communications with moving targets, such as satellite terminals, demand ultrafast wavefront sensing and correction. This is typically addressed using a Shack-Hartmann sensor, which pairs a high-speed camera with a lenslet array, but such systems add significant cost, weight, and power demands. In this work, we present a hybrid opto-electric neural network (OENN) wavefront sensor that enables ultra-high-speed operation in a compact, low-cost system. Subwavelength diffractive metasurfaces efficiently encode the incoming wavefront into tailored irradiance patterns, which are then decoded by a lightweight multilayer perceptron (MLP). In simulation and experiment, the hybrid approach achieves average Strehl ratio (SR) improvements exceeding 60% and 45%, respectively, for unseen wavefronts compared to purely electronic sensors with few-pixel inputs. Although larger MLPs allow purely electronic sensors to match the hybrid's SR under static conditions, transient atmosphere modeling shows that their added latency leads to rapid SR degradation with increasing Greenwood frequency, while the hybrid system maintains performance. These results highlight the potential of hybrid OENN architectures to unlock scalable, high-bandwidth free-space communication systems and, more broadly, to advance optical technologies where real-time sensing is constrained by electronic latency.

📄 Full Content

The rapid growth of data-intensive services over the past decade has driven an ever-increasing demand for higher bandwidth capacity. Optical fiber communication has risen to meet much of this demand, but it remains constrained by costly infrastructure and limited coverage. Satellite constellations with free-space optical links offer a compelling alternative, enabling global internet access without the burden of terrestrial infrastructure. Yet, while inter-satellite laser links now operate effectively, the satellite-to-ground segment still relies primarily on RF transmission. Shifting to bidirectional laser downlinks would unlock higher carrier frequencies, multi-gigabit data rates, spectrum license-free operation, and enhanced security. However, a main barrier to implementation is atmospheric turbulence, which distorts the optical wavefront, lowering coupling efficiency (CE) and raising bit-error rates (BER) [1][2][3][4] .

Adaptive optics (AO) systems have long been used to correct turbulence effects in free-space optical communication links [3,5]. A typical AO system combines a wavefront sensor to measure distortions with a deformable mirror to apply corrective phases. For stable operation, the wavefront sensor must run at bandwidths multiple times faster than the Greenwood frequency (f_G), which scales with wind velocity. In satellite downlinks, the effective wind velocity due to high slew rates during tracking can drive Greenwood frequencies up to ~1 kHz [6][7][8][9]. While Shack-Hartmann wavefront sensors (SHWS) can achieve temporal bandwidths near 40 kHz using lenslet arrays paired with high-speed Phantom cameras [10], these implementations are bulky, power-hungry, and costly. Real-time wavefront reconstruction from a SHWS also depends on slow, computationally intensive algorithms that often require specialized hardware such as FPGAs or GPUs [11,12].
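To make the bandwidth requirement concrete, the standard Greenwood frequency expression, f_G = 2.31 λ^(-6/5) [∫ C_n²(h) V(h)^(5/3) dh]^(3/5), can be evaluated numerically. The sketch below uses toy C_n² and wind profiles chosen purely for illustration (the paper does not specify profiles), together with a rough rule of thumb that the AO loop should run several times faster than f_G:

```python
import numpy as np

# Illustrative Greenwood-frequency estimate. The Cn^2 profile, wind profile,
# and 10x loop-bandwidth margin are assumptions, not values from the paper.
lam = 1550e-9                              # optical comms wavelength [m]
h = np.linspace(0.0, 20e3, 2000)           # path altitude samples [m]
Cn2 = 1e-16 * np.exp(-h / 1500.0)          # toy turbulence profile [m^(-2/3)]
V = 10.0 + 30.0 * np.exp(-((h - 10e3) / 4e3) ** 2)  # toy wind profile [m/s]

# Trapezoidal integration of Cn^2(h) * V(h)^(5/3) along the path.
f = Cn2 * V ** (5.0 / 3.0)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(h))

# f_G = 2.31 * lam^(-6/5) * ( integral )^(3/5)
f_G = 2.31 * lam ** (-6.0 / 5.0) * integral ** (3.0 / 5.0)

# Rule of thumb: run the AO loop several times faster than f_G.
required_bw = 10.0 * f_G
print(f"Greenwood frequency ~ {f_G:.1f} Hz, target loop bandwidth ~ {required_bw:.1f} Hz")
```

With the slew-induced effective wind of a satellite downlink, the same formula pushes f_G toward the ~1 kHz regime cited above, which is what motivates sensors far faster than conventional reconstruction pipelines.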

Optical neural networks (ONNs) shift the computation to the optical domain, offering a means of reducing latency by eliminating electronic reconstruction altogether. Implemented as cascades of passive diffractive layers, ONNs map distorted wavefronts directly to irradiance patterns that can be captured by a few high-speed photodiode pixels, encoding aberration coefficients at the speed of light. This enables compact, ultrafast, low-power sensing without high-end electronics. While promising, ONNs require strict alignment tolerances (<λ/2), making their implementation at free-space optical communication wavelengths challenging [13][14][15][16]. These challenges can be overcome with photonic encoders that pair simpler (even single-layer) optical front ends with lightweight artificial neural network (ANN) back ends for low-latency reconstruction. Demonstrations range from photonic lanterns paired with neural networks for Zernike mode recovery to hybrid optical-electronic classifiers for image datasets such as MNIST [15,16]. While this encoder-based paradigm presents a promising way of balancing the strengths and weaknesses of optical computing and artificial intelligence, a detailed study of its efficacy for a complex problem such as wavefront sensing in atmospheric turbulence remains to be done.

In this work, we present an implementation of a hybrid opto-electric neural network (OENN) system for high-speed wavefront sensing in atmospheric turbulence. Our design employs two metasurfaces: a compact phase-diversity encoder that splits the incoming beam into two focal points with different defocus, and a diffractive ONN layer that converts each focused beam into encoded diffraction patterns. These irradiance patterns are then decoded by a lightweight multilayer perceptron (MLP) to generate the parameters required to drive a deformable mirror for correction. We developed an end-to-end optimization pipeline that jointly optimizes the metasurface encoder and the MLP decoder, while incorporating the practical limitations of an actuated deformable mirror. We assessed the benefit of the hybrid system by comparing its performance against an MLP-only wavefront sensor on metrics such as corrected Strehl ratio (SR) and computational latency. Across simulation and experiment, the hybrid sensor achieves average Strehl ratio improvements exceeding 60% and 45%, respectively, on unseen wavefronts relative to electronic-only systems with low-dimensional inputs. We also show that the hybrid system maintains high correction capability under transient atmospheric effects beyond 150 Hz, whereas purely electronic systems with higher latency degrade rapidly and provide almost no improvement.
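The electronic half of this pipeline is deliberately small. A minimal sketch of the idea, with assumed (untrained) layer sizes and weights rather than the paper's actual architecture, is a two-layer MLP mapping photodiode irradiances to Zernike coefficients, scored with the Maréchal approximation SR ≈ exp(−σ_φ²) for the residual phase:

```python
import numpy as np

# Toy MLP decoder + Strehl-ratio scoring. Dimensions, weights, and the test
# aberration are illustrative assumptions, not the trained system.
rng = np.random.default_rng(1)
n_pixels, n_hidden, n_modes = 8, 32, 12

W1 = rng.normal(0, 0.1, (n_hidden, n_pixels)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_modes, n_hidden)); b2 = np.zeros(n_modes)

def decode(pixels):
    """Two-layer MLP: pixel irradiances -> estimated Zernike coefficients [rad]."""
    hidden = np.maximum(0.0, W1 @ pixels + b1)   # ReLU hidden layer
    return W2 @ hidden + b2

def strehl(residual_coeffs):
    """Marechal approximation: SR ~ exp(-sigma_phi^2) for small residuals."""
    return np.exp(-np.sum(residual_coeffs ** 2))

true_coeffs = rng.normal(0, 0.2, n_modes)     # toy aberration [rad rms per mode]
est = decode(rng.uniform(0, 1, n_pixels))     # decoder output (untrained here)
print(f"SR of uncorrected wavefront: {strehl(true_coeffs):.3f}")
```

The point of keeping the MLP this small is latency: a forward pass through a few thousand weights is what allows the electronic stage to keep up with the kHz-scale Greenwood frequencies discussed earlier, where larger purely electronic decoders fall behind.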

Our wavefront sensor (WS) design, shown schematically in Figure 1, is divided into two main components: the optical encoder and the backend digital decoder. These components work together to drive a deformable mirror (DM) used to correct incoming wavefront aberrations. The optical encoder itself is composed of two elements: a phase-diversity focusing metasurface (FMS) and a diffractive encoder metasurface, which we denote as the sing

This content is AI-processed based on open access ArXiv data.
