Today we design wireless networks using mathematical models that govern communication in different propagation environments. We rely on measurement campaigns to deliver parametrized propagation models, and on the 3GPP standards process to optimize model-based performance, but as wireless networks become more complex this model-based approach is losing ground. Mobile Network Operators (MNOs) are counting on Artificial Intelligence (AI) to transform wireless by increasing spectral efficiency, reducing signaling overhead, and enabling continuous network innovation through software upgrades; they are also interested in new use cases such as integrated sensing and communications (ISAC). If all we need is an AI-native physical layer, why not simply tailor the AI algorithms that have revolutionized image and natural language processing to the wireless domain? We argue that these algorithms rely on offline training that is precluded by the sub-millisecond speeds at which the wireless interference environment changes. We present an alternative architecture, a universal neural receiver built around convolution, the physical process that governs the transmission and reception of any signal in any part of the radio spectrum. Our neural receiver is designed to invert convolution, and we separate the question of which convolution to invert from the actual deconvolution. The neural network that performs deconvolution is very simple, and we configure it by setting its weights based on domain knowledge. By telling our neural network what we know, we avoid extensive offline training. By developing a universal receiver, we hope to simplify discussions in the international standards about the proper choice of waveform for different use cases. Since the receiver architecture is largely independent of technologies introduced at the base station, we also hope to increase the rate of innovation in wireless.
"AI and communication" is one of the six key usage scenarios of IMT-2030 [1]. Besides the communication aspect of requiring high area traffic capacity and user-experienced data rates to support distributed computing and AI applications, this usage scenario is also expected to include a set of new capabilities related to the integration of AI and compute functionalities into IMT-2030 as illustrated in the concept of AI-enabled cellular networks [2]. As a critical step, the 3rd Generation Partnership Project (3GPP) has initiated the exploration of AI in the 6G air interface [3]. This trend of standardizing and deploying AI for the air interface is anticipated to continue and evolve through 6G/NextG networks.
The growing interest in this domain mainly arises from the intrinsic issues of network complexity, model deficit, and algorithm deficit detailed in [2], here tailored to the air interface of NextG mobile broadband networks. Specifically, the air interface of NextG (e.g., 6G and beyond) is expected to be increasingly sophisticated, with complex network topologies/numerologies, non-linear device components, and high-complexity processing algorithms. It therefore becomes exceedingly challenging to apply conventional model-based approaches in a scalable and efficient manner. Meanwhile, AI/ML-based data-driven approaches can effectively address these issues, providing an appealing alternative for the design of the NextG air interface.
Most AI/ML-based strategies introduced to date focus on tailoring the offline-trained models that have revolutionized image and natural language processing to the NextG air interface [3]. However, very little success has been reported so far. Why? One fundamental challenge is that 5G/NextG is a global technology, so models based on extensive offline training in New York City may disappoint when deployed in Delhi, or even in Dallas. A second fundamental challenge is machine learning at the Speed of Wireless: is it even possible to learn within a sub-millisecond transmission time interval (TTI), when the propagation environment of the NextG air interface is changing so rapidly?
In this paper, we develop machine learning methods that are inspired by and based on traditional model-based approaches in order to learn at the Speed of Wireless in the NextG air interface. We start from convolution, which governs the transmission and reception of any waveform in any part of the radio spectrum. By starting with a physical process that is common to all modes of wireless communication, we are able to develop a universal receiver. Specifically, we present a neural receiver that is designed to invert convolution. We describe how it implements online, real-time learning within each TTI, without offline training, for several physical layer (PHY) waveforms. The key to achieving computational efficiency is to separate the question of which convolution to invert from the actual deconvolution. The neural network that performs deconvolution is very simple, and we configure it by setting its weights based on domain knowledge. By telling our neural network what we know, we avoid extensive offline training.
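To make this idea concrete, consider the following minimal sketch (illustrative only, not the receiver developed in this paper) of how a single linear layer can be configured from domain knowledge: given an estimated channel impulse response, the layer's weight matrix is set to an MMSE-style deconvolution matrix rather than learned offline. All function and variable names here are hypothetical.

```python
import numpy as np

def deconv_layer_weights(h, n_sym, noise_var):
    """Configure a linear layer to invert convolution with channel h.

    h         : estimated channel impulse response (domain knowledge)
    n_sym     : number of transmitted symbols per block
    noise_var : assumed noise variance for MMSE-style regularization
    """
    L = len(h)
    # Toeplitz convolution matrix H such that y = H @ x + n
    H = np.zeros((n_sym + L - 1, n_sym), dtype=complex)
    for k in range(n_sym):
        H[k:k + L, k] = h
    # MMSE deconvolution: W = (H^H H + sigma^2 I)^{-1} H^H
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_sym),
                        H.conj().T)
    return W  # weights set from what we know; no offline training

# Usage: equalize one received block with the configured layer
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])                  # example channel taps
x = rng.choice([-1.0, 1.0], size=16) + 0j      # BPSK symbols
y = np.convolve(h, x) + 0.05 * rng.standard_normal(18)
W = deconv_layer_weights(h, n_sym=16, noise_var=0.05**2)
x_hat = W @ y
print(np.sign(x_hat.real))                     # recovered symbols
```

In this sketch, "which convolution to invert" corresponds to estimating h, while the deconvolution itself is a single matrix-vector product, which is why the network can be reconfigured within each TTI.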
The paper is organized as follows. Section II introduces the dynamic nature of the radio environment of the 5G and NextG air interface. Section III discusses the inherent challenges of applying AI/ML in the NextG air interface and potential solutions. Section IV introduces the universal neural receiver, covering its basic principles, the geometric interpretation behind its explainability, and case studies of weight configuration for both MIMO-OFDM and OTFS. Section V concludes the paper.
The radio connections between mobile devices (termed UEs by 3GPP) and base stations (termed gNBs by 3GPP) define the air interface of a mobile broadband cellular network. If AI/ML is to transform NextG networks, then we need to meet the challenge of integrating AI/ML into the air interface, where interference changes on a sub-millisecond time scale.
Figure 1 shows a current 5G/5G-Advanced air interface in which 2 gNBs serve 8 UEs: gNB A serves 5 UEs and gNB B serves the remaining 3. Data transmitted over the radio link between gNB A and UE 1 is partitioned into 10 ms radio frames, and each radio frame is further divided into 10 subframes, each of duration 1 ms. Depending on the numerology, each subframe is further partitioned into 1, 2, 4, 8, or 16 slots, with slot durations ranging from 1 ms down to 62.5 µs. The transmission time interval (TTI), comprising one or more slots, is the granularity at which the 5G/5G-Advanced air interface schedules transmissions.
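As a quick check of this frame arithmetic (assuming the standard 5G NR numerologies µ = 0 through 4), the short sketch below computes slots per subframe and slot duration; variable names are illustrative.

```python
# 5G NR frame structure arithmetic for numerologies mu = 0..4.
FRAME_MS, SUBFRAME_MS = 10.0, 1.0

for mu in range(5):
    slots_per_subframe = 2 ** mu                  # 1, 2, 4, 8, 16
    slot_ms = SUBFRAME_MS / slots_per_subframe    # 1 ms down to 62.5 us
    slots_per_frame = int(FRAME_MS / SUBFRAME_MS) * slots_per_subframe
    print(f"mu={mu}: {slots_per_subframe:2d} slots/subframe, "
          f"slot = {slot_ms * 1000:6.1f} us, {slots_per_frame} slots/frame")
```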