Resource Allocation for STAR-RIS-enhanced Metaverse Systems with Augmented Reality
Augmented reality (AR)-enabled Metaverse is a promising technique to provide an immersive service experience for mobile users. However, limited network resources and unpredictable wireless propagation environments are key design bottlenecks of AR-enabled Metaverse systems. Therefore, this paper presents a resource management framework for simultaneously transmitting and reflecting RIS (STAR-RIS)-assisted AR-enabled Metaverse, where the STAR-RIS is configured to improve the communication efficiency between AR users and the Metaverse server located at the base station (BS). Moreover, we formulate a service latency minimization problem by jointly optimizing the computation resource allocation of the BS, the coefficient matrix of the STAR-RIS, and the central processing unit (CPU) frequency and transmit power of the AR users. To tackle the non-convex problem, we utilize an approximate method to transform it into a tractable form, and decouple the multi-dimensional variables via the alternating optimization method. In particular, the optimal coefficient matrix is obtained by a penalty function-based method with proven convergence, the CPU frequencies of the AR users are derived in closed form, and the transmit power of the AR users and the computation resource allocation of the BS are obtained by the Lagrange duality method and convex optimization theory. Finally, simulation results demonstrate that the proposed method achieves a remarkable latency reduction compared with several benchmark methods.
💡 Research Summary
The paper tackles the latency‑critical problem of delivering augmented‑reality (AR) services in a Metaverse environment by exploiting a simultaneous transmitting‑and‑reflecting reconfigurable intelligent surface (STAR‑RIS). A typical system consists of a base station (BS) equipped with an edge Metaverse server, a STAR‑RIS with N passive elements, and K mobile AR users. Each user first performs a lightweight local image‑format conversion (YUV→RGB) using its CPU, then uploads the processed video to the BS through the STAR‑RIS, where edge computing performs object detection and returns the result. The total service latency for user k is modeled as the sum of four components: local computation delay, uplink transmission delay, edge‑computing delay, and downlink result delivery delay.
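The four-component latency model can be sketched as a short helper. This is a minimal illustration, not the paper's exact formulation: the function name and arguments are hypothetical, and the uplink rate is modeled with a plain Shannon-capacity expression with the STAR-RIS effect folded into the SNR term.

```python
import math

def service_latency(cycles_local, f_local, data_bits, bandwidth, snr,
                    cycles_edge, f_edge, result_bits, rate_down):
    """Total service latency for one AR user as the sum of four parts:
    local format conversion, uplink via the STAR-RIS, edge-side object
    detection, and downlink result delivery (all units: bits, Hz, cycles)."""
    t_local = cycles_local / f_local            # YUV->RGB conversion on the user CPU
    rate_up = bandwidth * math.log2(1.0 + snr)  # Shannon uplink rate (bit/s)
    t_up = data_bits / rate_up                  # video upload through the STAR-RIS
    t_edge = cycles_edge / f_edge               # object detection at the BS server
    t_down = result_bits / rate_down            # detection result sent back
    return t_local + t_up + t_edge + t_down
```

For instance, a 10^6-cycle conversion at 1 GHz, a 1 Mbit upload over 1 MHz with SNR 3 (rate 2 Mbit/s), a 2x10^6-cycle edge task at 2 GHz, and a 10 kbit result at 1 Mbit/s give a total of 0.512 s, dominated by the uplink term, which is exactly the term the STAR-RIS configuration targets.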
The authors formulate a min‑max latency optimization problem: minimize the maximum latency among all users by jointly optimizing (i) the BS’s computation resource allocation, (ii) the STAR‑RIS transmit/reflect coefficient matrices, (iii) each user’s CPU‑cycle frequency, and (iv) each user’s transmit power and bandwidth share. Constraints include per‑user power limits, total BS CPU capacity, total bandwidth, and the energy‑conservation condition for each RIS element (|γ_t,n|²+|γ_r,n|²=1).
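In schematic form (symbols chosen here for illustration rather than copied verbatim from the paper), the joint min-max problem reads:

```latex
\begin{aligned}
\min_{\{f_k,\, p_k,\, b_k,\, F_k\},\, \boldsymbol{\Gamma}_t,\, \boldsymbol{\Gamma}_r}
\;\; & \max_{k \in \{1,\dots,K\}} \; T_k \\
\text{s.t.} \;\;
& 0 \le p_k \le p_k^{\max}, \quad 0 \le f_k \le f_k^{\max}, \quad \forall k, \\
& \textstyle\sum_{k} F_k \le F^{\mathrm{BS}}, \qquad \textstyle\sum_{k} b_k \le B, \\
& |\gamma_{t,n}|^2 + |\gamma_{r,n}|^2 = 1, \quad n = 1,\dots,N,
\end{aligned}
```

where $T_k$ is user $k$'s total service latency, $f_k$, $p_k$, $b_k$, and $F_k$ are user $k$'s CPU frequency, transmit power, bandwidth share, and allotted BS compute, and $\gamma_{t,n}$, $\gamma_{r,n}$ are the transmit/reflect coefficients of STAR-RIS element $n$.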
Because the objective and constraints involve products of variables (e.g., SNR depends on both RIS coefficients and transmit power) and non‑linear power‑frequency relations, the problem is non‑convex. The authors first apply a first‑order Taylor approximation to obtain a tractable convex surrogate of the original problem. Then they adopt an alternating optimization (AO) framework that iteratively updates each variable block while keeping the others fixed:
- STAR‑RIS coefficient design – A penalty‑function method relaxes the unit‑modulus constraint, leading to a gradient‑based update with provable convergence as the penalty weight grows.
- User CPU‑frequency selection – By constructing the Lagrangian and applying KKT conditions, a closed‑form expression for the optimal local CPU frequency is derived.
- Transmit power and bandwidth allocation – The Lagrange dual problem yields analytical updates for power and bandwidth shares, solved efficiently with standard convex solvers.
- BS computation resource distribution – Similar to the CPU‑frequency step, a closed‑form solution is obtained from the dual variables.
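The four-block update cycle above is an instance of generic block-coordinate descent; a minimal sketch of such an AO loop is below. The toy objective and the exact per-block minimizers are illustrative stand-ins for the paper's sub-problem solvers.

```python
def alternating_optimization(blocks, update_fns, objective, max_iter=50, tol=1e-6):
    """Block-coordinate (AO) loop in the spirit of the paper's framework:
    each update_fn solves one sub-problem (e.g. RIS coefficients, CPU
    frequencies, power/bandwidth, BS compute) with the other blocks fixed,
    and the loop stops once the objective stalls."""
    prev = objective(blocks)
    cur = prev
    for _ in range(max_iter):
        for name, fn in update_fns.items():
            blocks[name] = fn(blocks)      # optimal update for this block
        cur = objective(blocks)
        if abs(prev - cur) < tol:          # objective no longer improving
            break
        prev = cur
    return blocks, cur

# Toy instance: f(x, y) = (x - 1)^2 + (y + 2)^2 with exact per-block minimizers.
obj = lambda b: (b["x"] - 1) ** 2 + (b["y"] + 2) ** 2
updates = {"x": lambda b: 1.0, "y": lambda b: -2.0}
blocks, val = alternating_optimization({"x": 0.0, "y": 0.0}, updates, obj)
```

The monotone-descent property of each exact block update is what underlies the convergence behavior reported in the next paragraph.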
Complexity analysis shows each sub‑problem can be solved in polynomial time, and the overall AO algorithm converges within a modest number of iterations (≈12–15 in simulations).
Simulation settings emulate realistic 5G parameters: N = 64–256 RIS elements, K = 4–8 users, total bandwidth B = 20 MHz, maximum user transmit power 23 dBm, and typical channel models with perfect CSI. Benchmarks include (i) a reflecting‑only RIS with static resource allocation, (ii) STAR‑RIS with random resource allocation, and (iii) STAR‑RIS without joint optimization. Results demonstrate that the proposed joint design reduces average latency by 30 %–45 % and the worst‑case (max) latency by over 35 % compared with the baselines. The gains are especially pronounced when communication resources are scarce, confirming the advantage of the simultaneous transmit/reflect capability of STAR‑RIS.
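The growing gains at larger N are consistent with the well-known coherent-combining behavior of RIS elements. A minimal sketch of this effect (function name hypothetical; unit element amplitudes and random cascaded phases assumed) shows received power scaling roughly as N squared when each element co-phases its cascaded path:

```python
import cmath
import math
import random

def aligned_ris_gain(n_elements, seed=0):
    """Effective channel power through a RIS when each element applies the
    conjugate of its cascaded (user->element->BS) phase, so that all
    contributions add coherently. Unit element gains are assumed."""
    rng = random.Random(seed)
    amplitude_sum = 0.0
    for _ in range(n_elements):
        phase = rng.uniform(0.0, 2.0 * math.pi)          # random cascaded phase
        aligned = cmath.exp(1j * phase) * cmath.exp(-1j * phase)  # co-phased
        amplitude_sum += abs(aligned)                    # contributions add in amplitude
    return amplitude_sum ** 2                            # power ~ N^2
```

With N = 64 this gives a power of about 64^2 = 4096 relative to a single element, which is why the simultaneous transmit/reflect surface pays off most when other resources are the bottleneck.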
Key contributions are:
- Introduction of a service‑latency‑centric optimization framework for AR‑enabled Metaverse that integrates STAR‑RIS configuration.
- Development of a tractable AO algorithm with closed‑form updates for CPU frequencies and BS computation allocation, and a penalty‑based method for RIS coefficients.
- Extensive numerical validation showing substantial latency reductions over existing RIS‑assisted MEC schemes.
The paper assumes perfect channel state information and neglects the power consumption and hardware cost of the STAR‑RIS, which may affect practical deployment. Downlink transmission is also omitted, limiting the analysis to uplink‑dominant scenarios. Future work should consider imperfect CSI, multi‑RIS cooperation, energy‑aware RIS modeling, and bidirectional traffic to fully exploit STAR‑RIS in immersive Metaverse services.