On the Fundamental Limits of Hierarchical Secure Aggregation with Dropout and Collusion Resilience


Authors: Zhou Li, Yizhou Zhao, Xiang Zhang, and Giuseppe Caire

Zhou Li, Member, IEEE, Yizhou Zhao, Member, IEEE, Xiang Zhang, Member, IEEE, and Giuseppe Caire, Fellow, IEEE

Abstract—We study the fundamental communication limits of information-theoretic secure aggregation in a hierarchical network consisting of a server, multiple relays, and multiple users per relay. Communication proceeds over two rounds and two hops, and the system is subject to arbitrary user and relay dropouts. Up to T users may collude with either the server or any single relay. The server aims to recover the sum of the inputs of all users that survive the first round, while learning no additional information beyond the aggregate sum and the inputs of the colluding users. Each relay, however, must learn nothing about the users' inputs except for the information revealed by the inputs of the colluding users under the same collusion model. We introduce a four-dimensional rate tuple that captures the communication cost across rounds and hops. Under a delayed message availability model, we establish necessary and sufficient conditions for feasibility and fully characterize the optimal first-round communication rates. For the second round, we characterize the optimal user-to-relay rate and derive lower and upper bounds on the relay-to-server rate. While these bounds do not coincide in general, they are tight in certain regimes of interest. Our results reveal a sharp threshold phenomenon: secure aggregation is feasible if and only if the total number of surviving users across surviving relays exceeds the collusion threshold. Achievability is established via a vector linear coding scheme with carefully structured correlated randomness exhibiting MDS-like properties, ensuring correctness and information-theoretic security under all possible dropout patterns. Entropic converse bounds are also derived.
Index Terms—Secure aggregation, hierarchical networks, collusion, dropout, security.

Z. Li is with the Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, China (e-mail: lizhou@gxu.edu.cn). Y. Zhao is with the College of Electronic and Information Engineering, Southwest University, Chongqing, China (e-mail: onezhou@swu.edu.cn). X. Zhang and G. Caire are with the Department of Electrical Engineering and Computer Science, Technical University of Berlin, 10623 Berlin, Germany (e-mail: {xiang.zhang, caire}@tu-berlin.de). Corresponding author: Xiang Zhang.

I. INTRODUCTION

The rapid growth of distributed data sources has enabled large-scale collaborative learning and computation paradigms, such as federated learning [1]–[5]. In such systems, a central server aggregates information from a large number of users without directly accessing their individual data. A fundamental problem arising in this context is secure aggregation [6], [7]. The information-theoretic formulation of this problem was first studied by Zhao and Sun [8], where the server computes the sum of the users' inputs while learning no additional information about the individual contributions.

In many practical deployments, communication between users and the server is not direct. Instead, hierarchical architectures are commonly adopted, where intermediate nodes such as relays or edge servers assist in collecting and forwarding information [9]–[11]. In such settings, secure aggregation must be performed over a hierarchical network, giving rise to the problem of hierarchical secure aggregation (HSA) [11]–[16]. This user-relay-server structure naturally arises in large-scale wireless and edge-assisted systems, where multi-hop communication is common.
The introduction of a relay layer fundamentally changes the information flow: messages are processed and forwarded across multiple hops, imposing additional structural constraints on encoding and decoding operations. Secure aggregation protocols must also address two additional practical challenges. First, robustness to user dropouts [17]–[21]: due to unreliable links, device failures, or limited energy, some users may fail to transmit their messages, and their identities are unknown a priori. Second, security under collusion [22]–[26]: the server or a subset of users may collude to infer private information. In hierarchical systems, these challenges are compounded across multiple hops and rounds, creating potential additional avenues for information leakage. A secure aggregation protocol must thus guarantee correct input sum recovery and information-theoretic security under arbitrary dropout and collusion patterns.

Motivated by the above considerations, we study a minimal yet representative two-round communication model for hierarchical secure aggregation with user and relay dropouts. The system consists of U relays, each associated with V users, resulting in a total of UV users. Each User (u, v) holds an input W_{u,v} and an independent offline-generated secret key Z_{u,v}. Communication occurs through the relay layer over two rounds. In the first round, each User (u, v) sends a message to the associated Relay u. Let V^{(1)}_u ⊆ [V] denote the surviving users under Relay u, and U^{(1)} ⊆ [U] the surviving relays. The server observes the surviving user set ⋃_{u∈U^{(1)}} V^{(1)}_u. In the second round, the surviving users from the first round may also drop out. Let V^{(2)}_u ⊆ V^{(1)}_u and U^{(2)} ⊆ U^{(1)} denote the surviving users and relays in the second round. To enable correct recovery, the surviving users in the second round will send additional messages via their associated relays.
After two rounds of communication, the server aims to recover the sum of the inputs of the surviving users in the first round, i.e., Σ_{(u,v)∈⋃_{u∈U^{(1)}} V^{(1)}_u} W_{u,v}. Secure aggregation in this model must satisfy both the correctness and security constraints as follows.

Correctness: For all possible surviving sets, the server can correctly recover the desired sum using the received messages.

Security:
• Server security: Even if the server colludes with up to T users, it cannot infer any additional information about the input set beyond the colluding users' inputs and the input sum.
• Relay security: Any Relay u, colluding with up to T users, cannot infer additional information about the entire input set beyond the inputs of those users.

The goal of this work is to characterize the optimal communication rates over the two rounds and two network hops subject to the correctness and security constraints. Specifically, we determine the minimum number of symbols that must be transmitted over each round and hop to securely compute one symbol of the sum. We establish a threshold phenomenon: if the number of surviving users does not exceed the collusion threshold, secure aggregation is infeasible. Otherwise, we provide matching lower and upper bounds for the first round and for the first hop in the second round, and derive gap bounds for the second hop in the second round.

Prior works have explored extensions of secure aggregation [12], [13], [15], [27]–[31]. However, a sharp information-theoretic characterization of hierarchical secure aggregation with simultaneous dropouts and dual security remains an open problem. Our achievability schemes leverage classical secure multiparty computation techniques, including linear secret sharing and masking [32]–[35], adapted to the hierarchical and dropout-constrained setting.
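To make the masking primitive underlying such schemes concrete, the following minimal sketch (our own toy instantiation, not the protocol of this paper; the field size q and all names are illustrative) shows additive masking over a prime field: the pads are chosen to cancel in the aggregate, so the sum of the masked values reveals the input sum and nothing forces any individual input to be exposed.

```python
import random

q = 2**31 - 1  # illustrative prime field size

def mask_inputs(inputs):
    """Mask each input with a random pad; the pads sum to zero mod q,
    so the pads cancel in the aggregate of the masked values."""
    n = len(inputs)
    pads = [random.randrange(q) for _ in range(n - 1)]
    pads.append((-sum(pads)) % q)  # force the pads to cancel in the sum
    return [(w + z) % q for w, z in zip(inputs, pads)]

inputs = [17, 42, 5]
masked = mask_inputs(inputs)
# The masked sum equals the true sum because the pads cancel.
assert sum(masked) % q == sum(inputs) % q
```

In the hierarchical schemes of this paper the pads are instead derived from structured correlated keys, so that cancellation still holds under arbitrary dropout patterns.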
Converse results rely on Shannon's secrecy framework [36] and extend entropy-based impossibility techniques from private information retrieval [37]–[43]. Compared to existing secure aggregation protocols [44]–[59], our work differs in three key aspects: (i) information-theoretic perfect security guarantees, rather than relying on computational assumptions; (ii) explicit hierarchical multi-hop modeling with user and relay dropouts; and (iii) characterization of the fundamental communication limits through matching achievability and converse bounds.

A. Summary of Contributions

The main contributions of this paper are summarized as follows:
• We propose a two-round hierarchical secure aggregation model that simultaneously captures user and relay dropouts and dual information-theoretic security against collusion with up to T users by either the server or any single relay. This model introduces a four-dimensional rate tuple that quantifies the communication cost across rounds and hops in hierarchical networks.
• Under a delayed message availability model, we establish necessary and sufficient feasibility conditions and fully characterize the optimal first-round communication rates under the correctness and information-theoretic security constraints.
• For the second round, we characterize the optimal user-to-relay communication rate and derive information-theoretic lower and upper bounds on the optimal relay-to-server rate. Although these bounds do not coincide in general, they are tight in several regimes of interest, yielding a near-complete characterization of the optimal communication cost.
• Our results uncover a sharp threshold phenomenon: secure aggregation is feasible if and only if the total number of surviving users across surviving relays exceeds the collusion threshold T.
Achievability is established via vector linear coding with structured and correlated randomness for secret key generation, while the converse follows from entropic secrecy arguments.

II. PROBLEM STATEMENT

Consider secure aggregation in a three-layer hierarchical network consisting of an aggregation server, an intermediate layer with U ≥ 2 relays, and a bottom layer of UV users, where each relay serves a disjoint cluster of V users. The network operates over two hops: the server communicates with all relays, and each relay communicates with its associated users (see Fig. 1 for an example with U = 2 and V = 2). All communication links are assumed to be error-free.

Fig. 1: Example of robust secure aggregation with U = 2 relays and UV = 4 users. In Round 1, User (1, 2) drops out. During the signaling phase, the surviving relays report their surviving-user sets V^{(1)}_u to the server. The server then determines the first-round surviving-user set S^{(1)} and broadcasts it back to the surviving users via the surviving relays. This signaling phase is necessary because users must know the identities of the surviving users in the first round in order to generate subsequent messages. In Round 2, User (2, 1) drops out. The server aims to securely compute W_{1,1} + W_{2,1} + W_{2,2}.

The v-th user associated with the u-th relay is labeled as (u, v) ∈ [U] × [V]. Let M_u ≜ {(u, v) : v ∈ [V]} denote the user cluster served by Relay u. Each User (u, v) possesses an input W_{u,v} ∈ F_q^L, such as local gradients or model parameters in federated learning. The user inputs W_{[U]×[V]} ≜ {W_{u,v}}_{(u,v)∈[U]×[V]} are assumed to be uniformly distributed and mutually independent.¹ To protect the inputs, each User (u, v) is also equipped with a secret key Z_{u,v} of entropy H(Z_{u,v}) = L_Z.
The individual keys Z_{[U]×[V]} ≜ {Z_{u,v}}_{(u,v)∈[U]×[V]} are generated from a source key Z_Σ ∈ F_q^{L_{Z_Σ}} such that H(Z_{[U]×[V]} | Z_Σ) = 0.² The source key Z_Σ is only available to the trusted key generator and is not accessible to any user, relay, or the server. Moreover, the keys are independent of the user inputs, i.e.,

H(Z_{[U]×[V]}, W_{[U]×[V]}) = H(Z_{[U]×[V]}) + Σ_{u∈[U], v∈[V]} H(W_{u,v}),   (1)

H(W_{u,v}) = L (in q-ary units), ∀(u, v) ∈ [U] × [V].   (2)

¹ The assumptions of input uniformity and independence are only used to establish the converse bounds. The proposed secure aggregation schemes ensure security for arbitrarily distributed and correlated inputs.

A. Communication Protocol

The communication takes place over two rounds, each consisting of two hops.

1) First round. In the first round, over the first hop, User (u, v) transmits a message X^{(1)}_{u,v} to its associated Relay u. The message X^{(1)}_{u,v} consists of L^{(1)}_X symbols over the finite field F_q and is generated as a deterministic function of the local input W_{u,v} and the secret key Z_{u,v}, i.e.,

H(X^{(1)}_{u,v} | W_{u,v}, Z_{u,v}) = 0, ∀(u, v) ∈ [U] × [V].   (3)

After the first hop of the first round, an arbitrary subset of users may drop out. For Relay u, u ∈ [U], let V^{(1)}_u ⊂ M_u denote the set of surviving users after the first hop in the first round, which can be an arbitrary subset of cardinality at least V_0, i.e., |V^{(1)}_u| ≥ V_0, where 1 ≤ V_0 ≤ V − 1. Over the second hop, Relay u ∈ [U] generates and sends a message Y^{(1)}_u to the aggregation server as a deterministic function of the messages {X^{(1)}_{u,v}}_{(u,v)∈V^{(1)}_u} received from the surviving users. The message Y^{(1)}_u consists of L^{(1)}_Y symbols over the finite field F_q. Thus,

H(Y^{(1)}_u | {X^{(1)}_{u,v}}_{(u,v)∈V^{(1)}_u}) = 0, ∀u ∈ [U].   (4)

Relays may also drop out after the second hop of the first round.
Let U^{(1)} denote the set of relays that remain active after the first round, which can be any subset of [U] with cardinality at least U_0, where 1 ≤ U_0 ≤ U − 1. The server then receives the messages {Y^{(1)}_u}_{u∈U^{(1)}} and aims to compute the sum of the inputs of all surviving users in the first round. For brevity of notation, denote

[Surviving users in 1st round]   S^{(1)} ≜ ⋃_{u∈U^{(1)}} V^{(1)}_u   (5)

as the set of the surviving users in the first round. The desired sum is then equal to Σ_{(u,v)∈S^{(1)}} W_{u,v}.

Remark 1: Each Relay u ∈ [U] reports to the server the set of its surviving users V^{(1)}_u. Based on the set of surviving relays U^{(1)} and the reported user sets {V^{(1)}_u}_{u∈U^{(1)}}, the server determines the complete set of surviving users in the first round, ⋃_{u∈U^{(1)}} V^{(1)}_u. The server then broadcasts this set to all surviving relays, which forward it to their associated surviving users. This step informs each surviving user of the set of surviving users in the first round, enabling them to generate the appropriate second-round messages.

² We assume the existence of a trusted third-party entity responsible for generating and distributing the individual keys to the users.

2) Second round. In the second round, over the first hop, each User (u, v) that is active at the beginning of the second round, i.e., each (u, v) ∈ S^{(1)}, generates and transmits a message X^{(2)}_{u,v}, which is a deterministic function of W_{u,v} and Z_{u,v}, and consists of L^{(2)}_X symbols from F_q.³

H(X^{(2)}_{u,v} | W_{u,v}, Z_{u,v}) = 0, ∀(u, v) ∈ S^{(1)}.   (6)

After the first hop of the second round, an arbitrary subset of users may drop out. For Relay u ∈ [U], let V^{(2)}_u ⊆ V^{(1)}_u denote the set of surviving users after the first hop in the second round. The set V^{(2)}_u can be any subset satisfying |V^{(2)}_u| ≥ V_0. Over the second hop, each surviving Relay u ∈ U^{(1)} sends a message Y^{(2)}_u to the aggregation server.
This message is a deterministic function of the messages sent by its surviving users:

H(Y^{(2)}_u | {X^{(2)}_{u,v}}_{(u,v)∈V^{(2)}_u}) = 0, ∀u ∈ U^{(1)}.   (7)

The message Y^{(2)}_u consists of L^{(2)}_Y symbols over the finite field F_q. After the second round, an arbitrary subset of relays may drop out. Let U^{(2)} ⊆ U^{(1)} denote the set of relays that remain active after the second round, where we assume |U^{(2)}| ≥ U_0.

B. Correctness and Security

From the messages received from the surviving relays, the aggregation server must be able to recover the desired sum Σ_{(u,v)∈S^{(1)}} W_{u,v} with zero error. Formally, for any relay survival sets U^{(1)} and U^{(2)} satisfying U^{(2)} ⊆ U^{(1)} ⊆ [U] and |U^{(2)}| ≥ U_0, and for any corresponding user survival sets V^{(1)}_u and V^{(2)}_u satisfying V^{(2)}_u ⊆ V^{(1)}_u ⊆ {(u, v)}_{v∈[V]} and |V^{(2)}_u| ≥ V_0 for all u ∈ U^{(2)}, the following correctness constraint must hold:

[Correctness]   H(Σ_{(u,v)∈S^{(1)}} W_{u,v} | {Y^{(1)}_u}_{u∈U^{(1)}}, {Y^{(2)}_u}_{u∈U^{(2)}}) = 0.   (8)

The security constraints require that (i) Relay security: each relay should not obtain any information about the users' inputs W_{[U]×[V]}, and (ii) Server security: the aggregation server cannot learn any information about W_{[U]×[V]} beyond the desired sum Σ_{(u,v)∈S^{(1)}} W_{u,v}, even if a relay or the server colludes with any set T of at most T users. For any Relay u ∈ [U] and any colluding user set T with |T| ≤ T, under the delayed message availability model (see Remark 2), the relay security constraint is given by

[Relay security]   I({X^{(1)}_{u,v}}_{(u,v)∈M_u}, {X^{(2)}_{u,v}}_{(u,v)∈V^{(1)}_u} ; W_{[U]×[V]} | {W_{u,v}, Z_{u,v}}_{(u,v)∈T}) = 0.   (9)

³ We consider the worst-case message length L^{(2)}_X over all users and all first-round surviving user sets.
Remark 2 (Delayed Message Availability Model at the Relays): Throughout the security analysis, we adopt a delayed message availability model. Specifically, a relay is assumed to eventually obtain all uplink messages transmitted by users that were active at the beginning of each round, even if those users drop out during the round. As a result, in the first round Relay u is assumed to observe {X^{(1)}_{u,v}}_{(u,v)∈M_u}, and in the second round it is assumed to observe {X^{(2)}_{u,v}}_{(u,v)∈V^{(1)}_u}. This model corresponds to a worst-case timing assumption on message delivery and dropout events, and it leads to a strictly stronger relay security requirement.

For any colluding user set T with |T| ≤ T, the server security constraint requires that

[Server security]   I({Y^{(1)}_u}_{u∈[U]}, {Y^{(2)}_u}_{u∈U^{(1)}} ; W_{[U]×[V]} | Σ_{(u,v)∈S^{(1)}} W_{u,v}, {W_{u,v}, Z_{u,v}}_{(u,v)∈T}) = 0.   (10)

Remark 3 (Delayed Message Availability at the Server): In the server security constraint, the aggregation server is assumed to have access to all relay messages transmitted in the first round, including those from relays that subsequently drop out, as well as all second-round messages from relays that remain active after the first round. This models a delayed message availability scenario in which messages from dropped relays may be delayed and eventually become available to the server. Under this assumption, the server is allowed to observe {Y^{(1)}_u}_{u∈[U]} and {Y^{(2)}_u}_{u∈U^{(1)}}, leading to a strictly stronger server security requirement.

C. Communication Rates and Achievable Region

The communication rate characterizes the number of transmitted symbols per input symbol and is defined as

R^{(1)}_X ≜ L^{(1)}_X / L,   R^{(1)}_Y ≜ L^{(1)}_Y / L,   R^{(2)}_X ≜ L^{(2)}_X / L,   R^{(2)}_Y ≜ L^{(2)}_Y / L.
(11)

Here, R^{(1)}_X and R^{(1)}_Y denote the message rates of the first and second hops in the first round, respectively, while R^{(2)}_X and R^{(2)}_Y denote the message rates of the first and second hops in the second round. A rate tuple (R^{(1)}_X, R^{(1)}_Y, R^{(2)}_X, R^{(2)}_Y) is said to be achievable if there exists a secure aggregation scheme, i.e., a construction of the secret keys {Z_{u,v}}_{(u,v)∈[U]×[V]} and the messages {X^{(1)}_{u,v}}_{(u,v)∈[U]×[V]}, {Y^{(1)}_u}_{u∈[U]}, {X^{(2)}_{u,v}}_{(u,v)∈V^{(1)}_u}, and {Y^{(2)}_u}_{u∈U^{(1)}}, such that the correctness constraint (8) and the security constraints (9), (10) are satisfied. The optimal rate region, denoted by R*, is defined as the closure of all achievable rate tuples. In addition, let R^{(1),*}_X, R^{(1),*}_Y, R^{(2),*}_X, and R^{(2),*}_Y denote the individually minimum rates, i.e.,

R^{(i),*}_l ≜ min{R^{(i)}_l : (R^{(1)}_X, R^{(1)}_Y, R^{(2)}_X, R^{(2)}_Y) ∈ R*},   l ∈ {X, Y},   i = 1, 2.

III. MAIN RESULTS

Theorem 1: For hierarchical secure aggregation with U relays, V users per relay, dropout thresholds V_0 and U_0, and collusion threshold T, the optimal rate region R* is given by

R* = {(R^{(1)}_X, R^{(1)}_Y, R^{(2)}_X, R^{(2)}_Y) : R^{(1)}_X ≥ 1, R^{(1)}_Y ≥ 1, R^{(2)}_X ≥ V_0/(U_0 V_0 − T), R^{(2)}_Y ≥ R^{(2),*}_Y}, if U_0 V_0 > T,
R* = ∅, if U_0 V_0 ≤ T,   (12)

where

1/(U_0 − ⌊T/V_0⌋) ≤ R^{(2),*}_Y ≤ 1/(U_0 − T/V_0).   (13)

We provide intuitive explanations for Theorem 1 as follows:

1) Infeasibility: When T ≥ U_0 V_0, i.e., the minimum number of surviving users is no greater than the collusion threshold, the information-theoretic secure aggregation problem is infeasible. In this regime, the number of independent user contributions is insufficient to simultaneously provide the randomness required for security and the degrees of freedom required for correct decoding.
Hence, the correctness constraint (8) and the security constraints (9), (10) cannot be satisfied simultaneously.

2) First-round rates R^{(1),*}_X = R^{(1),*}_Y = 1: In the first round, each user and relay essentially transmits its input in full. Since this round is used to collect raw information from all users before any dropout occurs, no coding gain is possible. Hence, the rates are equal to 1.

3) Second-round user rate R^{(2),*}_X = V_0/(U_0 V_0 − T): In the second round, surviving users send enough information so that the server can reconstruct the sum of all surviving inputs while remaining secure against any collusion of at most T users. The numerator V_0 represents the minimum number of users per relay that survive, and the denominator U_0 V_0 − T corresponds to the effective number of independent contributions after accounting for colluding users. This choice ensures information-theoretic security while minimizing redundancy.

4) Second-round relay rate R^{(2),*}_Y bounds: Each relay aggregates its surviving users' messages and forwards them to the server. The bounds 1/(U_0 − ⌊T/V_0⌋) ≤ R^{(2),*}_Y ≤ 1/(U_0 − T/V_0) consist of a converse lower bound R^{(2),*}_Y ≥ 1/(U_0 − ⌊T/V_0⌋) and an achievable upper bound R^{(2),*}_Y ≤ 1/(U_0 − T/V_0). Note that the upper and lower bounds coincide when either V_0 divides T or T = 0 (no collusion), thus characterizing the optimal rate region in such scenarios.

IV. ACHIEVABILITY PROOF OF THEOREM 1

Prior to the formal proof of the general achievability, we demonstrate the approach through two illustrative examples. The method is relatively simple and is based on standard vector linear coding techniques.

A. Example 1: U = 2, V = 2, U_0 = 2, V_0 = 1, T = 0

Consider UV = 4 users. Each relay has at least V_0 = 1 responding user, and the server receives responses from at least U_0 = 2 relays. There is no collusion between the users and the server or the relays, i.e., T = 0.
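Before walking through the construction, the feasibility condition and the rate expressions of Theorem 1 can be evaluated directly. The following sketch (our own helper function; the dictionary keys and names are illustrative) uses the floor-based converse bound and the achievable upper bound for the second-round relay rate, and checks the parameters of this example:

```python
from fractions import Fraction

def theorem1_rates(U0, V0, T):
    """Return None if secure aggregation is infeasible (U0*V0 <= T);
    otherwise the optimal first-round rates, the optimal second-round
    user rate, and (lower, upper) bounds on the second-round relay rate."""
    if U0 * V0 <= T:
        return None  # infeasible regime of Theorem 1
    RX2 = Fraction(V0, U0 * V0 - T)                   # V0 / (U0*V0 - T)
    RY2_lo = Fraction(1, U0 - T // V0)                # converse: 1 / (U0 - floor(T/V0))
    RY2_up = Fraction(1, 1) / (U0 - Fraction(T, V0))  # achievable: 1 / (U0 - T/V0)
    return {"RX1": 1, "RY1": 1, "RX2": RX2, "RY2": (RY2_lo, RY2_up)}

# Example 1 parameters (U0, V0, T) = (2, 1, 0): all second-round rates are 1/2.
r = theorem1_rates(2, 1, 0)
assert r["RX2"] == Fraction(1, 2) and r["RY2"] == (Fraction(1, 2), Fraction(1, 2))
# The two relay-rate bounds coincide whenever V0 divides T.
assert theorem1_rates(2, 2, 2)["RY2"][0] == theorem1_rates(2, 2, 2)["RY2"][1]
# Infeasible once U0*V0 <= T.
assert theorem1_rates(2, 1, 2) is None
```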
The input length is set to L = U_0 V_0 − T = 2, so that each input satisfies W_{u,v} ∈ F_q^{2×1}. To specify the correlated randomness, generate 4 i.i.d. uniform random vectors {N_{u,v}}_{(u,v)∈[2]×[2]}, where N_{u,v} ∈ F_q^{2×1}. We construct linear combinations corresponding to the sums of all subsets of {N_{1,1}, N_{1,2}, N_{2,1}, N_{2,2}} with cardinality at least U_0 V_0 = 2. For each User (u, v), define the randomness variable as

Z_{u,v} = (N_{u,v}, {N_{i,j}(1) + 2^{2u+v−3} N_{i,j}(2)}_{(i,j)∈[2]×[2]}),   ∀(u, v) ∈ [2] × [2].   (14)

Example: For (u, v) = (2, 1), we have

Z_{2,1} = (N_{2,1}, N_{1,1}(1) + 2^2 N_{1,1}(2), N_{1,2}(1) + 2^2 N_{1,2}(2), N_{2,1}(1) + 2^2 N_{2,1}(2), N_{2,2}(1) + 2^2 N_{2,2}(2)).   (15)

This completes the construction of the correlated secret keys. We next describe the message design over the two communication rounds.

First hop, first round: each user transmits its input masked by local noise:

X^{(1)}_{u,v} = W_{u,v} + N_{u,v},   ∀(u, v) ∈ [2] × [2].   (16)

Second hop, first round: each relay aggregates messages from its surviving users V^{(1)}_u ⊆ M_u:

Y^{(1)}_u = Σ_{(u,v)∈V^{(1)}_u} (W_{u,v} + N_{u,v}),   u ∈ [2].   (17)

Example surviving user combinations:
• V^{(1)}_1 = {(1,1)}, V^{(1)}_2 = {(2,2)}:
Y^{(1)}_1 = W_{1,1} + N_{1,1},   Y^{(1)}_2 = W_{2,2} + N_{2,2}.   (18)
• V^{(1)}_1 = {(1,1), (1,2)}, V^{(1)}_2 = {(2,1)}:
Y^{(1)}_1 = W_{1,1} + W_{1,2} + N_{1,1} + N_{1,2},   Y^{(1)}_2 = W_{2,1} + N_{2,1}.   (19)
• V^{(1)}_1 = {(1,1), (1,2)}, V^{(1)}_2 = {(2,1), (2,2)}:
Y^{(1)}_1 = W_{1,1} + W_{1,2} + N_{1,1} + N_{1,2},   Y^{(1)}_2 = W_{2,1} + W_{2,2} + N_{2,1} + N_{2,2}.   (20)

After the first round, each Relay u ∈ [2] reports to the server the set of its surviving users V^{(1)}_u. Based on the set of surviving relays U^{(1)} and the reported user sets {V^{(1)}_u}_{u∈U^{(1)}}, the server determines the complete set of users that survive the first round, denoted by S^{(1)} ≜ ∪_{u∈U^{(1)}} V^{(1)}_u.
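The first-round encoding (16)-(17) can be sketched as follows; the prime q = 97 and all helper names are our own illustrative choices, with length-2 vectors over F_q represented as Python lists:

```python
import random

q = 97   # illustrative prime field size
L = 2    # input length, L = U0*V0 - T = 2 in this example

def rand_vec():
    return [random.randrange(q) for _ in range(L)]

def vec_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

# Inputs W_{u,v} and first-round masks N_{u,v} for U = V = 2.
W = {(u, v): rand_vec() for u in (1, 2) for v in (1, 2)}
N = {(u, v): rand_vec() for u in (1, 2) for v in (1, 2)}
# Eq. (16): X(1)_{u,v} = W_{u,v} + N_{u,v}.
X1 = {uv: vec_add(W[uv], N[uv]) for uv in W}

def relay_message(survivors):
    """Eq. (17): Y(1)_u sums the masked messages of the relay's surviving users."""
    total = [0] * L
    for uv in survivors:
        total = vec_add(total, X1[uv])
    return total

# Surviving sets V(1)_1 = {(1,1)}, V(1)_2 = {(2,2)}, as in (18).
Y1 = relay_message([(1, 1)])
Y2 = relay_message([(2, 2)])
expected = vec_add(vec_add(W[(1, 1)], N[(1, 1)]), vec_add(W[(2, 2)], N[(2, 2)]))
assert vec_add(Y1, Y2) == expected  # Y(1)_1 + Y(1)_2 = sum of (W + N) over S(1)
```

Note that each relay sees only masked quantities W_{u,v} + N_{u,v}, which matches the first-round security intuition.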
The server then broadcasts S^{(1)} to all surviving relays, which forward it to their associated surviving users and request the transmission of the second-round messages. Since U = U_0 = 2, we have U^{(1)} = U^{(2)} = {1, 2}.

First hop, second round: each surviving User (u, v) ∈ S^{(1)} transmits a linear combination of all surviving users' randomness:

X^{(2)}_{u,v} = Σ_{(i,j)∈S^{(1)}} (N_{i,j}(1) + 2^{2u+v−3} N_{i,j}(2)).   (21)

Example:
• S^{(1)} = {(1,1), (2,2)}:
X^{(2)}_{1,1} = N_{1,1}(1) + N_{1,1}(2) + N_{2,2}(1) + N_{2,2}(2),
X^{(2)}_{2,2} = N_{1,1}(1) + 2^3 N_{1,1}(2) + N_{2,2}(1) + 2^3 N_{2,2}(2).   (22)
• S^{(1)} = {(1,1), (1,2), (2,1)}:
X^{(2)}_{1,1} = N_{1,1}(1) + N_{1,1}(2) + N_{1,2}(1) + N_{1,2}(2) + N_{2,1}(1) + N_{2,1}(2),
X^{(2)}_{1,2} = N_{1,1}(1) + 2 N_{1,1}(2) + N_{1,2}(1) + 2 N_{1,2}(2) + N_{2,1}(1) + 2 N_{2,1}(2),
X^{(2)}_{2,1} = N_{1,1}(1) + 2^2 N_{1,1}(2) + N_{1,2}(1) + 2^2 N_{1,2}(2) + N_{2,1}(1) + 2^2 N_{2,1}(2).   (23)
• S^{(1)} = {(1,1), (1,2), (2,1), (2,2)}:
X^{(2)}_{1,1} = N_{1,1}(1) + N_{1,1}(2) + N_{1,2}(1) + N_{1,2}(2) + N_{2,1}(1) + N_{2,1}(2) + N_{2,2}(1) + N_{2,2}(2),
X^{(2)}_{1,2} = N_{1,1}(1) + 2 N_{1,1}(2) + N_{1,2}(1) + 2 N_{1,2}(2) + N_{2,1}(1) + 2 N_{2,1}(2) + N_{2,2}(1) + 2 N_{2,2}(2),
X^{(2)}_{2,1} = N_{1,1}(1) + 2^2 N_{1,1}(2) + N_{1,2}(1) + 2^2 N_{1,2}(2) + N_{2,1}(1) + 2^2 N_{2,1}(2) + N_{2,2}(1) + 2^2 N_{2,2}(2),
X^{(2)}_{2,2} = N_{1,1}(1) + 2^3 N_{1,1}(2) + N_{1,2}(1) + 2^3 N_{1,2}(2) + N_{2,1}(1) + 2^3 N_{2,1}(2) + N_{2,2}(1) + 2^3 N_{2,2}(2).   (24)

Second hop, second round: each relay forwards V_0 messages from its surviving users V^{(2)}_u. Let Q_u ⊆ V^{(2)}_u with |Q_u| = V_0; then

Y^{(2)}_u = {X^{(2)}_{u,v}}_{v∈Q_u},   u ∈ [2].
(25)

Example:
• S^{(1)} = {(1,1), (2,2)}, V^{(2)}_1 = {(1,1)}, V^{(2)}_2 = {(2,2)}:
Y^{(2)}_1 = {X^{(2)}_{1,1}} = {N_{1,1}(1) + N_{1,1}(2) + N_{2,2}(1) + N_{2,2}(2)},
Y^{(2)}_2 = {X^{(2)}_{2,2}} = {N_{1,1}(1) + 2^3 N_{1,1}(2) + N_{2,2}(1) + 2^3 N_{2,2}(2)}.   (26)
• S^{(1)} = {(1,1), (1,2), (2,1)}, V^{(2)}_1 = {(1,1), (1,2)}, V^{(2)}_2 = {(2,1)}:
Y^{(2)}_1 = {X^{(2)}_{1,1}} = {N_{1,1}(1) + N_{1,1}(2) + N_{1,2}(1) + N_{1,2}(2) + N_{2,1}(1) + N_{2,1}(2)},
Y^{(2)}_2 = {X^{(2)}_{2,1}} = {N_{1,1}(1) + 2^2 N_{1,1}(2) + N_{1,2}(1) + 2^2 N_{1,2}(2) + N_{2,1}(1) + 2^2 N_{2,1}(2)}.   (27)
• S^{(1)} = {(1,1), (1,2), (2,1), (2,2)}, V^{(2)}_1 = {(1,1), (1,2)}, V^{(2)}_2 = {(2,1), (2,2)}:
Y^{(2)}_1 = {X^{(2)}_{1,1}} = {N_{1,1}(1) + N_{1,1}(2) + N_{1,2}(1) + N_{1,2}(2) + N_{2,1}(1) + N_{2,1}(2) + N_{2,2}(1) + N_{2,2}(2)},
Y^{(2)}_2 = {X^{(2)}_{2,1}} = {N_{1,1}(1) + 2^2 N_{1,1}(2) + N_{1,2}(1) + 2^2 N_{1,2}(2) + N_{2,1}(1) + 2^2 N_{2,1}(2) + N_{2,2}(1) + 2^2 N_{2,2}(2)}.   (28)

Based on the above construction, we next analyze the achievable rates of the scheme and show that, under these rates, both the correctness and security requirements are satisfied.

Rate: We first specify the communication rates of the proposed scheme. The input length of each user is L = 2 symbols over F_q. In the first round, each user transmits L^{(1)}_X = 2 symbols and each relay forwards L^{(1)}_Y = 2 symbols. Hence, the first-round rates are R^{(1)}_X = L^{(1)}_X / L = 1 and R^{(1)}_Y = L^{(1)}_Y / L = 1. In the second round, each user transmits one symbol and each relay also forwards one symbol, i.e., L^{(2)}_X = L^{(2)}_Y = 1. Therefore, the second-round rates are R^{(2)}_X = L^{(2)}_X / L = 1/2 and R^{(2)}_Y = L^{(2)}_Y / L = 1/2.
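The server's recovery of the aggregate key from the two messages in (26) amounts to inverting a 2 × 2 Vandermonde system in the coefficients 2^{2u+v−3}. A minimal sketch follows (the prime q = 97 is our own choice; any field in which the used coefficients remain distinct would do):

```python
import random

q = 97  # illustrative prime field

# Aggregate key components over the surviving set S(1) = {(1,1), (2,2)}:
# A = sum of N_{i,j}(1), B = sum of N_{i,j}(2).
N = {(i, j): (random.randrange(q), random.randrange(q)) for (i, j) in [(1, 1), (2, 2)]}
A = sum(n[0] for n in N.values()) % q
B = sum(n[1] for n in N.values()) % q

def coeff(u, v):
    """Coefficient 2^{2u+v-3} of the second key component, as in (21)."""
    return pow(2, 2 * u + v - 3, q)

# Second-round messages, as in (22): X(2)_{u,v} = A + 2^{2u+v-3} * B.
X = {(u, v): (A + coeff(u, v) * B) % q for (u, v) in [(1, 1), (2, 2)]}

# Server side: invert the 2x2 Vandermonde system [[1, c1], [1, c2]] (A, B)^T = X.
c1, c2 = coeff(1, 1), coeff(2, 2)                           # 1 and 2^3 = 8
B_hat = (X[(2, 2)] - X[(1, 1)]) * pow(c2 - c1, -1, q) % q   # modular inverse of (c2 - c1)
A_hat = (X[(1, 1)] - c1 * B_hat) % q
assert (A_hat, B_hat) == (A, B)  # aggregate key recovered exactly
```

Distinctness of the coefficients c1 ≠ c2 is exactly the MDS-like property invoked in the correctness argument below: any two of the four messages (24) suffice to solve for (A, B).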
Equivalently, these rates can be expressed in terms of the system parameters as R^{(2)}_X = 1/(U_0 V_0 − T) and R^{(2)}_Y = 1/(U_0 − T/V_0), which evaluate to 1/2 for the considered setting (U_0, V_0, T) = (2, 1, 0). With these rates, the proposed scheme satisfies both the correctness and the security requirements, as shown below.

Correctness: Let S^{(1)} = V^{(1)}_1 ∪ V^{(1)}_2 denote the set of surviving users after the first hop of the first round, and let V^{(2)}_u ⊆ V^{(1)}_u ⊆ S^{(1)} denote the surviving users under Relay u in the second round. Since V_0 = 1 and U_0 = 2, we have |V^{(1)}_u| ≥ V_0, |V^{(2)}_u| ≥ V_0, and |U^{(1)}| = |U^{(2)}| = U_0 for all u ∈ [2]: at least U_0 relays survive, and each relay can always select V_0 messages from its users to send. From the second-round messages {Y^{(2)}_u}_{u∈[2]} (see (25)), the server can recover the aggregate key Σ_{(i,j)∈S^{(1)}} N_{i,j} = (Σ_{(i,j)∈S^{(1)}} N_{i,j}(1), Σ_{(i,j)∈S^{(1)}} N_{i,j}(2)) with no error, because the coefficients in (14) satisfy the MDS property. Combining this with the sum of the first-round messages Y^{(1)}_1 + Y^{(1)}_2 = Σ_{(i,j)∈S^{(1)}} (W_{i,j} + N_{i,j}), the server can decode the desired sum Σ_{(i,j)∈S^{(1)}} W_{i,j} with no error. Thus, the scheme is correct for all possible user dropouts in both rounds.

Example cases:
• S^{(1)} = {(1,1), (2,1)}: Users (1,2) and (2,2) drop in the first round. The server recovers N_{1,1} + N_{2,1} = (N_{1,1}(1) + N_{2,1}(1), N_{1,1}(2) + N_{2,1}(2)) from Y^{(2)}_1, Y^{(2)}_2 (see (26)) and then W_{1,1} + W_{2,1} = Y^{(1)}_1 + Y^{(1)}_2 − (N_{1,1} + N_{2,1}) (see (18)).
• S^{(1)} = {(1,1), (1,2), (2,1)}: User (2,2) drops. If one user per relay drops in the second round, the server still recovers N_{1,1} + N_{1,2} + N_{2,1} = (N_{1,1}(1) + N_{1,2}(1) + N_{2,1}(1), N_{1,1}(2) + N_{1,2}(2) + N_{2,1}(2)) (see (27)) and then W_{1,1} + W_{1,2} + W_{2,1} = Y^{(1)}_1 + Y^{(1)}_2 − (N_{1,1} + N_{1,2} + N_{2,1}) (see (19)).
• S^{(1)} = {(1,1), (1,2), (2,1), (2,2)}: No users drop in the first round. The server recovers N_{1,1} + N_{1,2} + N_{2,1} + N_{2,2} = (N_{1,1}(1) + N_{1,2}(1) + N_{2,1}(1) + N_{2,2}(1), N_{1,1}(2) + N_{1,2}(2) + N_{2,1}(2) + N_{2,2}(2)) (see (28)) and then W_{1,1} + W_{1,2} + W_{2,1} + W_{2,2} = Y^{(1)}_1 + Y^{(1)}_2 − (N_{1,1} + N_{1,2} + N_{2,1} + N_{2,2}) (see (20)).

Security: The security of the proposed scheme relies on the following intuition. In the first round, each user message is masked by independent randomness, which guarantees information-theoretic security. In the second round, the relays transmit carefully designed linear combinations of the randomness symbols, which provide exactly the amount of side information needed to recover the desired sum, and no more. To verify the security constraints, it suffices to consider one representative admissible realization. Specifically, we consider the case S^{(1)} = {(1,1), (1,2), (2,1), (2,2)}, V^{(2)}_1 = {(1,1), (1,2)}, V^{(2)}_2 = {(2,1), (2,2)}. We show that both the relay security constraint (9) and the server security constraint (10) are satisfied.

Relay security: Consider Relay 1. We have

I(X^{(1)}_{1,1}, X^{(1)}_{1,2}, X^{(2)}_{1,1}, X^{(2)}_{1,2} ; W_{1,1}, W_{1,2}, W_{2,1}, W_{2,2})
= H(X^{(1)}_{1,1}, X^{(1)}_{1,2}, X^{(2)}_{1,1}, X^{(2)}_{1,2}) − H(X^{(1)}_{1,1}, X^{(1)}_{1,2}, X^{(2)}_{1,1}, X^{(2)}_{1,2} | W_{1,1}, W_{1,2}, W_{2,1}, W_{2,2})
≤ 6 − H(N_{1,1}, N_{1,2}, N_{2,1}(1) + N_{2,1}(2) + N_{2,2}(1) + N_{2,2}(2), N_{2,1}(1) + 2 N_{2,1}(2) + N_{2,2}(1) + 2 N_{2,2}(2) | W_{1,1}, W_{1,2}, W_{2,1}, W_{2,2})
= 6 − 6 = 0.

Hence, Relay 1 obtains no information about the users' inputs, and the relay security constraint is satisfied. The proofs for the other relays follow similarly.

Server security: The target sum that the server is allowed to learn is W_{1,1} + W_{1,2} + W_{2,1} + W_{2,2}.
To verify server security, we consider the conditional mutual information:
$I\big(W_{1,1}, W_{1,2}, W_{2,1}, W_{2,2};\, Y^{(1)}_1, Y^{(1)}_2, Y^{(2)}_1, Y^{(2)}_2 \,\big|\, W_{1,1}+W_{1,2}+W_{2,1}+W_{2,2}\big)$
$= H\big(W_{1,1}+W_{1,2}+N_{1,1}+N_{1,2},\; W_{2,1}+W_{2,2}+N_{2,1}+N_{2,2},\; \sum_{(i,j)\in\mathcal{S}^{(1)}}\big(N_{i,j}(1)+N_{i,j}(2)\big),\; \sum_{(i,j)\in\mathcal{S}^{(1)}}\big(N_{i,j}(1)+2^2 N_{i,j}(2)\big) \,\big|\, W_{1,1}+W_{1,2}+W_{2,1}+W_{2,2}\big) - H\big(N_{1,1}+N_{1,2},\; N_{2,1}+N_{2,2},\; \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(2) \,\big|\, W_{1,1}, W_{1,2}, W_{2,1}, W_{2,2}\big)$ (29)
$= H\big(W_{1,1}+W_{1,2}+N_{1,1}+N_{1,2},\; W_{2,1}+W_{2,2}+N_{2,1}+N_{2,2} \,\big|\, W_{1,1}+W_{1,2}+W_{2,1}+W_{2,2}\big) - H\big(N_{1,1}+N_{1,2},\; N_{2,1}+N_{2,2}\big)$ (30)
$\le 4 - 4 = 0.$ (31)
Therefore, the server learns no additional information about the individual messages beyond the desired sum, and the server security constraint is satisfied. In this representative case, both the relay security and the server security constraints hold. Other admissible realizations differ only in the specific indices involved, while the entropy structure and the independence of the randomness remain unchanged. Hence, they can be analyzed in the same manner and lead to the same conclusion.

B. Example 2: $U = 3$, $V = 3$, $U_0 = 2$, $V_0 = 2$, $T = 2$

We consider a system with $UV = 9$ users. Each relay receives messages from at least $V_0 = 2$ users, and the server receives messages from at least $U_0 = 2$ relays. Furthermore, at most $T = 2$ users may collude with the server or with any relay. We choose the input length as $L = U_0 V_0 - T = 2$. Accordingly, each user input is given by $W_{u,v} = (W_{u,v}(1), W_{u,v}(2))^\top \in \mathbb{F}_q^{2\times 1}$. Next, we define the correlated randomness.
Each User $(u,v) \in [3]\times[3]$ has two independent and uniformly distributed random vectors over $\mathbb{F}_q$: $N_{u,v} = (N_{u,v}(1), N_{u,v}(2))^\top \in \mathbb{F}_q^{2\times 1}$ and $S_{u,v} = (S_{u,v}(1), S_{u,v}(2))^\top \in \mathbb{F}_q^{2\times 1}$. In total, there are $2UV = 18$ such vectors across all users. From these local random vectors, we construct $UV = 9$ generic linearly independent linear combinations, which will be used to encode the additional randomness accessible to each user. Specifically, for each User $(u,v)$, the available randomness is defined as
$Z_{u,v} = \Big(N_{u,v},\, \big\{(N_{i,j}(1), N_{i,j}(2), S_{i,j}(1), S_{i,j}(2)) \cdot \alpha_{u,v}\big\}_{(i,j)\in[3]\times[3]}\Big),$ (32)
where $\alpha_{u,v} \in \mathbb{F}_q^{4\times 1}$ denotes the $(3(u-1)+v)$-th column of a $4\times 9$ MDS matrix $\alpha$. Here, the multiplication is a standard row-vector times column-vector product, producing a scalar for each $(i,j)$. In other words, we instantiate the abstract MDS column vector $\alpha_{u,v}$ as $\alpha_{u,v} = \big(1,\, 2^{3(u-1)+v-1},\, 3^{3(u-1)+v-1},\, 4^{3(u-1)+v-1}\big)^\top$ for the inner-product computation. This construction ensures that each user has access to both private and global linear combinations of the randomness, which is essential for achieving the desired security and recoverability properties in the system. For example, when $u=1$ and $v=3$, the exponent is $3(u-1)+v-1 = 2$, and the available randomness of User $(1,3)$ is
$Z_{1,3} = \Big(N_{1,3},\, \big\{N_{i,j}(1) + 2^2 N_{i,j}(2) + 3^2 S_{i,j}(1) + 4^2 S_{i,j}(2)\big\}_{(i,j)\in[3]\times[3]}\Big).$ (33)
We have now completed the design of the correlated secret keys. We next describe how the messages are constructed over the two communication rounds.
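The concrete $\alpha$ above is of generalized-Vandermonde type, and the text only asserts that an MDS choice exists for a sufficiently large field size $q$. The following sketch (our illustration, not part of the scheme) builds the $4\times 9$ matrix with columns $(1, 2^k, 3^k, 4^k)^\top$, $k = 0,\dots,8$, and searches for the smallest prime $q$ over which every $4\times 4$ submatrix is invertible, i.e., over which $\alpha$ is MDS:

```python
# Sketch (our illustration): build the 4 x 9 matrix alpha of Example 2 and
# find a prime field size q for which alpha is MDS, i.e. every 4 x 4
# submatrix is invertible over F_q.
from itertools import combinations
from fractions import Fraction

alpha = [[b ** k for k in range(9)] for b in (1, 2, 3, 4)]

def int_det(rows):
    """Exact determinant of a small integer matrix via fraction-based elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    n, det = len(m), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return int(det)

# Determinants of all C(9,4) = 126 column subsets; each is a fixed nonzero
# integer because the bases 1, 2, 3, 4 are distinct positive reals.
dets = [int_det([[alpha[r][c] for c in cols] for r in range(4)])
        for cols in combinations(range(9), 4)]

def is_mds_mod(q):
    """alpha is MDS over F_q iff no submatrix determinant vanishes mod q."""
    return all(d % q != 0 for d in dets)

def primes():
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

# Smallest prime q making alpha MDS over F_q ("sufficiently large q").
q = next(p for p in primes() if is_mds_mod(p))
print(q)
```

Since each submatrix determinant is a fixed nonzero integer, only finitely many primes divide any of them, so the search above always terminates.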
First round, first hop: Each user transmits
$X^{(1)}_{u,v} = W_{u,v} + N_{u,v}, \quad \forall (u,v) \in [3]\times[3].$ (34)

First round, second hop: For any $\mathcal{V}^{(1)}_u \subseteq \{(u,v)\}_{v\in[3]}$ with $|\mathcal{V}^{(1)}_u| \ge V_0 = 2$, Relay $u$ computes
$Y^{(1)}_u = \sum_{(u,v)\in\mathcal{V}^{(1)}_u} \big(W_{u,v} + N_{u,v}\big), \quad \forall u \in [3],$ (35)
and sends it to the server. For example, if $\mathcal{S}^{(1)} = \{(1,1),(1,2),(2,1),(2,2),(2,3),(3,1),(3,3)\}$, we set
$Y^{(1)}_1 = W_{1,1}+W_{1,2}+N_{1,1}+N_{1,2}$, $Y^{(1)}_2 = W_{2,1}+W_{2,2}+W_{2,3}+N_{2,1}+N_{2,2}+N_{2,3}$, $Y^{(1)}_3 = W_{3,1}+W_{3,3}+N_{3,1}+N_{3,3}$. (36)
After the first round, each relay reports to the server the set of its surviving users $\mathcal{V}^{(1)}_u$. Based on the set of surviving relays $\mathcal{U}^{(1)}$ and the reported user sets $\{\mathcal{V}^{(1)}_u\}_{u\in\mathcal{U}^{(1)}}$, the server determines the complete set of users that survive the first round, denoted by $\mathcal{S}^{(1)} \triangleq \cup_{u\in\mathcal{U}^{(1)}} \mathcal{V}^{(1)}_u$. The server then broadcasts $\mathcal{S}^{(1)}$ to all surviving relays, which forward it to their associated surviving users and request the transmission of the second-round messages.

Second round, first hop: For each surviving User $(u,v) \in \mathcal{S}^{(1)}$, we set
$X^{(2)}_{u,v} = \sum_{(i,j)\in\mathcal{S}^{(1)}} \big(N_{i,j}(1), N_{i,j}(2), S_{i,j}(1), S_{i,j}(2)\big) \cdot \alpha_{u,v} = \sum_{(i,j)\in\mathcal{S}^{(1)}} \big(N_{i,j}(1) + 2^{3u+v-4} N_{i,j}(2) + 3^{3u+v-4} S_{i,j}(1) + 4^{3u+v-4} S_{i,j}(2)\big).$
(37)
For example, if $\mathcal{S}^{(1)} = \{(1,1),(1,2),(2,1),(2,2),(2,3),(3,1),(3,3)\}$ and $(u,v) = (2,1)$, we set
$X^{(2)}_{2,1} = \sum_{(i,j)\in\mathcal{S}^{(1)}} \big(N_{i,j}(1) + 2^3 N_{i,j}(2) + 3^3 S_{i,j}(1) + 4^3 S_{i,j}(2)\big).$ (38)

Second round, second hop: Since $|\mathcal{V}^{(2)}_u| \ge V_0$, each Relay $u$ selects a subset $\mathcal{Q}_u \subseteq \mathcal{V}^{(2)}_u$ with $|\mathcal{Q}_u| = V_0$ and forwards the corresponding messages to the server. Specifically,
$Y^{(2)}_u = \{X^{(2)}_{u,v}\}_{v\in\mathcal{Q}_u}, \quad u \in [3].$ (39)
For example, if $\mathcal{V}^{(2)}_2 = \{(2,1),(2,2),(2,3)\}$, we choose $\mathcal{Q}_2 = \{(2,1),(2,2)\}$ and set
$Y^{(2)}_1 = \{X^{(2)}_{1,1}, X^{(2)}_{1,2}\}$, $Y^{(2)}_2 = \{X^{(2)}_{2,1}, X^{(2)}_{2,2}\}$, $Y^{(2)}_3 = \{X^{(2)}_{3,1}, X^{(2)}_{3,3}\}$. (40)
We next analyze the achievable rates and establish correctness and security.

Rate: Finally, we specify the communication rates of the proposed scheme. The first-round rates are given by $R^{(1)}_X = L^{(1)}_X/L = 1$ and $R^{(1)}_Y = L^{(1)}_Y/L = 1$. For the second round, we have $R^{(2)}_X = L^{(2)}_X/L = \frac{1}{2}$ and $R^{(2)}_Y = L^{(2)}_Y/L = 1$. Equivalently, the second-round rates can be expressed in terms of the system parameters as $R^{(2)}_X = \frac{1}{U_0 V_0 - T}$ and $R^{(2)}_Y = \frac{1}{U_0 - T/V_0}$. With these rates, the proposed scheme satisfies both the correctness and the security requirements, as shown below.

Correctness: Let $\mathcal{S}^{(1)} = \mathcal{V}^{(1)}_1 \cup \mathcal{V}^{(1)}_2 \cup \mathcal{V}^{(1)}_3$ denote the set of surviving users after the first hop of the first round, and let $\mathcal{V}^{(2)}_u \subseteq \mathcal{V}^{(1)}_u \subseteq \mathcal{S}^{(1)}$ denote the set of surviving users at Relay $u$ in the second round.
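As a quick numerical cross-check of the second-round rate formulas just given (a sketch of ours; the function name is an assumption, not the paper's notation), both worked examples can be evaluated:

```python
# Sketch: second-round rates of the scheme as functions of the system
# parameters, checked against the two worked examples of this section.
from fractions import Fraction

def second_round_rates(U0, V0, T):
    L = U0 * V0 - T                 # input length L = U0*V0 - T
    R2X = Fraction(1, L)            # each user sends one symbol in round 2
    R2Y = Fraction(V0, L)           # each relay forwards V0 symbols
    return R2X, R2Y

# Example 1: (U0, V0, T) = (2, 1, 0) -> both rates equal one half.
ex1 = second_round_rates(2, 1, 0)
# Example 2: (U0, V0, T) = (2, 2, 2) -> R2X is one half, R2Y is one.
ex2 = second_round_rates(2, 2, 2)
print(ex1, ex2)
```

Note that $V_0/(U_0 V_0 - T) = 1/(U_0 - T/V_0)$, so the two forms of $R^{(2)}_Y$ in the text agree.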
Since $V_0 = 2$ and $U_0 = 2$, it holds that $|\mathcal{V}^{(1)}_u| \ge V_0$, $|\mathcal{V}^{(2)}_u| \ge V_0$, and $|\mathcal{U}^{(1)}|, |\mathcal{U}^{(2)}| \ge U_0$ for all $u \in [3]$, which implies that at least $U_0$ relays survive and each surviving relay can always select $V_0$ user messages for transmission. From the second-round messages $\{Y^{(2)}_u\}_{u\in\mathcal{U}^{(2)}}$, the server can recover the aggregation keys $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j} = \big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(1),\, \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(2)\big)$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} = \big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}(1),\, \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}(2)\big)$ with no error, because the precoding matrices in (32) are MDS. Combining this with the sum of the first-round messages $\sum_{u\in\mathcal{U}^{(1)}} Y^{(1)}_u = \sum_{(i,j)\in\mathcal{S}^{(1)}} (W_{i,j} + N_{i,j})$, the server can decode the desired sum $\sum_{(i,j)\in\mathcal{S}^{(1)}} W_{i,j}$.

Example Case: Suppose $\mathcal{V}^{(1)}_1 \cup \mathcal{V}^{(1)}_2 \cup \mathcal{V}^{(1)}_3 = \{(1,1),(1,2),(2,1),(2,2),(2,3),(3,1),(3,3)\}$, i.e., users $\{(1,3),(3,2)\}$ drop out in the first hop of the first round. Assume that $\mathcal{U}^{(1)} = \{1,2,3\}$, i.e., no relay drops out in the first round, and that $\mathcal{U}^{(2)} = \{1,2\}$. From $Y^{(2)}_1$ and $Y^{(2)}_2$, the server can recover $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ without error, again due to the MDS property of the precoding matrices in (32). Combining these with $Y^{(1)}_1 + Y^{(1)}_2 + Y^{(1)}_3 = \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v} + \sum_{(u,v)\in\mathcal{S}^{(1)}} N_{u,v}$, the desired sum $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$ can be decoded with zero error. The correctness proof for other admissible dropout patterns follows in the same manner.

Security: To verify the security constraints, it suffices to consider one representative admissible realization. Specifically, we consider the case $\mathcal{S}^{(1)} = \{(1,1),(1,2),(2,1),(2,2),(2,3),(3,1),(3,3)\}$, $\mathcal{V}^{(2)}_1 = \{(1,1),(1,2)\}$, $\mathcal{V}^{(2)}_2 = \{(2,1),(2,2)\}$, $\mathcal{U}^{(2)} = \{1,2\}$, and $\mathcal{T} = \{(1,1),(2,1)\}$.
We show that both the relay security constraint (9) and the server security constraint (10) are satisfied.

Relay security: Consider Relay 1. We have
$I\big(X^{(1)}_{1,1}, X^{(1)}_{1,2}, X^{(1)}_{1,3}, X^{(2)}_{1,1}, X^{(2)}_{1,2};\, \{W_{u,v}\}_{(u,v)\in[3]\times[3]} \,\big|\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big)$ (41)
$= H\big(X^{(1)}_{1,1}, X^{(1)}_{1,2}, X^{(1)}_{1,3}, X^{(2)}_{1,1}, X^{(2)}_{1,2} \,\big|\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big) - H\big(X^{(1)}_{1,1}, X^{(1)}_{1,2}, X^{(1)}_{1,3}, X^{(2)}_{1,1}, X^{(2)}_{1,2} \,\big|\, \{W_{u,v}\}_{(u,v)\in[3]\times[3]}, Z_{1,1}, Z_{2,1}\big)$ (42)
$= H\big(W_{1,2}+N_{1,2},\; W_{1,3}+N_{1,3},\; \sum_{(i,j)\in\mathcal{S}^{(1)}}\big(N_{i,j}(1) + 2 N_{i,j}(2) + 3 S_{i,j}(1) + 4 S_{i,j}(2)\big) \,\big|\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big) - H\big(N_{1,2},\; N_{1,3},\; \sum_{(i,j)\in\mathcal{S}^{(1)}}\big(N_{i,j}(1) + 2 N_{i,j}(2) + 3 S_{i,j}(1) + 4 S_{i,j}(2)\big) \,\big|\, \{W_{u,v}\}_{(u,v)\in[3]\times[3]}, Z_{1,1}, Z_{2,1}\big)$ (43)
$= 5 - 5 = 0.$ (44)
The equality follows since the two entropy terms involve the same set of independent linear combinations: conditioning on all $W_{u,v}$ removes only the data components while preserving the independent randomness, and the remaining terms together comprise five $q$-ary symbols. Hence, Relay 1 obtains no information about the users' messages, and the relay security constraint is satisfied.
Server security: For server security, we have
$I\big(\{W_{u,v}\}_{(u,v)\in[3]\times[3]};\, Y^{(1)}_1, Y^{(1)}_2, Y^{(1)}_3, Y^{(2)}_1, Y^{(2)}_2, Y^{(2)}_3 \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big)$
$= H\big(W_{1,1}{+}W_{1,2}{+}N_{1,1}{+}N_{1,2},\; W_{2,1}{+}W_{2,2}{+}W_{2,3}{+}N_{2,1}{+}N_{2,2}{+}N_{2,3},\; W_{3,1}{+}W_{3,3}{+}N_{3,1}{+}N_{3,3},\; \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(2),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}(2) \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big)$
$\; - H\big(N_{1,1}{+}N_{1,2},\; N_{2,1}{+}N_{2,2}{+}N_{2,3},\; N_{3,1}{+}N_{3,3},\; \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(2),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}(2) \,\big|\, \{W_{u,v}\}_{(u,v)\in[3]\times[3]},\, Z_{1,1}, Z_{2,1}\big)$ (45)
$= H\big(W_{1,2}{+}N_{1,2},\; W_{2,2}{+}W_{2,3}{+}N_{2,2}{+}N_{2,3},\; W_{3,1}{+}W_{3,3}{+}N_{3,1}{+}N_{3,3},\; \sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(2) \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big) - H\big(N_{1,2},\; N_{2,2}{+}N_{2,3},\; N_{3,1}{+}N_{3,3},\; \sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(2) \,\big|\, \{W_{u,v}\}_{(u,v)\in[3]\times[3]},\, Z_{1,1}, Z_{2,1}\big)$ (46)
$= H\big(W_{1,2}{+}N_{1,2},\; W_{2,2}{+}W_{2,3}{+}N_{2,2}{+}N_{2,3},\; W_{3,1}{+}W_{3,3}{+}N_{3,1}{+}N_{3,3} \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(2) \,\big|\, W_{1,2}{+}N_{1,2},\; W_{2,2}{+}W_{2,3}{+}N_{2,2}{+}N_{2,3},\; W_{3,1}{+}W_{3,3}{+}N_{3,1}{+}N_{3,3},\; \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}\big) - H\big(N_{1,2},\; N_{2,2}{+}N_{2,3},\; N_{3,1}{+}N_{3,3} \,\big|\, \{W_{u,v}\}_{(u,v)\in[3]\times[3]},\, Z_{1,1}, Z_{2,1}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(1),\; \sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(2) \,\big|\, N_{1,2},\; N_{2,2}{+}N_{2,3},\; N_{3,1}{+}N_{3,3},\; \{W_{u,v}\}_{(u,v)\in[3]\times[3]},\, Z_{1,1}, Z_{2,1}\big)$ (47)
$= 3\times 2 - 3\times 2 = 0,$ (48)
where the first term of (46) holds because the quantities $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(1)$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}(2)$ are fully determined by the sums $W_{1,1}{+}W_{1,2}{+}N_{1,1}{+}N_{1,2}$, $W_{2,1}{+}W_{2,2}{+}W_{2,3}{+}N_{2,1}{+}N_{2,2}{+}N_{2,3}$, $W_{3,1}{+}W_{3,3}{+}N_{3,1}{+}N_{3,3}$, and the target sum $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$. The second term of (47) is zero because the sums $\sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(1)$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}\setminus\mathcal{T}} S_{i,j}(2)$ are fully determined by $W_{1,2}{+}N_{1,2}$, $W_{2,2}{+}W_{2,3}{+}N_{2,2}{+}N_{2,3}$, $W_{3,1}{+}W_{3,3}{+}N_{3,1}{+}N_{3,3}$, the target sum $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$, and the colluding users' information $W_{1,1}, Z_{1,1}, W_{2,1}, Z_{2,1}$; the fourth term is zero for the same reason.
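Before turning to the general scheme, the masking and MDS-decoding pipeline of these examples can be mimicked end to end in a short simulation. This is our own toy instantiation (prime $q = 11$, $U = V = 2$, $U_0 = 2$, $V_0 = 1$, $T = 0$, hence $L = 2$, with a $2\times 4$ Vandermonde-style $\alpha$), not the paper's exact parameters:

```python
# End-to-end toy run of the scheme's mechanics (our own instantiation:
# q = 11, U = V = 2, U0 = 2, V0 = 1, T = 0, so L = U0*V0 - T = 2; the
# 2 x 4 matrix alpha has columns (1, 2^k), which is MDS over F_11).
import random

q, L = 11, 2
users = [(1, 1), (1, 2), (2, 1), (2, 2)]
col = {uv: k for k, uv in enumerate(users)}        # column index in alpha
alpha = {uv: (1, pow(2, col[uv], q)) for uv in users}

W = {uv: [random.randrange(q) for _ in range(L)] for uv in users}
N = {uv: [random.randrange(q) for _ in range(L)] for uv in users}

S1 = [(1, 1), (1, 2), (2, 1)]                      # user (2,2) drops
# Round 1: server-side total of the relays' sums of masked inputs.
Y1 = [sum(W[i][k] + N[i][k] for i in S1) % q for k in range(L)]
# Round 2: each survivor sends one MDS-coded share of the aggregate pad.
X2 = {uv: sum(alpha[uv][0] * N[i][0] + alpha[uv][1] * N[i][1]
              for i in S1) % q for uv in S1}

# The server takes V0 = 1 share from each of the U0 = 2 surviving relays
# and solves the 2 x 2 linear system for (sum N(1), sum N(2)).
sel = [(1, 1), (2, 1)]
a, b = alpha[sel[0]]
c, d = alpha[sel[1]]
det_inv = pow((a * d - b * c) % q, q - 2, q)       # field inverse (q prime)
aggN1 = (d * X2[sel[0]] - b * X2[sel[1]]) * det_inv % q
aggN2 = (-c * X2[sel[0]] + a * X2[sel[1]]) * det_inv % q

# Subtracting the decoded aggregate pad yields the desired sum of inputs.
decoded = [(Y1[0] - aggN1) % q, (Y1[1] - aggN2) % q]
expected = [sum(W[i][k] for i in S1) % q for k in range(L)]
print(decoded == expected)
```

The selected columns of $\alpha$ form an invertible $2\times 2$ matrix for every dropout pattern here, which is exactly the MDS property the correctness argument relies on.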
We now extend the above examples to a general scheme applicable to arbitrary system parameters, while maintaining the same correctness and security properties.

C. General Scheme for Arbitrary $U, V, U_0, V_0, T$

We consider a system with $UV$ users indexed by $(u,v) \in [U]\times[V]$. Each relay receives messages from at least $V_0$ users, and the server receives messages from at least $U_0$ relays. Moreover, at most $T$ users may collude with the server or with any relay. Accordingly, we choose the input length as $L = U_0 V_0 - T$. Each User $(u,v)$ holds an input vector $W_{u,v} = \big(W_{u,v}(1), \cdots, W_{u,v}(U_0 V_0 - T)\big)^\top \in \mathbb{F}_q^{(U_0 V_0 - T)\times 1}$.

We first specify the correlated secret keys used in the scheme. Consider a total of $2UV$ independent and uniformly distributed random vectors over $\mathbb{F}_q$, given by $N_{u,v} = \big(N_{u,v}(1), \cdots, N_{u,v}(U_0 V_0 - T)\big)^\top \in \mathbb{F}_q^{(U_0 V_0 - T)\times 1}$ and $S_{u,v} = \big(S_{u,v}(1), \cdots, S_{u,v}(T)\big)^\top \in \mathbb{F}_q^{T\times 1}$ for all $(u,v) \in [U]\times[V]$. From these secret keys, we construct a collection of linearly independent linear combinations over $\mathbb{F}_q$, parameterized by a carefully designed MDS matrix. Specifically, for each $(u,v) \in [U]\times[V]$, the randomness available at User $(u,v)$ is defined as
$Z_{u,v} = \Big(N_{u,v},\, \big\{[Q_{i,j}]_{u,v}\big\}_{(i,j)\in[U]\times[V]}\Big).$ (49)
Here, $[Q_{i,j}]_{u,v} \triangleq \big(N_{i,j}(1), \cdots, N_{i,j}(U_0 V_0 - T), S_{i,j}(1), \cdots, S_{i,j}(T)\big)\,\alpha_{u,v} \in \mathbb{F}_q$, where $\alpha_{u,v} \in \mathbb{F}_q^{U_0 V_0 \times 1}$ denotes the $(V(u-1)+v)$-th column of a matrix $\alpha \in \mathbb{F}_q^{U_0 V_0 \times UV}$. We say that a matrix $\alpha \in \mathbb{F}_q^{U_0 V_0 \times UV}$ with $U_0 V_0 < UV$ is an MDS matrix if every $U_0 V_0 \times U_0 V_0$ submatrix is nonsingular. Furthermore, $\alpha$ is said to be a $T$-private MDS matrix if the submatrix formed by its last $T$ rows is also MDS. Such a matrix exists for all sufficiently large field sizes $q$.
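The two-level MDS condition just defined can be tested directly for a candidate matrix over a prime field. A minimal checker (our sketch; the function names and the toy matrix in the usage example are ours, not the paper's):

```python
# Sketch (our illustration): test whether a matrix over a prime field F_q
# is a T-private MDS matrix in the sense above, i.e. every maximal square
# submatrix of the matrix is nonsingular, and likewise for its last T rows.
from itertools import combinations

def det_mod(rows, q):
    """Determinant of a square matrix over F_q via Gaussian elimination."""
    m = [r[:] for r in rows]
    n, det = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] % q), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det = det * m[c][c] % q
        inv = pow(m[c][c], q - 2, q)       # field inverse (q prime)
        for r in range(c + 1, n):
            f = m[r][c] * inv % q
            for k in range(c, n):
                m[r][k] = (m[r][k] - f * m[c][k]) % q
    return det % q

def is_mds(mat, q):
    """True iff every (nrows x nrows) submatrix is nonsingular over F_q."""
    nrows, ncols = len(mat), len(mat[0])
    return all(det_mod([[mat[r][c] for c in cols] for r in range(nrows)], q)
               for cols in combinations(range(ncols), nrows))

def is_T_private_mds(mat, T, q):
    return is_mds(mat, q) and is_mds(mat[-T:], q)
```

For instance, `is_T_private_mds([[1, 1, 1], [1, 2, 3]], 1, 5)` returns `True`: every $2\times 2$ submatrix of this toy $2\times 3$ matrix is nonsingular over $\mathbb{F}_5$, and its last row has no zero entries.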
A $T$-private MDS matrix [18] guarantees that, for any subset $\mathcal{T} \subseteq [U]\times[V]$ with $|\mathcal{T}| \le T$, the collection of linear projections $\{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}$ is statistically independent of the masking variables $\{N_{i,j}\}_{(i,j)\in[U]\times[V]}$, i.e.,
$I\Big(\{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}};\, \{N_{i,j}\}_{(i,j)\in[U]\times[V]}\Big) = 0.$ (50)
This follows from the fact that the submatrix formed by the last $T$ rows of $\alpha$ is MDS, which ensures that any set of at most $T$ such projections depends only on the randomness vectors $\{S_{i,j}\}$ and reveals no information about $\{N_{i,j}\}$. To facilitate the subsequent security analysis, we next summarize several useful entropy properties of the secret keys in the following lemma.

Lemma 1: For any subset $\mathcal{T} \subseteq [U]\times[V]$ with $|\mathcal{T}| \le T$, any $\mathcal{V}^{(2)}_u \subseteq \mathcal{V}^{(1)}_u \subseteq \{(u,v)\}_{v\in[V]}$ satisfying $|\mathcal{V}^{(2)}_u| \ge V_0$ for all $u \in [U]$, and any $\mathcal{U}^{(2)} \subseteq \mathcal{U}^{(1)} \subseteq [U]$ satisfying $|\mathcal{U}^{(2)}| \ge U_0$, with $U_0 V_0 > T$, we have
$I\Big(\{N_{i,j}\}_{(i,j)\in([U]\times[V])\setminus\mathcal{T}};\, \{N_{i,j}\}_{(i,j)\in\mathcal{T}},\, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}\Big) = 0.$ (51)
Proof: Since $|\mathcal{T}| \le T$, the $T$-privacy property established in (50) applies.
I  { N i,j } ( i,j ) ∈ ([ U ] × [ V ]) \T ; { N i,j } ( i,j ) ∈T , { [ Q i,j ] u,v } ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T  (52) = H  { N i,j } ( i,j ) ∈T , { [ Q i,j ] u,v } ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T  − H  { N i,j } ( i,j ) ∈T , { [ Q i,j ] u,v } ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T |{ N i,j } ( i,j ) ∈ ([ U ] × [ V ]) \T  (53) 19 = H  { N i,j } ( i,j ) ∈T , { [ Q i,j ] u,v } ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T  − H  { N i,j } ( i,j ) ∈T |{ N i,j } ( i,j ) ∈ ([ U ] × [ V ]) \T  − H  { [ Q i,j ] u,v } ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T |{ N i,j } ( i,j ) ∈ [ U ] × [ V ]  (54) = H ( { N i,j } ( i,j ) ∈T , { [ Q i,j ] u,v  ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T ) − H ( { N i,j } ( i,j ) ∈T ) − H ( { [ Q i,j ] u,v  ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T ) + I ( { [ Q i,j ] u,v  ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T ; { N i,j } ( i,j ) ∈ [ U ] × [ V ] ) | {z } (50) = 0 (55) ≤ H ( { N i,j } ( i,j ) ∈T ) + H ( { [ Q i,j ] u,v  ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T ) − H ( { N i,j } ( i,j ) ∈T ) − H ( { [ Q i,j ] u,v  ( i,j ) ∈ [ U ] × [ V ] , ( u,v ) ∈T ) (56) =0 , (57) where the fourth term in (55) equals zero by (50), which follows from the T -priv acy property of the MDS matrix α . W ith the correlated secret ke ys fully specified, we proceed to describe the message transmissions ov er two rounds. First round, first hop: Each user transmits X (1) u,v = W u,v + N u,v , ∀ ( u, v ) ∈ [ U ] × [ V ] . (58) First round, second hop: F or any V (1) u ⊆ { ( u, v ) } v ∈ [ V ] with |V (1) u | ≥ V 0 = 2 , Relay u computes Y (1) u and forwards it to the server . Y (1) u = X ( u,v ) ∈V (1) u  W u,v + N u,v  , ∀ u ∈ [ U ] . (59) After the first round, each relay reports to the serv er the set of its survi ving users V (1) u . Based on the set of survi ving relays U (1) and the reported user sets {V (1) u } u ∈U (1) , the server determines the complete set of users that survi ve the first round, denoted by S (1) ≜ ∪ u ∈U (1) V (1) u . 
The server then broadcasts $\mathcal{S}^{(1)}$ to all surviving relays, which forward it to their associated surviving users and request the transmission of the second-round messages.

Second round, first hop: To enable the recovery of the aggregated masking variables at the server, each surviving User $(u,v) \in \mathcal{S}^{(1)}$ transmits
$X^{(2)}_{u,v} = \sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}.$ (60)

Second round, second hop: Since each Relay $u$ has at least $V_0$ surviving users in the second round, i.e., $|\mathcal{V}^{(2)}_u| \ge V_0$, each Relay $u$ arbitrarily selects a subset $\mathcal{Q}_u \subseteq \mathcal{V}^{(2)}_u$ with $|\mathcal{Q}_u| = V_0$ and forwards
$Y^{(2)}_u = \{X^{(2)}_{u,v}\}_{v\in\mathcal{Q}_u}, \quad u \in [U].$ (61)

With the correlated randomness and the two-round message transmissions fully specified, we proceed to analyze the scheme's achievable rate, correctness, and information-theoretic security.

Rate: We now specify the communication rates of the proposed scheme. Each user input has length $L = U_0 V_0 - T$ symbols over $\mathbb{F}_q$. First-round rates: each user transmits $L^{(1)}_X = U_0 V_0 - T$ symbols, and each relay forwards $L^{(1)}_Y = U_0 V_0 - T$ symbols. Hence, the first-round rates are $R^{(1)}_X = L^{(1)}_X/L = 1$ and $R^{(1)}_Y = L^{(1)}_Y/L = 1$. Second-round rates: each user transmits a single symbol, $L^{(2)}_X = 1$, and each relay forwards $L^{(2)}_Y = V_0$ symbols. Thus, the second-round rates are $R^{(2)}_X = \frac{L^{(2)}_X}{L} = \frac{1}{U_0 V_0 - T}$ and $R^{(2)}_Y = \frac{L^{(2)}_Y}{L} = \frac{V_0}{U_0 V_0 - T} = \frac{1}{U_0 - T/V_0}$. With these rates, the proposed scheme meets both the correctness and security requirements, as we detail in the following analysis.

Correctness: The server receives the second-round messages $\{Y^{(2)}_u\}_{u\in\mathcal{U}^{(2)}}$, which by (61) satisfy $\{Y^{(2)}_u\}_{u\in\mathcal{U}^{(2)}} = \{\{X^{(2)}_{u,v}\}_{v\in\mathcal{Q}_u}\}_{u\in\mathcal{U}^{(2)}}$ and by (60) further equal $\{\{\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}\}_{v\in\mathcal{Q}_u}\}_{u\in\mathcal{U}^{(2)}}$. Each encoded symbol $[Q_{i,j}]_{u,v}$ is defined as $[Q_{i,j}]_{u,v} \triangleq \big(N_{i,j}(1), \ldots, N_{i,j}(U_0 V_0 - T), S_{i,j}(1), \ldots, S_{i,j}(T)\big)\cdot \alpha_{u,v}$, where $\alpha = [\alpha_{u,v}]_{(u,v)\in[U]\times[V]} \in \mathbb{F}_q^{U_0 V_0 \times UV}$ is an MDS matrix. By the MDS property, any $U_0 V_0$ columns of $\alpha$ are linearly independent, ensuring that the server can recover the aggregated random symbols and thus the desired sum $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$. By design, each Relay $u \in \mathcal{U}^{(2)}$ forwards messages from $|\mathcal{Q}_u| = V_0$ users, and the server receives messages from at least $|\mathcal{U}^{(2)}| \ge U_0$ relays. Therefore, the server obtains at least $U_0 V_0$ coded symbols of the form $\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}$. Since these correspond to $U_0 V_0$ linearly independent columns of the MDS matrix $\alpha$, the server can uniquely recover the aggregated randomness vector $\sum_{(i,j)\in\mathcal{S}^{(1)}} \big(N_{i,j}(1), \ldots, N_{i,j}(U_0 V_0 - T), S_{i,j}(1), \ldots, S_{i,j}(T)\big)$ without error. Equivalently, the server recovers $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ exactly. Finally, combining the recovered aggregated keys with the sum of the first-round messages, $\sum_{u\in\mathcal{U}^{(1)}} Y^{(1)}_u \overset{(59)}{=} \sum_{(u,v)\in\mathcal{S}^{(1)}} (W_{u,v} + N_{u,v})$, the server can subtract the aggregated randomness $\sum_{(u,v)\in\mathcal{S}^{(1)}} N_{u,v}$ decoded in the second round, and hence uniquely recover the desired sum $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$ with zero decoding error.

Security: Having established the correctness of the aggregation scheme, we now analyze its security. The scheme ensures that neither a relay nor the server can learn anything about the users' inputs beyond what is allowed, even if up to $T$ users collude with it. The security analysis consists of two parts; we first consider relay security.

Relay security: Each relay only observes messages from its associated users. We show that even if a relay colludes with any set of at most $T$ users, it cannot obtain any information about the inputs of the non-colluding users. Recall that $\mathcal{M}_u$ denotes the set of users associated with Relay $u$, and $\mathcal{T}$ denotes the set of colluding users with $|\mathcal{T}| \le T$.
We focus on the case where $|\mathcal{V}^{(1)}_u| \ge U_0 V_0$, for which
$I\big(\{X^{(1)}_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u};\, W_{[U]\times[V]} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (62)
$= H\big(\{X^{(1)}_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{X^{(1)}_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u} \,\big|\, W_{[U]\times[V]},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (63)
$= H\big(\{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u} \,\big|\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (64)
$= H\big(\{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}} \,\big|\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\cup\mathcal{T}},\, W_{[U]\times[V]},\, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}\big)$ (65)
$= H\big(\{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j} \,\big|\, \{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}} \,\big|\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j} \,\big|\, \{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\cup\mathcal{T}},\, W_{[U]\times[V]},\, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\cup\mathcal{T}},\, W_{[U]\times[V]},\, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}\big)$ (66)
$\le H\big(\{W_{u,v}{+}N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}\big) - H\big(\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}}\big) - I\big(\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}};\, \{N_{u,v}, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V]}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j} \,\big|\, \{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\cup\mathcal{T}},\, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}\big)$ (67)
$\le |\mathcal{M}_u\setminus\mathcal{T}|\,L + L - |\mathcal{M}_u\setminus\mathcal{T}|\,L - L = 0,$ (68)
where (65) holds because $|\mathcal{V}^{(1)}_u| \ge U_0 V_0$: from the collection $\{\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u}$, Relay $u$ can decode the aggregated randomness $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ due to the MDS property of the encoding matrix. In (67), the third entropy term equals zero, since $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ is uniquely determined by $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ together with the colluding users' coded symbols, as ensured by the key construction in (49). Moreover, the mutual information term
$I\big(\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\setminus\mathcal{T}};\, \{N_{u,v}, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V]}\}_{(u,v)\in\mathcal{T}}\big)$ (69)
is zero due to the $T$-privacy property in (51). Finally, the last conditional entropy term equals $L$, since $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ is independent of $\{N_{u,v}\}_{(u,v)\in\mathcal{M}_u\cup\mathcal{T}}$ and $\{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V],\,(u,v)\in\mathcal{T}}$.
When $|\mathcal{V}^{(1)}_u| < U_0 V_0$, we have
$I\big(\{X^{(1)}_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal{V}^{(1)}_u};\, W_{[U]\times[V]} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (70)
$\le I\big(\{X^{(1)}_{u,v}\}_{(u,v)\in\mathcal{M}_u},\, \{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]};\, W_{[U]\times[V]} \,\big|\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) \overset{(68)}{=} 0.$ (71)
Hence, Relay $u$ obtains no information about the users' messages, and the relay security constraint is satisfied. We next analyze the server security of the proposed scheme.

Server security: The aggregation server collects messages from multiple relays. The proposed scheme guarantees that, even if the server colludes with any set of at most $T$ users, it can only learn the aggregate of the surviving users of the first round and obtains no additional information about the messages of the non-colluding users. We have
$I\big(\{Y^{(1)}_u\}_{u\in[U]},\, \{Y^{(2)}_u\}_{u\in\mathcal{U}^{(1)}};\, W_{[U]\times[V]} \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (72)
$= H\big(\{Y^{(1)}_u\}_{u\in[U]},\, \{Y^{(2)}_u\}_{u\in\mathcal{U}^{(1)}} \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{Y^{(1)}_u\}_{u\in[U]},\, \{Y^{(2)}_u\}_{u\in\mathcal{U}^{(1)}} \,\big|\, W_{[U]\times[V]},\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (73)
$= H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]},\, \{\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}\}_{(u,v)\in\mathcal{S}^{(1)}} \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]},\, \{\sum_{(i,j)\in\mathcal{S}^{(1)}} [Q_{i,j}]_{u,v}\}_{(u,v)\in\mathcal{S}^{(1)}} \,\big|\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (74)
$= H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]\setminus\mathcal{U}_T} \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]},\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]\setminus\mathcal{U}_T} \,\big|\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]},\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (75)
$= H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]\setminus\mathcal{U}_T} \,\big|\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j} \,\big|\, \{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]},\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) + H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]},\, \sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v},\, \{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]\setminus\mathcal{U}_T}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j} \,\big|\, \{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]},\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big) - H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j} \,\big|\, \sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j},\, \{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]},\, W_{[U]\times[V]},\, \{Z_{u,v}\}_{(u,v)\in\mathcal{T}}\big)$ (76)
$\le H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}{+}N_{u,v})\}_{u\in[U]\setminus\mathcal{U}_T}\big) - H\big(\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]\setminus\mathcal{U}_T}\big)$ (77)
$= |[U]\setminus\mathcal{U}_T|\,L - |[U]\setminus\mathcal{U}_T|\,L = 0.$ (78)
We next justify the intermediate steps (75)–(76). First, let $\mathcal{U}_T$ denote the set of relays that are connected exclusively to colluding users in $\mathcal{T}$.
Conditioned on the values $\{W_{u,v}, Z_{u,v}\}_{(u,v)\in\mathcal{T}}$, the messages transmitted by the relays in $\mathcal{U}_T$ are completely determined, and thus they introduce no additional uncertainty. In (75), the second and the fourth entropy terms follow from the MDS structure of the second-round encoding: the collection $\{Y^{(2)}_u\}_{u\in\mathcal{U}^{(1)}}$ allows the server to recover only the aggregated randomness $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ and $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$. Given the aggregate $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$, these quantities are independent of the individual users' messages and can thus be separated as shown. In (76), the second term is zero, since $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ is uniquely determined by $\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}+N_{u,v})\}_{u\in[U]}$ together with $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$. The fourth term follows from the $T$-privacy property in (51), which guarantees that $\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]\setminus\mathcal{U}_T}$ is independent of the information available to the colluding users. The third term equals the sixth term. When $|\mathcal{T}| = T$, both terms are zero, since $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ is uniquely determined by $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$ and the second-round encoding coefficients known to the colluding users. When $|\mathcal{T}| < T$, the random variable $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ is independent of $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$, $\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} (W_{u,v}+N_{u,v})\}_{u\in[U]}$, $\sum_{(u,v)\in\mathcal{S}^{(1)}} W_{u,v}$, and $\{W_{u,v}, N_{u,v}, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V]}\}_{(u,v)\in\mathcal{T}}$; hence the third term equals $H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}\big)$. Similarly, in the sixth term, $\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}$ is independent of $\sum_{(i,j)\in\mathcal{S}^{(1)}} N_{i,j}$, $\{\sum_{(u,v)\in\mathcal{V}^{(1)}_u} N_{u,v}\}_{u\in[U]}$, $W_{[U]\times[V]}$, and $\{N_{u,v}, \{[Q_{i,j}]_{u,v}\}_{(i,j)\in[U]\times[V]}\}_{(u,v)\in\mathcal{T}}$; therefore the sixth term also equals $H\big(\sum_{(i,j)\in\mathcal{S}^{(1)}} S_{i,j}\big)$.

V.
V. CONVERSE PROOF OF THEOREM 1

Before presenting the converse proof, we first establish a basic property that follows from the independence between the inputs $\{W_{u,v}\}_{(u,v)\in[U]\times[V]}$ and the secret keys $\{Z_{u,v}\}_{(u,v)\in[U]\times[V]}$, together with the uniform distribution of $\{W_{u,v}\}_{(u,v)\in[U]\times[V]}$. This property is formalized in the following lemma, which will be repeatedly invoked in the subsequent analysis.

Lemma 2: For any $V_1 \le V_2 \le V_3 \le V$ and $U_1 < U_2 < U_3 < U$, the following equality holds:
$$I\Big(\sum_{(u,v)\in[U_2]\times[V_2]} W_{u,v};\ \sum_{(u,v)\in[U_3]\times[V_3]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_1]\times[V_1]}\Big)=0. \quad(79)$$

Proof:
$$\begin{aligned}
&I\Big(\sum_{(u,v)\in[U_2]\times[V_2]} W_{u,v};\ \sum_{(u,v)\in[U_3]\times[V_3]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_1]\times[V_1]}\Big)\\
&\overset{(1)}{=} I\Big(\sum_{(u,v)\in[U_2]\times[V_2]} W_{u,v};\ \sum_{(u,v)\in[U_3]\times[V_3]} W_{u,v},\ \{W_{u,v}\}_{(u,v)\in[U_1]\times[V_1]}\Big) \quad(80)\\
&= H\Big(\sum_{(u,v)\in[U_2]\times[V_2]} W_{u,v}\Big) - H\Big(\sum_{(u,v)\in[U_2]\times[V_2]} W_{u,v}\ \Big|\ \sum_{(u,v)\in[U_3]\times[V_3]} W_{u,v},\ \{W_{u,v}\}_{(u,v)\in[U_1]\times[V_1]}\Big) \quad(81)\\
&= L - H\Big(\sum_{(u,v)\in[U_2]\times[V_2]} W_{u,v}\ \Big|\ \sum_{(u,v)\in[U_3]\times[V_3]} W_{u,v},\ \{W_{u,v}\}_{(u,v)\in[U_1]\times[V_1]}\Big) \quad(82)\\
&= L - \big((U_1V_1+2)L - (U_1V_1+1)L\big) = 0, \quad(83)
\end{aligned}$$
where in (82) and the last step we use the uniformity of $\{W_{u,v}\}_{(u,v)\in[U]\times[V]}$.

Building on Lemma 2, we are ready to establish the infeasibility of the regime $U_0V_0 \le T$ in Subsection V-A, and the converse bounds for $U_0V_0 > T$ in Subsections V-B, V-C, and V-D.

A. Infeasibility Proof When $U_0V_0 \le T$

We show that when $U_0V_0 \le T$, the server may collude with all surviving users in the second round. Due to the correctness requirement, the server can recover the sum of the inputs in the first round. On the other hand, the server security constraint requires that the server learns nothing beyond this sum.
In particular, the server should not obtain any information about any subset of the first-round inputs. When $U_0V_0 \le T$, however, the server is able to infer information about subsets of the first-round inputs, which contradicts the server security constraint. To see why a contradiction arises, consider the following choice of sets. Let $\mathcal V^{(1)}_u = \mathcal V^{(2)}_u = \{(u,v)\}_{v\in[V_0]}$ for each $u$, and let $\mathcal U^{(1)} = [U_0+2]$, $\mathcal U^{(2)} = [U_0]$. Moreover, define the colluding set of users as $\mathcal T = [U_0]\times[V_0]$. Note that $|\mathcal T| = U_0V_0 \le T$, and hence this choice of $\mathcal V^{(1)}$, $\mathcal V^{(2)}$, $\mathcal U^{(1)}$, $\mathcal U^{(2)}$, and $\mathcal T$ is feasible under the threat model. From the server security constraint in (10), we have

$$\begin{aligned}
0 &\overset{(10)}{=} I\Big(\{W_{u,v}\}_{(u,v)\in[U]\times[V]};\ \{Y^{(1)}_u\}_{u\in[U]},\ \{Y^{(2)}_u\}_{u\in[U_0+2]}\ \Big|\ \sum_{(u,v)\in[U_0+2]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big) \quad(84)\\
&\ge I\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0+1]}\ \Big|\ \sum_{(u,v)\in[U_0+2]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big) \quad(85)\\
&= I\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0+1]},\ \{Y^{[U_0+1]}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\ \Big|\ \sum_{(u,v)\in[U_0+2]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big) \quad(86)\\
&\ge I\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0+1]},\ \{Y^{[U_0+1]}_u\}_{u\in[U_0]},\ \sum_{(u,v)\in[U_0+2]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big)\\
&\qquad - \underbrace{I\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v};\ \sum_{(u,v)\in[U_0+2]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big)}_{\overset{(79)}{=}\,0} \quad(87)\\
&\ge I\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0+1]},\ \{Y^{[U_0+1]}_u\}_{u\in[U_0]}\Big) \quad(88)\\
&= H\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v}\Big) - \underbrace{H\Big(\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U_0+1]},\ \{Y^{[U_0+1]}_u\}_{u\in[U_0]}\Big)}_{\overset{(8)}{=}\,0} \quad(89)\\
&= L, \quad(90)\\
&\Rightarrow\ 0 \ge L, \quad(91)
\end{aligned}$$

where $\{Y^{[U_0+1]}_u\}_{u\in[U_0]}$ denotes the second-hop messages transmitted in the second round. These messages are designed such that the server can recover the input sum $\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v}$; by combining the first-round messages $\{Y^{(1)}_u\}_{u\in[U_0+1]} \subset \{Y^{(1)}_u\}_{u\in[U_0+2]} \subset \{Y^{(1)}_u\}_{u\in[U]}$ with the second-round messages $\{Y^{[U_0+1]}_u\}_{u\in[U_0]}$, the server is able to reconstruct this sum. In (86), we use the fact that $\{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}$ is a function of $\{W_{u,v},Z_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}$ as shown in (60), and that $\{Y^{[U_0+1]}_u\}_{u\in[U_0]}$ is a function of $\{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}$ as shown in (61). Here, we deliberately write $\{Y^{[U_0+1]}_u\}_{u\in[U_0]}$ instead of $\{Y^{(2)}_u\}_{u\in[U_0]}$ to emphasize that these messages are specifically constructed to enable the server to recover $\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v}$. The second term in (87) equals zero due to Lemma 2, by setting $V_1=V_2=V_3=V_0$, $U_1=U_0$, $U_2=U_0+1$, and $U_3=U_0+2$ in (79). In (89), the first term equals $L$ since the inputs are independent and uniformly distributed, so their sum is also uniform; the second term is zero due to the correctness constraint in (8), when $\mathcal V^{(1)}_u = \mathcal V^{(2)}_u = \{(u,v)\}_{v\in[V_0]}$, $\mathcal U^{(1)} = [U_0+1]$, and $\mathcal U^{(2)} = [U_0]$. The final step yields the contradiction $0 \ge L$. Therefore, the constraints used in the above derivation cannot be satisfied simultaneously, and the problem is infeasible when $U_0V_0 \le T$.

B. Converse for $R^{(1)}_X \ge 1$ and $R^{(1)}_Y \ge 1$ When $U_0V_0 > T$

Intuitively, the converse follows from the fact that the aggregation server must be able to recover the sum of all users that survive the first round, even if a subset of these users drop out in the second round.
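Recovering the same value from any sufficiently large surviving subset is exactly the guarantee an MDS code provides, and it is the mechanism behind the dropout resilience discussed here. As a self-contained toy illustration (Shamir-style polynomial shares over a prime field; this is not the paper's construction, and all names are ours), any $k$ of $n$ shares reconstruct the shared value:

```python
import random

P = 2**31 - 1  # toy prime field

def make_shares(secret, k, n, rng):
    """Evaluate a random degree-(k-1) polynomial with constant term `secret`
    at points 1..n; any k evaluations determine the polynomial (MDS-like)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at 0 over whichever points survived dropout."""
    xs = list(shares)
    s = 0
    for xi in xs:
        num = den = 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + shares[xi] * num * pow(den, P - 2, P)) % P
    return s

shares = make_shares(secret=12345, k=3, n=5, rng=random.Random(1))
surviving = {x: shares[x] for x in (1, 3, 5)}   # any 3 of 5 survive dropout
assert reconstruct(surviving) == 12345
```

With uniform coefficients, fewer than $k$ shares are independent of the secret, which is the same any-pattern security requirement the converse arguments below must respect.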
From the correctness requirement, the first-round messages must already contain sufficient information to reconstruct the inputs of those users who may potentially drop out later. In particular, since any user surviving the first round may be absent in the second round, the first-round communication must individually convey the input of each surviving user. As a result, the first-round message rate must be at least $L$ in order to account for the input of any user that drops out in the second round.

We first establish the converse bound for the first-hop message rate in the first round. Fix any $(u',v')\in[U]\times[V]$, and define $\{\mathcal V^{(1)}_u\}_{u\in\mathcal U^{(1)}} = [U]\times[V]$, $\{\mathcal V^{(2)}_u\}_{u\in\mathcal U^{(2)}} = ([U]\times[V])\setminus\{(u',v')\}$, with $\mathcal U^{(1)} = \mathcal U^{(2)} = [U]$. From the correctness constraint in (8), we have

$$\begin{aligned}
0 &\overset{(8)}{=} H\Big(\sum_{(u,v)\in[U]\times[V]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U]},\ \{Y^{(2)}_u\}_{u\in[U]}\Big) \quad(92)\\
&\ge H\Big(\sum_{(u,v)\in[U]\times[V]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U]},\ \{X^{(1)}_{u,v}\}_{(u,v)\in[U]\times[V]},\ \{Y^{(2)}_u\}_{u\in[U]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}\Big) \quad(93)\\
&\overset{(6),(4),(7)}{=} H\Big(W_{u',v'}\ \Big|\ X^{(1)}_{u',v'},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}\Big), \quad(94)
\end{aligned}$$

where (94) follows since $\{X^{(1)}_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}$ and $\{X^{(2)}_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}$ are deterministic functions of $\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}$, as defined in (3) and (6), and $\{Y^{(1)}_u\}_{u\in[U]}$ and $\{Y^{(2)}_u\}_{u\in[U]}$ are deterministic functions of $\{X^{(1)}_{u,v}\}_{(u,v)\in[U]\times[V]}$ and $\{X^{(2)}_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}$, respectively, as defined in (4) and (7).
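The entropy chain that follows is, at its core, a pigeonhole argument: given everything except $W_{u',v'}$, the message $X^{(1)}_{u',v'}$ alone must determine $W_{u',v'}$, which is impossible whenever the message alphabet is smaller than the input alphabet. A brute-force check on toy alphabets (illustrative only; the alphabet sizes and helper names are ours):

```python
from itertools import product

# Toy alphabets: message strictly smaller than input, i.e. rate below 1.
W_ALPHABET, X_ALPHABET = 4, 3

def has_collision(encoder):
    """True if two distinct inputs map to the same message, in which case
    W cannot be recovered from X even given all other inputs and keys."""
    seen = {}
    for w in range(W_ALPHABET):
        x = encoder(w)
        if x in seen and seen[x] != w:
            return True
        seen[x] = w
    return False

# Every deterministic encoder mapping 4 inputs into 3 messages must collide.
all_tables = product(range(X_ALPHABET), repeat=W_ALPHABET)
assert all(has_collision(lambda w, t=t: t[w]) for t in all_tables)
```

With equal alphabets the identity map has no collision, which is consistent with the bound $R^{(1)}_X \ge 1$ being achievable with equality.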
Next,

$$\begin{aligned}
L &\overset{(2)}{=} H(W_{u',v'}) \overset{(1)}{=} H\big(W_{u',v'}\,\big|\,\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}\big) \quad(95)\\
&= I\big(W_{u',v'};\,X^{(1)}_{u',v'}\,\big|\,\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}\big) + \underbrace{H\big(W_{u',v'}\,\big|\,X^{(1)}_{u',v'},\,\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}\big)}_{\overset{(94)}{=}\,0} \quad(96)\\
&\le H\big(X^{(1)}_{u',v'}\,\big|\,\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\times[V])\setminus\{(u',v')\}}\big) \quad(97)\\
&\le H\big(X^{(1)}_{u',v'}\big) \le L^{(1)}_X, \quad(98)\\
&\Rightarrow\ R^{(1)}_X \overset{(11)}{=} \frac{L^{(1)}_X}{L} \ge 1. \quad(99)
\end{aligned}$$

We now prove the converse bound for the second-hop message rate in the first round. Intuitively, the converse for the second hop follows from the fact that the server must be able to recover the aggregate contribution of each relay from the first-round messages, even if that relay drops out in the second round. From the correctness requirement, the information transmitted in the first round over the second hop must already suffice to reconstruct the sum of all users served by any relay that may be absent in the second round. Since any relay can potentially drop out after the first round, the first-round second-hop message must convey at least $L$ bits corresponding to the aggregated inputs of the users associated with that relay.

Fix any $u'\in[U]$, and define $\{\mathcal V^{(1)}_u\}_{u\in\mathcal U^{(1)}} = [U]\times[V]$, $\{\mathcal V^{(2)}_u\}_{u\in\mathcal U^{(2)}} = ([U]\setminus\{u'\})\times[V]$, with $\mathcal U^{(1)} = [U]$ and $\mathcal U^{(2)} = [U]\setminus\{u'\}$. From the correctness constraint in (8), we have

$$\begin{aligned}
0 &\overset{(8)}{=} H\Big(\sum_{(u,v)\in[U]\times[V]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U]},\ \{Y^{(2)}_u\}_{u\in[U]\setminus\{u'\}}\Big) \quad(100)\\
&\ge H\Big(\sum_{(u,v)\in[U]\times[V]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U]},\ \{Y^{(2)}_u\}_{u\in[U]\setminus\{u'\}},\ \{X^{(1)}_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}\Big) \quad(101)\\
&\overset{(6),(4),(7)}{=} H\Big(\sum_{v\in[V]} W_{u',v}\ \Big|\ Y^{(1)}_{u'},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}\Big), \quad(102)
\end{aligned}$$

where (102) follows since $\{X^{(1)}_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}$ and $\{X^{(2)}_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}$ are deterministic functions of $\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}$, as defined in (3) and (6), and $\{Y^{(1)}_u\}_{u\in[U]\setminus\{u'\}}$ and $\{Y^{(2)}_u\}_{u\in[U]\setminus\{u'\}}$ are deterministic functions of $\{X^{(1)}_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}$ and $\{X^{(2)}_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}$, respectively, as defined in (4) and (7). Next,

$$\begin{aligned}
L &\overset{(2)}{=} H\Big(\sum_{v\in[V]} W_{u',v}\Big) \quad(103)\\
&\overset{(1)}{=} H\Big(\sum_{v\in[V]} W_{u',v}\ \Big|\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}\Big) \quad(104)\\
&= I\Big(\sum_{v\in[V]} W_{u',v};\,Y^{(1)}_{u'}\ \Big|\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}\Big) + \underbrace{H\Big(\sum_{v\in[V]} W_{u',v}\ \Big|\ Y^{(1)}_{u'},\,\{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}\Big)}_{\overset{(102)}{=}\,0} \quad(105)\\
&\le H\Big(Y^{(1)}_{u'}\ \Big|\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in([U]\setminus\{u'\})\times[V]}\Big) \quad(106)\\
&\le H\big(Y^{(1)}_{u'}\big) \le L^{(1)}_Y, \quad(107)\\
&\Rightarrow\ R^{(1)}_Y \overset{(11)}{=} \frac{L^{(1)}_Y}{L} \ge 1. \quad(108)
\end{aligned}$$

C. Converse for $R^{(2)}_X \ge \frac{1}{V_0U_0 - T}$ When $U_0V_0 > T$

We prove the converse bound for the second-round first-hop message rate. Consider the setting where $\mathcal V^{(1)}_u = \mathcal V^{(2)}_u = [V_0]$, $\mathcal U^{(1)} = [U_0+1]$, $\mathcal U^{(2)} = [U_0]$, and let $\mathcal T \subset [U_0]\times[V_0]$ with $|\mathcal T| = T$. Before presenting the formal converse proof, we first outline the main intuition. The key observation is that, due to the server security constraint, the collection consisting of all first-round second-hop messages together with any $T$ second-round first-hop messages is statistically independent of the desired sum. Therefore, these messages cannot contribute any useful information for decoding the sum at the server.
As a result, all information required to recover $\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}$ from the first hop must be conveyed by the remaining $U_0V_0 - T$ second-round messages. Since the entropy of the desired sum equals $L$, it follows that, on average, each of these remaining messages must carry at least $L/(U_0V_0 - T)$ symbols of information. The following proof formalizes this intuition using standard information-theoretic inequalities. Specifically, from the server security constraint in (10), we have

$$\begin{aligned}
0 &\overset{(10)}{=} I\Big(\{W_{u,v}\}_{(u,v)\in[U]\times[V]};\ \{Y^{(1)}_u\}_{u\in[U]},\ \{Y^{(2)}_u\}_{u\in[U_0+1]}\ \Big|\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big)\\
&\ge I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]}\ \Big|\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big) \quad(109)\\
&\overset{(6)}{=} I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal T}\ \Big|\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big) \quad(110)\\
&= I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal T},\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big)\\
&\qquad - \underbrace{I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big)}_{\overset{(79)}{=}\,0} \quad(111)\\
&\ge I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal T}\Big), \quad(112)
\end{aligned}$$

where the second term of (111) is zero by applying Lemma 2 with $\mathcal T = [U_1]\times[V_1]$, $V_2 = V_3 = V_0$, $U_2 = U_0$, and $U_3 = U_0+1$. Next, consider $\mathcal V^{(1)} = \mathcal V^{(2)} = [V_0]$ and $\mathcal U^{(1)} = \mathcal U^{(2)} = [U_0]$.
From the correctness constraint (8), we have

$$\begin{aligned}
L &\overset{(2)}{=} H\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}\Big) \quad(113)\\
&= \underbrace{H\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[U_0]}\Big)}_{\overset{(8)}{=}\,0} + I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[U_0]}\Big) \quad(114)\\
&\le I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big) \quad(115)\\
&\overset{(7)}{=} I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}\Big) \quad(116)\\
&= I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{X^{(2)}_{u,v}\}_{(u,v)\in([U_0]\times[V_0])\setminus\mathcal T}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal T}\Big) + \underbrace{I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{X^{(2)}_{u,v}\}_{(u,v)\in\mathcal T}\Big)}_{\overset{(112)}{=}\,0} \quad(117)\\
&\le H\Big(\{X^{(2)}_{u,v}\}_{(u,v)\in([U_0]\times[V_0])\setminus\mathcal T}\Big) \quad(118)\\
&\le \sum_{(u,v)\in([U_0]\times[V_0])\setminus\mathcal T} H\big(X^{(2)}_{u,v}\big) \le (V_0U_0 - T)\,L^{(2)}_X, \quad(119)\\
&\Rightarrow\ R^{(2)}_X \overset{(11)}{=} \frac{L^{(2)}_X}{L} \ge \frac{1}{V_0U_0 - T}. \quad(120)
\end{aligned}$$

We next justify the steps in the above derivation. (113) follows from the definition of the input size in (2). The conditional entropy term in (114) is zero due to the correctness constraint in (8), since $\mathcal V^{(1)}_u = \mathcal V^{(2)}_u = \{(u,v)\}_{v\in[V_0]}$ and $\mathcal U^{(1)} = \mathcal U^{(2)} = [U_0]$, which guarantees that $\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}$ can be recovered from $\{Y^{(1)}_u\}_{u\in[U_0]}$ and $\{Y^{(2)}_u\}_{u\in[U_0]}$. (115) follows since adding side information cannot decrease mutual information. Equality (116) holds because $\{Y^{(2)}_u\}_{u\in[U_0]}$ is a deterministic function of $\{X^{(2)}_{u,v}\}_{(u,v)\in[U_0]\times[V_0]}$ according to (7).

D. Lower Bound $R^{(2)}_Y \ge \frac{1}{U_0 - \lfloor T/V_0 \rfloor}$ When $U_0V_0 > T$

We prove the converse bound for the second-round second-hop message rate.
Consider the setting $\mathcal V^{(1)}_u = \mathcal V^{(2)}_u = [V_0]$, $\mathcal U^{(1)} = [U_0+1]$, $\mathcal U^{(2)} = [U_0]$, and let $\mathcal T$ with $|\mathcal T| = T$ be chosen such that $[\lfloor T/V_0 \rfloor]\times[V_0] \subseteq \mathcal T$. Before presenting the formal lower bound proof, we briefly explain the main idea. Due to the server security constraint, once the server is given the messages and keys corresponding to any set $\mathcal T$ of size $T$, the desired sum becomes statistically independent of all first-round second-hop messages and of a subset of second-round second-hop messages. In particular, by choosing $\mathcal T$ such that $[\lfloor T/V_0 \rfloor]\times[V_0] \subseteq \mathcal T$, the second-round messages $\{Y^{(2)}_u\}_{u\in[\lfloor T/V_0 \rfloor]}$ carry no useful information for decoding the desired sum. Consequently, in the second hop, only the remaining $U_0 - \lfloor T/V_0 \rfloor$ second-round messages can convey information about $\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}$. Since the entropy of the desired sum equals $L$, this yields the conservative lower bound $R^{(2)}_Y \ge 1/(U_0 - \lfloor T/V_0 \rfloor)$. We remark that this bound may not be tight, as it follows from a worst-case integer partition of $\mathcal T$ across relays; a potentially tighter bound of the form $1/(U_0 - T/V_0)$ is suggested by symmetry, but establishing such a result would require fundamentally different techniques. Then, from the server security constraint (10), we have

$$\begin{aligned}
0 &\overset{(10)}{=} I\Big(\{W_{u,v}\}_{(u,v)\in[U]\times[V]};\ \{Y^{(1)}_u\}_{u\in[U]},\ \{Y^{(2)}_u\}_{u\in[U_0+1]}\ \Big|\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big)\\
&\ge I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]}\ \Big|\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big) \quad(121)\\
&\overset{(7)}{=} I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[\lfloor T/V_0\rfloor]}\ \Big|\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big) \quad(122)\\
&= I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[\lfloor T/V_0\rfloor]},\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big)\\
&\qquad - \underbrace{I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v},\ \{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}\Big)}_{\overset{(79)}{=}\,0} \quad(123)\\
&\ge I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[\lfloor T/V_0\rfloor]}\Big), \quad(124)
\end{aligned}$$

where (122) holds because $[\lfloor T/V_0\rfloor]\times[V_0] \subseteq \mathcal T$. Hence, for all $u\in[\lfloor T/V_0\rfloor]$, the second-round symbols $\{X^{(2)}_{u,v}\}_{v\in[V_0]}$ are completely determined by $\{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}$, which are already conditioned upon. As a result, the corresponding second-round messages $\{Y^{(2)}_u\}_{u\in[\lfloor T/V_0\rfloor]}$ are deterministic functions of the conditioning variables and can be added to the mutual information without affecting its value. The second term in (123) is equal to zero by Lemma 2: by setting $\mathcal T = [U_1]\times[V_1]$ with $V_2 = V_3 = V_0$, $U_2 = U_0$, and $U_3 = U_0+1$, the desired sum $\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}$ is independent of $\sum_{(u,v)\in[U_0+1]\times[V_0]} W_{u,v}$ and of $\{W_{u,v},Z_{u,v}\}_{(u,v)\in\mathcal T}$. Next, consider $\mathcal V^{(1)} = \mathcal V^{(2)} = [V_0]$ and $\mathcal U^{(1)} = \mathcal U^{(2)} = [U_0]$.
From the correctness constraint (8), we have

$$\begin{aligned}
L &\overset{(2)}{=} H\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}\Big) \quad(125)\\
&= \underbrace{H\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[U_0]}\Big)}_{\overset{(8)}{=}\,0} \quad(126)\\
&\quad + I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[U_0]}\Big) \quad(127)\\
&= \underbrace{I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[\lfloor T/V_0\rfloor]}\Big)}_{\overset{(124)}{=}\,0} + I\Big(\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v};\ \{Y^{(2)}_u\}_{u\in[U_0]\setminus[\lfloor T/V_0\rfloor]}\ \Big|\ \{Y^{(1)}_u\}_{u\in[U_0]},\ \{Y^{(2)}_u\}_{u\in[\lfloor T/V_0\rfloor]}\Big) \quad(128)\\
&\le H\Big(\{Y^{(2)}_u\}_{u\in[U_0]\setminus[\lfloor T/V_0\rfloor]}\Big) \quad(129)\\
&\le \sum_{u\in[U_0]\setminus[\lfloor T/V_0\rfloor]} H\big(Y^{(2)}_u\big) \le \Big(U_0 - \Big\lfloor \frac{T}{V_0} \Big\rfloor\Big) L^{(2)}_Y, \quad(130)\\
&\Rightarrow\ R^{(2)}_Y \overset{(11)}{=} \frac{L^{(2)}_Y}{L} \ge \frac{1}{U_0 - \lfloor T/V_0 \rfloor}, \quad(131)
\end{aligned}$$

where the conditional entropy in (126) is zero due to the correctness constraint in (8): under the setting $\mathcal V^{(1)}_u = \mathcal V^{(2)}_u = \{(u,v)\}_{v\in[V_0]}$ and $\mathcal U^{(1)} = \mathcal U^{(2)} = [U_0]$, the desired sum $\sum_{(u,v)\in[U_0]\times[V_0]} W_{u,v}$ can be reliably recovered from $\{Y^{(1)}_u\}_{u\in[U_0]}$ and $\{Y^{(2)}_u\}_{u\in[U_0]}$, so conditioning on these messages leaves no residual uncertainty.

VI. CONCLUSION

In this paper, we studied information-theoretic hierarchical secure aggregation under user and relay dropouts with collusion constraints. We established correctness and security guarantees in a two-round hierarchical model and characterized the communication cost for most links. While tight results were obtained in several regimes, a gap remains in the second-round relay-to-server communication. Several directions remain open. A primary question is to close the gap in the second-round communication. It is also of interest to extend the model to structured collusion settings, such as group-wise security constraints.
Another important direction is to characterize the optimal key rate and to understand the tradeoff between shared randomness and communication under dropout. Further extensions include heterogeneous network settings with asymmetric user distributions and more realistic dropout models. These directions may provide a deeper understanding of the fundamental limits of hierarchical secure aggregation.
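For concreteness, the threshold phenomenon and the per-link rate bounds established in Section V can be evaluated for specific parameters. The helper below is our own summary sketch (the function name is ours): it returns the normalized rate lower bounds, and only a lower bound is reported for $R^{(2)}_Y$ since that hop is not fully characterized.

```python
from math import floor

def hsa_rate_lower_bounds(U0, V0, T):
    """Normalized (by L) rate lower bounds from the converse in Section V.
    U0: surviving relays, V0: surviving users per relay, T: collusion
    threshold.  Returns None in the infeasible regime U0*V0 <= T."""
    if U0 * V0 <= T:                          # Section V-A: infeasible
        return None
    return {
        "R1_X": 1.0,                          # first round, first hop  (99)
        "R1_Y": 1.0,                          # first round, second hop (108)
        "R2_X": 1.0 / (U0 * V0 - T),          # second round, first hop (120)
        "R2_Y": 1.0 / (U0 - floor(T / V0)),   # second round, second hop:
                                              # lower bound only        (131)
    }

assert hsa_rate_lower_bounds(2, 2, 4) is None     # U0*V0 = T: infeasible
bounds = hsa_rate_lower_bounds(3, 4, 5)
assert bounds["R2_X"] == 1.0 / 7
assert bounds["R2_Y"] == 0.5
```

Such a tabulation makes the sharp threshold visible: for fixed $T$, feasibility switches on exactly when the number of surviving users $U_0V_0$ exceeds $T$.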
