Decision-Scaled Scenario Approach for Rare Chance-Constrained Optimization*

Jaeseok Choi¹, Anand Deo², Constantino Lagoa³, and Anirudh Subramanyam¹

¹ Department of Industrial and Manufacturing Engineering, Pennsylvania State University, {jxc6747, subramanyam}@psu.edu
² Decision Sciences, Indian Institute of Management, anand.deo@iimb.ac.in
³ School of Electrical Engineering and Computer Science, Pennsylvania State University, cml18@psu.edu

Abstract. Chance-constrained optimization is a suitable modeling framework for safety-critical applications where violating constraints is nearly unacceptable. The scenario approach is a popular solution method for these problems, due to its straightforward implementation and ability to preserve problem structure. However, in the rare-event regime where constraint violations must be kept extremely unlikely, the scenario approach becomes computationally infeasible due to the excessively large sample sizes it demands. We address this limitation with a new yet straightforward decision-scaling method that relies exclusively on original data samples and a single scalar hyperparameter that scales the constraints in a way amenable to standard solvers. Our method leverages large deviation principles under mild nonparametric assumptions satisfied by commonly used distribution families in practice. For a broad class of problems satisfying certain practically verifiable structural assumptions, the method achieves a polynomial reduction in sample size requirements compared to the classical scenario approach, while also guaranteeing asymptotic feasibility in the rare-event regime. Numerical experiments spanning finance and engineering applications show that our decision-scaling method significantly expands the scope of problems that can be solved both efficiently and reliably.
Keywords: chance-constrained optimization, scenario approach, rare events, large deviation principle

Mathematics Subject Classification: 90C15, 60F10, 65C05

1 Introduction

Rare but catastrophic events, such as the COVID-19 pandemic and the 2025 Iberian Peninsula blackout, highlight the critical need for decision frameworks that balance safety with efficiency in engineering and risk management [3, 32]. Chance-Constrained Optimization (CCO), introduced by Charnes et al. [18], provides a structured framework for incorporating safety requirements into decision-making.

* Jaeseok Choi and Anirudh Subramanyam acknowledge support by the U.S. National Science Foundation under Grant DMS-2229408. Constantino Lagoa acknowledges support by the U.S. National Science Foundation under Grant ECCS-2317272.

We consider the following general CCO formulation:

    minimize_{x ∈ X}  c(x)
    subject to  P_ξ({z : g(x, z) ≤ 0}) ≥ 1 − ε.    (CCP_ε)

Here, x ∈ R^n is the decision vector restricted to a deterministic closed convex set X ⊆ R^n, c : R^n → R is a convex cost function, and ε ∈ (0, 1) is a prescribed risk tolerance. The uncertainty is captured by a random vector ξ : Ω → Ξ ⊆ R^m defined on a probability space (Ω, F, P). Its probability distribution, denoted by P_ξ, is supported on the set Ξ¹; we emphasize that our method does not require knowledge of the full distributional form, only certain properties about its tail behavior which we formalize later. The constraint function g : X × Ξ → R encodes performance requirements, where g(·, z) is closed and convex for any fixed z and g(x, ·) is Borel measurable and continuous for any fixed x. The formulation accommodates joint chance constraints of the form P_ξ({z : g_1(x, z) ≤ 0, g_2(x, z) ≤ 0, ..., g_K(x, z) ≤ 0}) ≥ 1 − ε by defining g(x, z) = max_{i=1,...,K} g_i(x, z).
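To make the joint-to-single reduction concrete, the following sketch estimates the satisfaction probability P_ξ({z : g(x, z) ≤ 0}) by plain Monte Carlo, with g taken as the pointwise maximum of two individual constraints. The two linear constraints, the Gaussian choice of ξ, and the particular x are our own illustrative assumptions, not taken from the paper's benchmarks.

```python
import numpy as np

def g_joint(x, Z):
    """g(x, z) = max(g_1, g_2) with two hypothetical linear constraints:
    g_1(x, z) = x^T z - 1 and g_2(x, z) = -x^T z / 2 - 1."""
    t = Z @ x                       # x^T z for every sample row z
    return np.maximum(t - 1.0, -0.5 * t - 1.0)

rng = np.random.default_rng(0)
x = np.array([0.1, 0.1])
Z = rng.standard_normal((100_000, 2))        # ξ ~ N(0, I), an illustrative choice
p_satisfied = np.mean(g_joint(x, Z) <= 0.0)  # Monte Carlo estimate of P(g(x, ξ) ≤ 0)
print(f"estimated P(g(x, ξ) ≤ 0) ≈ {p_satisfied:.4f}")
```

Such a direct Monte Carlo check needs on the order of 1/ε samples merely to resolve a violation probability near ε, which is precisely what becomes impractical in the rare-event regime discussed below.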
The objective is to minimize c(x) while ensuring that the decision x satisfies the performance requirement g(x, z) ≤ 0 with probability at least 1 − ε; equivalently, the probability of constraint violation must not exceed ε.

This paper specifically focuses on the rare-event regime where ε is exceptionally small, typically below 10⁻³. Such stringent reliability requirements arise naturally in many safety-critical applications. For example, structural engineering demands failure probabilities below 10⁻⁴ for critical infrastructures [25]; power grids target maximum loss-of-load probabilities of roughly 10⁻⁴ [42]; and financial institutions maintain default rates below 10⁻³ [64].

CCOs have been successfully applied across diverse domains, including power systems [6], emergency response design [5], portfolio optimization [10], and humanitarian logistics [28]. Despite their broad applicability, solving these problems remains computationally intractable in many practical settings. The fundamental difficulty stems from the chance constraint itself: evaluating whether a candidate solution x satisfies P_ξ({z : g(x, z) ≤ 0}) ≥ 1 − ε requires computing a multi-dimensional integral over P_ξ, which admits closed-form solutions only for highly specialized combinations of constraint functions and probability distributions [1, 34]. Moreover, even when g(·, z) is convex for each z, the feasible region defined by the chance constraint is typically non-convex, precluding the use of modern convex optimization algorithms [44]. Indeed, the general CCO problem is known to be NP-hard [39].

Given these complexities, much of the existing research has focused on sample-based approximation methods with two primary approaches: Sample Average Approximation (SAA) and the Scenario Approach (SA). The former approximates the chance constraint in (CCP_ε) by its empirical counterpart.
While SAA solutions converge to the true solution as sample size increases [38, 45], the method faces significant practical limitations: the resulting problem remains non-convex, and the indicator functions encoding constraint violations often necessitate mixed-integer reformulations [39], substantially increasing computational burden. Although [30, 46] have proposed smoothed SAA methods using continuous approximations, convexity still cannot be guaranteed.

In contrast, the SA, based on earlier results in statistical learning theory (e.g., see Tempo et al. [56], Vidyasagar [59]) and later adapted for convex optimization problems by Calafiore and Campi [11], offers a more tractable alternative. Rather than approximating the probability in (CCP_ε), SA replaces the chance constraint with a finite set of deterministic constraints,

    g(x, z^(j)) ≤ 0,  j = 1, 2, ..., N,    (1)

enforced simultaneously for the N samples z^(1), z^(2), ..., z^(N) drawn from P_ξ. This formulation preserves convexity: when g(·, z) is convex for each z, the resulting scenario problem remains convex and thus computationally tractable. Furthermore, for a given risk tolerance level ε and sampling confidence β, explicit bounds on the required sample size have been derived [11, 12, 13, 14]. We formally review these bounds later; for now, we highlight that these bounds provide distribution-free guarantees on solution feasibility that hold regardless of the underlying probability distribution.

A major drawback of the SA, however, is that the required sample size N scales as ε⁻¹ to ensure chance constraint feasibility. This requirement becomes computationally prohibitive for safety-critical applications where the allowed risk level ε is typically smaller than 10⁻³.

¹ The support Ξ is the smallest closed set with P_ξ(Ξ) = 1.
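To illustrate how the sampled constraints (1) turn the chance constraint into a deterministic convex program, consider a deliberately tiny one-dimensional instance of our own (not one of the paper's benchmarks) with g(x, z) = xz − 1 and cost c(x) = −x; the scenario problem then has a closed-form optimum, so no solver is needed for the sketch:

```python
import numpy as np

# Scenario problem (SP_N) for the toy instance:
#   minimize -x  subject to  x * z^(j) - 1 <= 0 for j = 1..N,  x >= 0.
# With all z^(j) > 0, the N constraints collapse to x <= 1 / max_j z^(j).
rng = np.random.default_rng(1)
N = 2000
z = np.abs(rng.standard_normal(N))   # scenarios drawn i.i.d. (illustrative |N(0,1)|)
x_star = 1.0 / z.max()               # closed-form scenario solution

# Every sampled constraint holds at the scenario optimum by construction.
print(f"N = {N}, x* = {x_star:.4f}, max residual = {np.max(x_star * z - 1.0):.2e}")
```

The point of the example is the bookkeeping: each of the N samples contributes one convex constraint, so the problem size grows linearly in N.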
Indeed, since each sample introduces a constraint in the optimization problem, larger sample sizes directly translate to increased computational complexity. For instance, to guarantee a maximum risk of ε = 0.1% with 1 − β = 95% confidence, a standard sample size bound from [16] requires solving an optimization problem with 7,992 constraints, even when the problem features only a single decision variable. This requirement becomes prohibitive for large systems with stringent risk requirements and limited computational resources, such as real-time model predictive control and other edge applications with constrained hardware.

Motivated by these challenges, this paper develops a sample-efficient SA for the rare-event regime while maintaining theoretical feasibility guarantees. Our central research question is whether we can leverage the tail properties of P_ξ and the structure of g to reduce sample complexity without restricting applicability. To address this question, we propose a novel decision-scaled SA that exploits the interplay between the tail behavior of P_ξ and the asymptotic structure of g. Our method replaces the classical SA constraints (1) with the following scaled constraints:

    g(s^{−γ} x, z^(j)) ≤ 0,  j = 1, 2, ..., N_s,    (2)

where s ≥ 1 is a tunable hyperparameter and γ ≠ 0 is a constant determined by the asymptotic structure of g. Our main result provides an explicit bound for N_s that scales as ε^{−1/s^α}, where α > 0 is a certain tail index of the distribution P_ξ, which is known for common distributions or can be reliably estimated from data. The bound for N_s depends only on the problem dimension n, risk level ε, confidence level β, hyperparameter s, and the tail index α. Notably, this bound is orders of magnitude smaller than the sample size required by the classical SA.
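The contrast between the two sample-size requirements can be checked directly. The sketch below implements the classical bound from [16] and its decision-scaled counterpart, which is simply the same bound evaluated at the inflated risk level ε^{1/s^α}; the function names are our own.

```python
import math

def scenario_bound(eps, beta, n):
    """Classical scenario bound N(eps, beta) = (2/eps) * (log(1/beta) + n), rounded up."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + n))

def decision_scaled_bound(eps, beta, n, s, alpha):
    """Decision-scaled bound: the classical bound at risk level eps**(1/s**alpha)."""
    return scenario_bound(eps ** (1.0 / s ** alpha), beta, n)

eps, beta, n = 1e-3, 0.01, 1
print(scenario_bound(eps, beta, n))                         # classical SA
print(decision_scaled_bound(eps, beta, n, s=1.2, alpha=2))  # decision-scaled SA
```

For ε = 10⁻³, β = 0.01, n = 1 this reproduces the 11,211 versus 1,359 comparison discussed in Section 3.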
We show that when the constraint g exhibits a practically verifiable asymptotic structure and the distribution P_ξ satisfies certain nonparametric tail conditions, solving the scenario problem using the scaled constraints (2) guarantees chance-constraint feasibility for all sufficiently small ε. Our main contributions can be summarized as follows.

1. Polynomial reduction in sample complexity. Our decision-scaled SA achieves a polynomial reduction in the required sample size compared to classical SA, from O(ε⁻¹) to O(ε^{−1/s^α}) for any s > 1, while providing asymptotic feasibility guarantees for (CCP_ε). This improvement is realized through a simple scaling of the decision vector requiring no specialized algorithmic machinery or distribution-specific tuning.

2. Wide applicability through mild structural conditions. Roughly speaking, our approach requires only that the constraint function g(x, z) be asymptotically homogeneous (a property satisfied by broad classes of constraints including linear, quadratic, posynomial, and other nonlinear forms) and that P_ξ have a certain polynomial tail decay in log-scale. This latter condition is satisfied by many distributions encountered in practice and can also be verified using standard techniques from extreme value theory. These mild assumptions enable us to establish a uniform large deviation principle for chance constraints in the rare-event regime.

3. Numerical validation. We validate our decision-scaled SA through numerical experiments on portfolio optimization, structural engineering, and norm optimization benchmarks, which feature linear, nonlinear, and joint chance constraint structures, respectively. In these test cases, our method achieves substantial computational reductions compared to classical SA while guaranteeing feasibility. Open-source implementations are provided.
1.1 Related Literature

Various approaches have recently been developed to address the computational challenges of solving CCOs, particularly in the rare-event regime [55]. We categorize these methods and position our contribution within the existing landscape.

Several works have adapted Importance Sampling (IS) concepts to improve efficiency in evaluating the chance constraints in (CCP_ε). Early work by Rubinstein [51, 52] established IS techniques for rare events in optimization contexts. Nemirovski and Shapiro [43] introduced sample scaling via majorizing distributions for SA, although finding suitable distributions for a given application remains challenging. Barrera et al. [4] combined SAA with IS to uniformly reduce the required sample size across all feasible decisions. However, their approach relies on specific problem structures, such as independent Bernoulli distributed uncertainty, to derive a suitable IS distribution. Blanchet et al. [9] proposed conditional IS for heavy-tailed distributions in SA, requiring analytically computable constraint approximations. Domain-specific IS methods like Lukashevich et al. [40] exploit structure in specific power systems optimization problems.

Recent works leverage Large Deviation Principles (LDPs) for rare-event CCOs. [57] reformulated chance constraints as bilevel problems, eliminating sampling but requiring the uncertainty to follow Gaussian mixture distributions. [8] derived asymptotic relationships between CCO and SA under mild assumptions similar to ours, without addressing practical sample complexity or algorithmic implementation. Similarly, Deo and Murthy [24] characterized the scaling behavior of optimal values in the rare-event regime, focusing on theoretical properties rather than computational methods. Our earlier work [20] introduced constraint scaling for linear constraints to reduce the sample complexity of SA.
This paper extends that idea by scaling decision variables instead of constraints, enabling application to general nonlinear constraints.

Several techniques improve SA efficiency without specifically targeting rare events. Campi and Garatti [15], Romao et al. [50] developed constraint removal methods that improve the performance of SA by achieving less conservative solutions, rather than reducing sample complexity. Carè et al. [17], Schildbach et al. [53] proposed methods to reduce the required sample size of SA by exploiting problem structure or domain knowledge. However, these approaches often require iterative algorithms or are tailored to specific applications such as model predictive control.

In contrast to the aforementioned works, our decision-scaling method provides a general recipe for achieving polynomial sample reduction for broad classes of problems without requiring distribution-specific tuning, analytical approximations, or domain-specific knowledge.

Outline. The rest of the paper is organized as follows. Section 2 provides technical preliminaries on tail modeling of distributions, followed by our main assumptions on the uncertainty distribution P_ξ and the constraint function g. Section 3 introduces our proposed decision-scaled SA and presents our main theoretical contributions. Section 4 provides systematic and practical verification procedures for our assumptions along with illustrative examples. Section 5 demonstrates the effectiveness of our method through experiments on three benchmark problems: portfolio optimization, reliability-based column design, and norm optimization. All proofs are deferred to Section 6 to maintain readability of the main exposition.

Notations. Scalars are denoted by plain type (a), vectors by boldface (a), and sets by calligraphic script (A). We write R_{>0} := (0, ∞) and R_{≥0} := [0, ∞).
The zero vector is 0 and the vector of ones is 1; their dimensions will be clear from context. For sets A and B, their Cartesian product is A × B, and their Minkowski sum is A + B := {a + b : a ∈ A, b ∈ B}. The closure, interior, and complement of A are denoted by Ā, A°, and A^c, respectively. Unless stated otherwise, ∥·∥ denotes the ℓ2-norm. The compact ball of radius θ > 0 centered at c is B_θ(c) := {x : ∥x − c∥ ≤ θ}, with B_θ := B_θ(0). Given a function f : R^n → R, the a-superlevel set of f restricted to B_θ is L^θ_{≥a}(f) := {x ∈ B_θ : f(x) ≥ a} and the corresponding strict superlevel set is L^θ_{>a}(f) := {x ∈ B_θ : f(x) > a}. When θ = ∞, we write L_{≥a}(f) and L_{>a}(f), respectively. The notation {z_u} denotes a collection of vectors parameterized by u ∈ R_{>0}. We write lim_{u→∞} z_u = z if and only if lim_{k→∞} z_{u_k} = z for every sequence {u_k}_{k∈N} ⊂ R_{>0} satisfying lim_{k→∞} u_k = ∞.

2 Preliminaries and Assumptions

2.1 Regular Variation

Definition 2.1. A function f : R_{>0} → R_{>0} is said to be regularly varying with index κ ∈ R_{>0}² if

    lim_{u→∞} f(uz)/f(u) = z^κ,  ∀z > 0.    (3)

We write f ∈ RV(κ), or simply f ∈ RV when the index is not explicitly needed. Regular variation characterizes functions with polynomial-like asymptotic behavior. Examples of such functions include z^κ, z^κ log(1 + z), and z^κ exp(√(log(1 + z))).

Definition 2.2. A function f : Z → R_{≥0}, where Z ⊆ R^m is a cone, is said to be multivariate regularly varying if there exists h ∈ RV and a continuous function f∗ : Z → R_{≥0} such that

    lim_{u→∞} f(u z_u)/h(u) = f∗(z)    (4)

for every convergent sequence {z_u} → z. We write f ∈ MRV(Z, h, f∗) or simply f ∈ MRV. Here, h and f∗ encode the asymptotic radial scaling and directional dependence of f, respectively. We refer the reader to Resnick [48, Chapters 2 and 5] for more details.
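As a quick numerical illustration of Definition 2.1 (our own example, not from the paper), the function f(z) = z² log(1 + z) is regularly varying with index κ = 2: the ratio f(uz)/f(u) approaches z² as u grows, though only at a logarithmic rate.

```python
import math

def f(z):
    # f(z) = z^2 * log(1 + z), an RV(2) function: the log factor is slowly varying.
    return z ** 2 * math.log(1.0 + z)

z = 3.0
us = (1e3, 1e9, 1e27)
ratios = [f(u * z) / f(u) for u in us]
for u, r in zip(us, ratios):
    print(f"u = {u:g}: f(uz)/f(u) = {r:.3f}   (limit z^2 = {z**2:.0f})")
```

The slow convergence visible in the output is typical: regular variation constrains only the leading polynomial rate, not lower-order factors.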
2.2 Assumptions on the Uncertainty Distribution

The multivariate extension of regular variation provides a natural framework for characterizing the tail behavior of P_ξ.

Assumption 2.3. The probability distribution P_ξ admits a density function of the form exp(−Q(z)) where Q ∈ MRV(Ξ, q, λ) with q ∈ RV(α) for some α > 0. Additionally, λ(z) > 0 for all z ∈ Ξ ∩ {z′ ∈ R^m : ∥z′∥ = 1}.

Remark 2.4. Implicit in Assumption 2.3 is that the support set Ξ must be a closed cone. The closedness follows from the definition of support, while the conic property arises from the multivariate regular variation of Q.

² Regular variation can permit κ ∈ R in general.

Intuitively, Assumption 2.3 states that the negative log-density Q(uz) behaves like q(u)λ(z) for large u and unit z, where q governs how rapidly Q grows as one moves away from the origin, while λ acts as a directional coefficient modulating this growth. We emphasize that this assumption is nonparametric in nature. This is in contrast to existing literature that often relies on specific parametric distribution families [4, 57]. It imposes only a mild structural condition on the distribution tail, requiring that the log-density exhibit polynomial behavior asymptotically. Importantly, this assumption does not require the exact functional form of the distribution but only its tail behavior, eliminating the need for normalization constants that are often intractable to compute. The rate of this behavior is governed by the tail index α > 0, where a larger α corresponds to a lighter tail (i.e., faster decay of tail probabilities). This general framework encompasses a wide range of commonly used distributions and their mixtures, including light-tailed families like the Gaussian (α = 2) and Exponential (α = 1), as well as sub-exponential families like the Weibull (α < 1).
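For intuition, the standard Gaussian case can be checked by hand and by code: its density is proportional to exp(−∥z∥²/2), so Q(z) = ∥z∥²/2, and choosing q(u) = u² ∈ RV(2) gives Q(uz)/q(u) = ∥z∥²/2 = λ(z) exactly, confirming the tail index α = 2. A minimal sketch of this check, with our own variable names:

```python
import numpy as np

def Q(z):
    # Negative log-density of a standard Gaussian, up to the normalization constant
    # (which Assumption 2.3 does not need): Q(z) = ||z||^2 / 2.
    return 0.5 * float(np.dot(z, z))

def q(u):
    return u ** 2            # q ∈ RV(2), so the tail index is α = 2

z = np.array([0.6, 0.8])     # a unit-norm direction
lam = Q(z)                   # λ(z) = ||z||^2 / 2 = 1/2 on the unit sphere
for u in (10.0, 1e3, 1e6):
    print(u, Q(u * z) / q(u), lam)   # the ratio equals λ(z) exactly for the Gaussian
```

For the Gaussian the limit is attained at every finite u; for other families in the MRV class the ratio only converges as u grows.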
This framework is flexible enough to accommodate nonparametric tail correlations, as the dependence structure between the marginal densities is implicitly encoded in λ. This includes various dependences modeled by copulas, such as the Gumbel, Clayton, and other Archimedean types; see Deo and Murthy [23, EC.3] for further details on such constructions. We emphasize that a significant advantage of our proposed decision-scaled SA is that it does not require their explicit distributional formulation. Our method's implementation relies solely on α, which characterizes the tail decay rate of the marginal distributions and is invariant to the underlying dependence structure. This property makes our approach truly practical and nonparametric; from the implementation perspective, distributions with the same tail index, such as Chi and Gaussian (α = 2), are treated identically.

The tail index α is often a well-known parameter for common distributions. Table 1 provides several examples of distribution families and their corresponding tail indices. The practicality of our method is further enhanced by the fact that α can be reliably estimated from data without fitting an entire (parametric) distribution to the data [26, 27, 58].

Table 1. Examples of distributions satisfying Assumption 2.3

    Distribution family¹                                                  Tail index α
    Chi-squared, Erlang, Exponential, Gamma, Inverse Gaussian, Laplace    1
    Chi, Gaussian, Gaussian mixtures, Maxwell–Boltzmann, Rayleigh         2
    Generalized gamma with shape parameter k                              k
    Generalized Gaussian with shape parameter k                           k
    Weibull with shape parameter k                                        k

¹ Unless explicitly specified (e.g., multivariate normal, Gaussian mixtures), these families are constructed from the corresponding univariate marginals, often assuming independence or a specified copula.
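A crude way to recover α from data, shown below for a univariate Weibull sample whose shape (hence tail index) is known, is to regress log(−log S_n(u)) on log u over the empirical upper tail, since −log P(ξ > u) grows like u^α for Weibull-type tails. This is only an illustration of the idea; the paper points to dedicated estimators [26, 27, 58] for reliable practice.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha_true = 2.0
xs = np.sort(rng.weibull(alpha_true, size=200_000))  # Weibull(shape k) has tail index k
n = xs.size

# Empirical survival S_n(u) at thresholds u taken from the upper tail of the sample.
lo, hi = int(0.90 * n), int(0.9999 * n)
u = xs[lo:hi]
surv = 1.0 - np.arange(lo + 1, hi + 1) / n           # S_n at each sorted threshold
mask = surv > 0

# For Weibull-type tails, log(-log S(u)) ~ alpha * log(u) + const, so the fitted
# slope is a rough estimate of the tail index alpha.
slope, _ = np.polyfit(np.log(u[mask]), np.log(-np.log(surv[mask])), 1)
print(f"estimated tail index ≈ {slope:.2f} (true α = {alpha_true})")
```

The fit uses only order statistics, so no parametric family is ever fitted to the bulk of the data, consistent with the nonparametric spirit of Assumption 2.3.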
2.3 Assumptions on the Constraint Function

We remind the reader that the deterministic feasible set X is closed and convex, and the constraint function g : X × Ξ → R is such that g(·, z) is closed and convex for any fixed z. To characterize its asymptotic structure, we first introduce the following concept of a rate-parameterized asymptotic cone.

Definition 2.5 (Asymptotic Cone). For a nonempty set A and θ ≠ 0, an asymptotic cone of A with rate θ is defined as

    A^∞_θ := { y : ∃{a_u} ⊂ A, ∃{λ_u} ⊂ R_{>0} s.t. λ_u → ∞ and a_u/(λ_u)^θ → y }.    (5)

Definition 2.5 unifies two fundamental geometric concepts in optimization. Specifically, when θ > 0, A^∞_θ coincides with the horizon cone [49, Definition 3.3], capturing the asymptotic directions along which the set A extends to infinity. Conversely, when θ < 0, A^∞_θ corresponds to the tangent cone of A at the origin [49, Definition 6.1]. In either case, the set A^∞_θ forms a cone characterizing the asymptotic geometry of A under the scaling regime governed by θ.

Notably, for any closed cone A, we have A^∞_θ = A for all θ ≠ 0. In particular, under Assumption 2.3, since the support set Ξ is a closed cone (Remark 2.4), we obtain Ξ^∞_θ = Ξ for any θ ≠ 0. This unified framework allows us to consistently analyze both the unbounded and infinitesimal properties of the feasible region depending on the problem structure.

Assumption 2.6. The constraint function g satisfies the following properties:

(A1) There exist constants γ ≠ 0, ρ ≥ 0, and a continuous function g∗ : X^∞_γ × Ξ → R such that, for every y ∈ X^∞_γ and w ∈ Ξ, whenever sequences {x_u} ⊂ X and {z_u} ⊂ Ξ satisfy

    lim_{u→∞} x_u/u^γ = y  and  lim_{u→∞} z_u/u = w,    (6)

the following limit holds:

    lim_{u→∞} g(x_u, z_u)/u^ρ = g∗(y, w).    (7)
(A2) X^∞_γ \ {0} ≠ ∅;

(A3) g∗(y, 0) < 0 for all y ∈ X^∞_γ \ {0};

(A4) {w ∈ Ξ : g∗(y, w) > 0} ≠ ∅ for all y ∈ X^∞_γ \ {0};

(A5) If γ < 0, then g(0, z) ≤ 0 for all z ∈ Ξ.

Assumption (A1) requires the constraint function to exhibit an asymptotic homogeneity, characterized by the limit function g∗. When the decision and random vectors scale by factors u^γ and u respectively, the constraint function scales by u^ρ, ensuring the limit remains nondegenerate. The sign of γ encodes the problem structure, consistent with Definition 2.5. A positive γ indicates that decisions grow with uncertainty, while a negative γ indicates that decisions shrink as uncertainty grows.

Assumption (A2) ensures that the asymptotic cone X^∞_γ is non-trivial. Assumption (A3) ensures that asymptotically negligible uncertainty does not trigger constraint violations; in other words, the system remains safe when uncertainty vanishes. Conversely, Assumption (A4) prevents triviality by guaranteeing that every non-zero decision direction carries inherent risk in the asymptotic limit. Notably, when γ < 0, Assumption (A2) and the closedness of the feasible set X imply that 0 ∈ X. This guarantees that g(0, z) is well-defined, allowing Assumption (A5) to further ensure that the origin is feasible for all uncertainty realizations.

These conditions, while technical, encompass a broad class of practical constraints and exclude pathological cases. Verification procedures and examples are provided in Section 4. For now, we simply note that the scaling exponents (γ, ρ) directly govern the decision-scaled SA that we introduce in the next section. Their uniqueness is therefore essential to ensure that the method is well-defined and yields unambiguous sample complexity bounds. The following result establishes this uniqueness.

Proposition 2.7 (Uniqueness of Scaling). Under Assumptions (A1) to (A4), the pair (γ, ρ) is unique.
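As a concrete instance of these conditions (our own toy example, not one of the paper's benchmarks), take the bilinear constraint g(x, z) = xᵀz − 1 with X = R^n and Ξ = R^m. Choosing γ = −1 and ρ = 0, the canonical sequences x_u = y/u and z_u = uw give g(x_u, z_u) = yᵀw − 1 for every u, so g∗(y, w) = yᵀw − 1. Then (A3) holds since g∗(y, 0) = −1 < 0, (A4) holds since any y ≠ 0 admits a w with yᵀw > 1, and (A5) holds since g(0, z) = −1 ≤ 0. A quick numerical check:

```python
import numpy as np

def g(x, z):
    # Bilinear toy constraint g(x, z) = x^T z - 1 (illustrative; γ = -1, ρ = 0).
    return float(np.dot(x, z)) - 1.0

y = np.array([1.0, -0.5])
w = np.array([0.3, 0.2])
g_star = float(np.dot(y, w)) - 1.0          # limit function g*(y, w) = y^T w - 1

for u in (1e1, 1e3, 1e6):
    # x_u = y / u satisfies x_u / u^γ -> y for γ = -1; z_u = u * w satisfies z_u / u -> w.
    val = g(y / u, u * w)                   # ρ = 0, so no normalization by u^ρ is needed
    print(u, val, g_star)
```

Here the convergence in (A1) is exact at every u because the two scalings cancel; for general algebraic constraints the lower-order terms vanish only in the limit.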
3 Decision-Scaled SA

3.1 Overview of Classical SA

Let {z^(j)}_{j=1}^N denote N independent identically distributed (i.i.d.) samples, or scenarios, drawn from P_ξ. The SA approximates (CCP_ε) by solving the following so-called scenario problem:

    minimize_{x ∈ X}  c(x)
    subject to  g(x, z^(j)) ≤ 0,  j = 1, ..., N.    (SP_N)

This problem is convex since g(x, z) is convex in x for every fixed z.

Definition 3.1. For a given x ∈ X, the violation probability is defined as V(x) := P_ξ({z : g(x, z) > 0}).

Consequently, the condition V(x) ≤ ε ensures that x satisfies the feasibility requirement of (CCP_ε). Let x∗_N denote an optimal solution of the scenario problem (SP_N). The central result of the SA is to establish the sample requirement that ensures feasibility with respect to the original chance constraint.

Theorem 3.2 ([16], Theorem 1). Given a risk tolerance level ε ∈ (0, 1) and a confidence parameter β ∈ (0, 1), choose

    N ≥ N(ε, β) := (2/ε)(log(1/β) + n).    (8)

If the scenario problem (SP_N) admits an optimal solution x∗_N, then P^N_ξ(V(x∗_N) ≤ ε) ≥ 1 − β, where P^N_ξ is the N-fold product probability distribution.

Theorem 3.2 establishes that, with probability at least 1 − β over sample realizations, the optimal solution x∗_N satisfies the feasibility requirement for (CCP_ε). Notably, this guarantee is a priori: it holds independently of the specific sample realizations of the uncertainty, and it is distribution-free, maintaining validity regardless of the underlying probability distribution. However, this generality comes at a computational cost. In particular, for applications where ε is exceptionally small, typically 10⁻³ to 10⁻⁵, the required sample size scales as 1/ε. For instance, achieving ε = 10⁻³ with 99% confidence (β = 0.01) requires N ≥ 11,211 samples, even with only a single decision variable (n = 1).
Since each sample adds a new constraint in (SP_N), the resulting optimization problem explodes in size and becomes intractable.

3.2 Decision-Scaled SA

The decision-scaled SA addresses the computational intractability of classical SA in the rare-event regime. Throughout this section, we assume that the distribution P_ξ satisfies Assumption 2.3 with tail index α > 0, and the constraint function g satisfies Assumption 2.6 with parameters (γ, ρ). Our approach is parameterized by a scaling hyperparameter s ≥ 1, and is formulated as follows:

    minimize_{x ∈ X}  c(x)
    subject to  g(s^{−γ} x, z^(j)) ≤ 0,  j = 1, ..., N,    (SSP_{N,s})

where {z^(j)}_{j=1}^N are i.i.d. samples drawn from P_ξ. Here, we choose

    N ≥ N(ε^{1/s^α}, β) = (2/ε^{1/s^α})(log(1/β) + n).    (9)

In contrast to existing methods [9, 43] that incur the computational overhead of constructing IS distributions, our approach is easy to implement as it relies exclusively on samples from the original distribution P_ξ.

When s = 1, problem (SSP_{N,s}) reduces to classical SA. For s > 1, the key insight is that scaling the decision vectors by s^{−γ} tightens each constraint, effectively restricting the feasible region. Intuitively, this conservatism allows for fewer samples while maintaining feasibility guarantees. The following theorem formalizes this intuition.

Theorem 3.3. Fix any s ≥ 1 and β ∈ (0, 1). Let {x_ε} be a sequence of optimal solutions to (SSP_{N,s}). Then,

    P^∞_ξ( lim inf_{ε→0} log V(x_ε)/log ε ≥ 1 ) ≥ 1 − β,    (10)

where P^∞_ξ is the infinite product probability distribution.

The probability statement in Theorem 3.3 is defined on the space of infinite sequences of i.i.d. samples drawn from P_ξ. This guarantees that, with probability at least 1 − β, the stated asymptotic behavior (10) holds for any realization of the infinite sample path.
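Continuing the one-dimensional toy instance g(x, z) = xz − 1 (our own illustration, for which γ = −1), the decision-scaled problem replaces each constraint by g(s^{−γ}x, z^(j)) = s·x·z^(j) − 1 ≤ 0 and draws only N(ε^{1/s^α}, β) samples from the original distribution:

```python
import math
import numpy as np

def bound(eps, beta, n):
    # Classical scenario bound (2/eps) * (log(1/beta) + n), rounded up.
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + n))

eps, beta, n = 1e-3, 0.01, 1
s, alpha = 1.2, 2.0                        # α = 2, e.g., for Gaussian-type tails
N_scaled = bound(eps ** (1.0 / s ** alpha), beta, n)

rng = np.random.default_rng(3)
z = np.abs(rng.standard_normal(N_scaled))  # samples from the ORIGINAL distribution only
# Toy instance with γ = -1: maximize x subject to s * x * z^(j) - 1 <= 0; the
# closed-form optimum shrinks by the factor s relative to the unscaled problem.
x_scaled = 1.0 / (s * z.max())
print(f"N = {N_scaled}, decision-scaled x = {x_scaled:.4f}")
```

The scaling by s^{−γ} = s is exactly the conservatism that Theorem 3.3 trades against the smaller sample size.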
While problem (SSP_{N,s}) may become infeasible for certain sample realizations, Theorem 3.3 implicitly considers only cases where {x_ε} is well-defined for sufficiently small ε. Otherwise, as with the classical SA, no conclusion can be provided.

To build intuition, note that since ε ∈ (0, 1) we have log ε < 0, so the condition log V(x_ε)/log ε ≥ 1 is equivalent to V(x_ε) ≤ ε. Thus, the theorem states that the violation probability of the decision-scaled solution x_ε eventually decays at least as fast as the prescribed risk level ε. In other words, our method produces solutions that are asymptotically feasible for (CCP_ε) as the risk tolerance shrinks to zero. More precisely, the limit inferior in (10) implies that for any δ > 0, there exists ε₀ such that log V(x_ε)/log ε ≥ 1 − δ for all ε ∈ (0, ε₀). Equivalently,

    V(x_ε) ≤ ε^{1−δ},  ∀ε < ε₀.    (11)

While the bound in (11) is slightly weaker than V(x_ε) ≤ ε, the fact that δ can be made arbitrarily close to 0 makes this difference negligible in practice.

The proof relies on establishing an LDP for the violation probability V(x_ε). The key step involves deriving a rate function that captures the exponential decay of this probability, determined by the tail index α of P_ξ and the asymptotic structure of g. The detailed theoretical development is provided in Section 6.4.

An immediate consequence of Theorem 3.3 is the following polynomial reduction in sample complexity:

Corollary 3.4. Compared to the classical SA, the decision-scaled SA achieves a polynomial reduction in sample complexity from O(ε⁻¹) to O(ε^{−1/s^α}) for obtaining a feasible solution to (CCP_ε) when ε is sufficiently small.

This corollary follows directly from the asymptotic relationship:

    lim_{ε→0} log N(ε^{1/s^α}, β) / log N(ε, β) = 1/s^α.    (12)

The reduction in sample complexity translates directly to computational savings.
In the SA framework, the number of samples equals the number of constraints in the optimization problem, thus a reduction in sample size by a factor of s^α yields a proportional computational time reduction. Consider the example from Section 3.1 (ε = 10⁻³, β = 0.01, n = 1). As noted previously, the classical SA requires N ≥ 11,211 samples. In contrast, our decision-scaled approach with s = 1.2 for α = 2 requires only N ≥ 1,359 samples, representing an eight-fold reduction. Moreover, this efficiency gain is particularly valuable in applications where samples are limited or computationally expensive to generate, such as sampling from high-dimensional distributions.

While Theorem 3.3 holds for all s ≥ 1, the choice of s introduces a trade-off. Our empirical findings in Section 5 show that increasing s yields more conservative solutions with larger feasibility margins, although a detailed analysis of this observation is beyond the scope of this paper and will be investigated in future work. This trade-off between computational efficiency and solution quality provides flexibility. Practitioners can select larger scaling factors s for real-time applications or limited computational resources. This tunability enables solving previously intractable problems by adjusting s to match the available computational budget.

4 Verification of Regularity Conditions and Examples

The aforementioned theoretical guarantees rely on Assumption 2.6, which requires the constraint function to exhibit a certain asymptotic homogeneity. In this section, we develop a systematic and practical procedure to verify this assumption. We first focus on the broad class of algebraic constraint functions; that is, functions expressible as finite sums of monomial terms in the decision and uncertainty vectors, which encompasses polynomials, posynomials, and signomials commonly arising in practice.
For this class, we show that verifying the required assumption is relatively straightforward. We then extend the procedure to handle non-algebraic constraint functions as well as joint chance constraints. Throughout, we illustrate the framework through concrete examples.

4.1 Verification of Assumption 2.6 for Algebraic Functions

Let a = (a_1, ..., a_n) ∈ R^n and b = (b_1, ..., b_m) ∈ R^m denote real exponents. We adopt the standard multi-index notation

    x^a := ∏_{i=1}^n x_i^{a_i}  and  z^b := ∏_{i=1}^m z_i^{b_i}.    (13)

We consider constraint functions of the form

    g(x, z) = Σ_{(a,b)∈J} C_{a,b} x^a z^b,    (14)

where J := {(a, b) ∈ R^n × R^m : C_{a,b} ≠ 0} is a finite index set with coefficients C_{a,b} ∈ R. Here, the function g in (14) is defined on a domain X × Ξ such that every term x^a z^b is well-defined and real-valued for all (x, z) ∈ X × Ξ. This class encompasses polynomials, posynomials, and signomials.

In Assumption 2.6, verifying condition (A1) is the most challenging. The remaining conditions (A2) to (A5) are typically straightforward to check once the scaling exponents (γ, ρ) and the limit function g* are identified. To make (A1) more accessible, we decompose its verification into two steps:

(i) Identify the scaling exponents (γ, ρ) and the limit function g*;
(ii) Verify that the convergence in (A1) holds.

Part (i). The key observation is that condition (A1) implies a specific asymptotic structure of the constraint function g. To systematically identify the scaling exponents (γ, ρ) and the limit function g*, we examine the canonical choice of sequences x_u = u^γ y and z_u = u w for (y, w) ∈ X^∞_γ × Ξ. This heuristic reduces limit (7) to the following form:

    lim_{u→∞} g(u^γ y, u w) / u^ρ = g*(y, w).
(15)

Substituting x = u^γ y and z = u w in (14) yields

    g(u^γ y, u w) = Σ_{(a,b)∈J} u^{p_{a,b}(γ)} C_{a,b} y^a w^b,    (16)

where the exponent p_{a,b}(γ) is defined as

    p_{a,b}(γ) := γ 1ᵀa + 1ᵀb.    (17)

For the limit function g* to be well-defined (i.e., finite) and non-trivial (i.e., not identically zero), the scaling exponent ρ must be non-negative and equal to the maximum scaling rate among all terms, i.e., ρ = max_{(a,b)∈J} p_{a,b}(γ). The limit function g* then consists of those terms achieving this maximum:

    g*(y, w) = Σ_{(a,b)∈J : p_{a,b}(γ)=ρ} C_{a,b} y^a w^b.    (18)

The value of γ is determined by requiring that g* satisfies conditions (A3) and (A4). Specifically, γ must be chosen such that at least two terms achieve the maximum scaling rate, which is necessary to simultaneously satisfy conditions (A3) and (A4).³ Algorithm 1 systematically identifies the unique pair (γ, ρ) and the corresponding limit function g*. The first part of the algorithm enumerates candidate values of γ that equalize the scaling rates of distinct terms. It then terminates on line 20 immediately after finding the first candidate that satisfies the conditions of Assumption 2.6, since uniqueness is guaranteed by Proposition 2.7. If no candidate satisfies all requirements, the algorithm returns None, indicating that the constraint function g does not admit the structure required by Assumption 2.6; see Remark 4.1 for further discussion.

³ If a single term dominates, then g* becomes a monomial. In this case, if the exponent of w is non-zero, then g*(y, 0) = 0, which violates (A3); otherwise, the exponent of w is zero and g* becomes independent of w, failing to simultaneously satisfy both the strictly negative condition of (A3) and the strictly positive condition of (A4).

Algorithm 1 Identification of (γ, ρ) and g* for algebraic functions
Require: The finite index set J and coefficients C_{a,b} defining g as in (14).
Ensure: (γ, ρ) and g*, or None.
 1: Initialize a candidate set of γ, Γ ← ∅.
 2: for all {(a, b), (a′, b′)} ⊂ J do
 3:   if 1ᵀa ≠ 1ᵀa′ then
 4:     Solve for γ ← 1ᵀ(b′ − b) / 1ᵀ(a − a′).
 5:     Γ ← Γ ∪ {γ}
 6:   end if
 7: end for
 8: for all γ ∈ Γ do
 9:   if γ = 0 then
10:     continue
11:   end if
12:   ρ ← max_{(a,b)∈J} {p_{a,b}(γ)}
13:   if ρ < 0 then
14:     continue
15:   end if
16:   J* ← {(a, b) ∈ J : p_{a,b}(γ) = ρ}
17:   if |J*| ≥ 2 then
18:     g*(y, w) ← Σ_{(a,b)∈J*} C_{a,b} y^a w^b.
19:     if g*: X^∞_γ × Ξ → R is continuous; X^∞_γ satisfies (A2); g* satisfies (A3) and (A4); g satisfies (A5) then
20:       return (γ, ρ), g*.
21:     end if
22:   end if
23: end for
24: return None.

Remark 4.1. Not all constraint functions of the form (14) satisfy Assumption 2.6. This assumption characterizes problems that admit a well-defined asymptotic structure in which a unique scaling relationship governs the dominant terms. Consider g(x, z) = xz − x + z − 1. If γ > 0, then only the term xz dominates, with scaling exponent γ + 1, whereas if γ < 0, then only the term z dominates, with scaling exponent 1. Since no nonzero value of γ yields a balance of two or more terms, Algorithm 1 returns None, indicating that Assumption 2.6 is not satisfied by this constraint function.

Remark 4.2. The sequence x_u = u^γ y implicitly employed in Algorithm 1 need not lie within the feasible set X for all u. However, based on the observation that the scaling exponents (γ, ρ) and the limit function g* characterize the asymptotic structure of the function g itself, Algorithm 1 serves as a heuristic to identify these parameters. Its validity with respect to all other feasible sequences is discussed below.

Part (ii).
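Before turning to the convergence question, note that the candidate search of Algorithm 1 is straightforward to implement. The Python sketch below represents g by its list of monomial terms (a, b, C) and returns (γ, ρ) together with the dominant index set J*; the structural checks of line 19 of the algorithm (continuity of g* and conditions (A2)-(A5)) are problem-specific and are left to manual verification. The two test cases use the bilinear function g(x, z) = xz − 1 and the counterexample of Remark 4.1.

```python
from itertools import combinations

def identify_scaling(terms, tol=1e-9):
    """Candidate search of Algorithm 1 for g(x, z) = sum C * x**a * z**b.

    `terms` is a list of triples (a, b, C) with a, b tuples of exponents.
    Returns (gamma, rho, dominant) or None; the structural checks of
    line 19 of Algorithm 1 are omitted and must be verified separately.
    """
    candidates = set()
    for (a, b, _), (a2, b2, _) in combinations(terms, 2):
        if sum(a) != sum(a2):                         # need 1'a != 1'a'
            candidates.add((sum(b2) - sum(b)) / (sum(a) - sum(a2)))
    for gamma in sorted(candidates):
        if abs(gamma) < tol:                          # gamma = 0 is excluded
            continue
        rates = [gamma * sum(a) + sum(b) for (a, b, _) in terms]
        rho = max(rates)
        if rho < 0:
            continue
        dominant = [t for t, p in zip(terms, rates) if abs(p - rho) < tol]
        if len(dominant) >= 2:                        # two or more terms balance
            return gamma, rho, dominant
    return None

# g(x, z) = x*z - 1: the two terms balance at (gamma, rho) = (-1, 0).
print(identify_scaling([((1,), (1,), 1.0), ((0,), (0,), -1.0)]))

# g(x, z) = x*z - x + z - 1 (Remark 4.1): no nonzero gamma balances two terms.
print(identify_scaling([((1,), (1,), 1.0), ((1,), (0,), -1.0),
                        ((0,), (1,), 1.0), ((0,), (0,), -1.0)]))   # None
```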
Algorithm 1 identifies the scaling exponents (γ, ρ) and the continuous limit function g* by verifying that the limit in (7) holds for the canonical sequences x_u = u^γ y and z_u = u w for every (y, w) ∈ X^∞_γ × Ξ. However, condition (A1) requires that the convergence in (7) holds for every pair of sequences {x_u} ⊂ X and {z_u} ⊂ Ξ satisfying (6), not just the canonical choice. While verifying this requirement for all such sequences is generally impractical, we demonstrate that for the class of algebraic functions defined in (14), this strong convergence property is inherent. The following proposition establishes that the successful identification of (γ, ρ, g*) via Algorithm 1 is sufficient to guarantee the convergence required by (A1).

Proposition 4.3. Suppose the constraint function g(x, z) is of the form (14). If Algorithm 1 returns a tuple (γ, ρ, g*), then Assumption 2.6 is satisfied.

Illustrative Examples. Below, we illustrate that Assumption 2.6 encompasses representative instances from diverse classes of constraint functions common in CCO. These classes include linear, quadratic [36], polynomial [33, 35], and posynomial constraints found in geometric programming [29, 37].

Example 4.4 (Linear Constraint). Consider the function g(x, z) = −aᵀx + bᵀz + ℓ defined on the feasible region X = R^n_{≥0} and support set Ξ = R^m_{≥0}. With strictly positive coefficient vectors a ∈ R^n_{>0} and b ∈ R^m_{>0}, this function satisfies Assumption 2.6 with exponents (γ, ρ) = (1, 1), yielding the limit function g*(y, w) = −aᵀy + bᵀw. This structure arises in resource allocation [38] and inventory management [63] problems.

Example 4.5 (Bilinear Constraint). Let g(x, z) = xᵀA z − ℓ, where A ∈ R^{n×m} is a non-zero matrix and ℓ > 0 is a scalar coefficient. We define the support set as Ξ = R^m and the feasible set X ⊂ R^n as a compact set containing the origin.
This function satisfies Assumption 2.6 with scaling exponents (γ, ρ) = (−1, 0) and the limit function g*(y, w) = yᵀA w − ℓ. This formulation frequently appears in Value-at-Risk constraints [9] and robust classification [60].

Example 4.6 (Quadratic Constraint). Consider the quadratic form g(x, z) = −xᵀA x + zᵀB z + ℓ defined on X = R^n_{≥0} and Ξ = R^m_{≥0}. Assuming A ∈ R^{n×n} is positive definite and B ∈ R^{m×m} is positive semidefinite, this function satisfies Assumption 2.6 with (γ, ρ) = (1, 2). The corresponding limit function is g*(y, w) = −yᵀA y + wᵀB w. Applications include portfolio optimization [45] and power system planning [47].

Example 4.7 (High-order Polynomial Constraint). Consider a constraint with high-order nonlinearities defined by g(x, z) = Σ_{i=1}^n z_i² x_i³ − ℓ, with ℓ > 0. Let X ⊂ R^n_{≥0} be a compact set containing the origin and Ξ = R^n. This function satisfies Assumption 2.6 with exponents (γ, ρ) = (−2/3, 0), resulting in the limit function g*(y, w) = Σ_{i=1}^n w_i² y_i³ − ℓ.

While the aforementioned examples are often directly amenable to optimization, the functions may sometimes be non-convex. The following result provides a way to address this issue while preserving the scaling structure.

Proposition 4.8 (Invariance to multiplication). Suppose there exists a function h : X × Ξ → R_{>0} that satisfies condition (A1) with (γ, ρ_h) and a strictly positive limit function h*(y, w). Then, for any function f : X × Ξ → R satisfying Assumption 2.6 with (γ, ρ) and limit function f*(y, w), the product

    g(x, z) := h(x, z) f(x, z)    (19)

satisfies Assumption 2.6 with (γ, ρ + ρ_h) and limit function g*(y, w) := h*(y, w) f*(y, w).

Proposition 4.8 implies that the parameter γ is invariant under multiplication by any strictly positive function h(x, z).
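A minimal numerical check of this invariance, using a hypothetical quadratic f (in the spirit of Example 4.6, with A = B = I) and h(x, z) = 1 + ‖x‖², so that ρ_h = 2 and h*(y, w) = ‖y‖², which is strictly positive for nonzero y:

```python
import numpy as np

y = np.array([0.6, 0.8])            # ||y|| = 1
w = np.array([1.0, 2.0])
ell = 5.0

f = lambda x, z: -x @ x + z @ z + ell   # (gamma, rho)   = (1, 2), f* = -||y||^2 + ||w||^2
h = lambda x, z: 1.0 + x @ x            # (gamma, rho_h) = (1, 2), h* = ||y||^2
g = lambda x, z: h(x, z) * f(x, z)      # product: (gamma, rho + rho_h) = (1, 4)

fstar = -y @ y + w @ w
gstar = (y @ y) * fstar                 # h*(y, w) * f*(y, w), as Proposition 4.8 predicts

# g(u^gamma * y, u * w) / u^(rho + rho_h)  ->  g*(y, w) along the canonical sequences
errs = [abs(g(u * y, u * w) / u**4 - gstar) for u in (1e3, 1e4)]
assert errs[1] < errs[0] and errs[1] < 1e-6
```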
Since the decision-scaled problem (SSP_{N,s}) depends exclusively on γ, its formulation remains identical under such positive multiplicative transformations of the constraint function. The practical implication of this result is significant. Since h is strictly positive, the sets {z : f(x, z) > 0} and {z : g(x, z) > 0} coincide. Therefore, we can work with g instead of f by choosing an appropriate h that ensures convexity or other desired properties. Notably, h need only satisfy condition (A1) of Assumption 2.6, not the entire assumption, which provides additional flexibility in implementation.

Example 4.9 (Illustration of Proposition 4.8). Consider the chance constraint P_ξ({z : f(x, z) ≤ 0}) ≥ 1 − ε, where the constraint function

    f(x, z) = (xᵀz − 1) / (1 + ‖x‖²)    (20)

is defined on X = B_R for some R > 0 and Ξ = R^n. This function satisfies Assumption 2.6 with (γ, ρ) = (−1, 0) and limit function f*(y, w) = yᵀw − 1. However, the mapping x ↦ f(x, z) is nonconvex, violating our preliminary conditions and posing computational challenges. To address this, multiply by

    h(x, z) = 1 + ‖x‖²,    (21)

which is strictly positive on X and satisfies condition (A1) with (γ, ρ_h) = (−1, 0) and h*(y, w) = 1. Note that since only (A1) is required, this choice is not unique and can be adjusted as needed. Then,

    g(x, z) = f(x, z) h(x, z) = xᵀz − 1    (22)

satisfies Assumption 2.6 with (γ, ρ + ρ_h) = (−1, 0) and limit function g* = f*. Crucially, the multiplication induces convexity of g in x while preserving γ.

4.2 Extension via Constraint Decomposition

We now extend our framework to constraint functions that do not strictly adhere to the algebraic form in (14), such as those involving trigonometric, logarithmic, and exponential terms.
This is achieved via a decomposition that separates a dominant algebraic component from an asymptotically negligible residual. Specifically, consider a function decomposed as

    g(x, z) = g_0(x, z) + h(x, z).    (23)

Here, g_0 represents a dominant term satisfying the algebraic structure in (14). We assume that applying Algorithm 1 to g_0 yields exponents (γ, ρ) and limit function g*_0, thereby satisfying Assumption 2.6. The term h represents a residual component that is asymptotically negligible relative to the scaling of g_0. Formally, we require that for any y ∈ X^∞_γ and w ∈ Ξ, and for any sequences {x_u} ⊂ X and {z_u} ⊂ Ξ such that lim_{u→∞} x_u/u^γ = y and lim_{u→∞} z_u/u = w,

    lim_{u→∞} h(x_u, z_u) / u^ρ = 0.    (24)

Under this condition, the limit of the original function g is determined solely by the dominant term:

    lim_{u→∞} g(x_u, z_u)/u^ρ = lim_{u→∞} g_0(x_u, z_u)/u^ρ + lim_{u→∞} h(x_u, z_u)/u^ρ = g*_0(y, w).    (25)

Consequently, g satisfies Assumption 2.6 with the same parameters (γ, ρ) and limit function g* = g*_0. This implies that the applicability of our framework relies on the constraint's asymptotic behavior rather than its strict algebraic form. Thus, for non-algebraic functions, we can verify applicability by examining only the dominant algebraic component g_0. We illustrate this through the following examples.

Example 4.10. Consider the function defined on X = R^n_{≥0} and Ξ = R^n_{≥0}:

    g(x, z) = log( Σ_{i=1}^n exp(−x_i + z_i) ),    (26)

where x = (x_1, ..., x_n)ᵀ and z = (z_1, ..., z_n)ᵀ. It can be decomposed into

    g(x, z) = max_{i=1,...,n}(−x_i + z_i) + log( Σ_{i=1}^n exp(−x_i + z_i − max_{i=1,...,n}(−x_i + z_i)) ),    (27)

where the first term is the dominant component g_0(x, z) and the second is the residual h(x, z). The dominant term g_0 satisfies Assumption 2.6 with (γ, ρ) = (1, 1) and limit function g*_0(y, w) = max_{i=1,...,n}(−y_i + w_i).
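For this decomposition, the residual h = g − g_0 always lies in [0, log n], a bound that a few arbitrary test points confirm numerically (a minimal sketch with hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
for _ in range(5):
    x, z = rng.uniform(0, 50, n), rng.uniform(0, 50, n)
    g = np.log(np.sum(np.exp(-x + z)))   # log-sum-exp constraint (26)
    g0 = np.max(-x + z)                  # dominant algebraic part g_0
    h = g - g0                           # residual of the decomposition (27)
    assert -1e-12 <= h <= np.log(n) + 1e-12
```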
Since 0 ≤ h(x, z) ≤ log n, the residual h(x, z) is asymptotically negligible. Therefore, g satisfies Assumption 2.6 with (γ, ρ) = (1, 1) and g* = g*_0. Constraint functions of this type, known as log-sum-exp functions, appear in control [19] and structural system reliability [2].

Example 4.11. Consider a constraint function involving trigonometric, logarithmic, and exponential terms on X = R^n_{≥0} and Ξ = R^m_{≥0}:

    g(x, z) = −aᵀx + bᵀz + sin(zᵀz) + log(1 + exp(−‖z‖)),    (28)

where a ∈ R^n_{>0} and b ∈ R^m_{>0} are coefficient vectors. This function admits the following decomposition:

    g(x, z) = [−aᵀx + bᵀz] + [sin(zᵀz) + log(1 + exp(−‖z‖))],    (29)

where g_0(x, z) = −aᵀx + bᵀz is the dominant component and h(x, z) = sin(zᵀz) + log(1 + exp(−‖z‖)) is the residual. In this case, g_0 satisfies Assumption 2.6 with (γ, ρ) = (1, 1) and limit function g*_0(y, w) = −aᵀy + bᵀw, while h is asymptotically negligible. Consequently, g satisfies Assumption 2.6 with (γ, ρ) = (1, 1) and g* = g*_0. Practical applications of such constraints include structural topology optimization [21] and trajectory planning [61].

4.3 Extension to Joint Chance Constraints

Our framework extends naturally to joint chance constraints of the form

    P_ξ( {z : g_i(x, z) ≤ 0, ∀ i = 1, ..., K} ) ≥ 1 − ε,    (30)

where K ∈ N. This can be equivalently reformulated as a single chance constraint using the function

    g(x, z) := max_{i=1,...,K} g_i(x, z).    (31)

The following proposition establishes that if the individual constraints g_i, i = 1, ..., K, satisfy Assumption 2.6, then the constraint function g does as well.

Proposition 4.12. Suppose that each function g_i, i = 1, ..., K, satisfies Assumption 2.6 with the same scaling exponents (γ, ρ) and limit function g*_i.
Then, the function

    g(x, z) = max_{i=1,...,K} g_i(x, z)    (32)

satisfies Assumption 2.6 with parameters (γ, ρ) and the limit function

    g*(y, w) := max_{i=1,...,K} g*_i(y, w).    (33)

Example 4.13 (Linear Joint Chance Constraint). Consider a joint chance constraint of the form P_ξ({z : A x ≥ B z}) ≥ 1 − ε, where A ∈ R^{K×n} and B ∈ R^{K×m} are coefficient matrices. Let the feasible region be X = R^n_{≥0} and the support be Ξ = R^m_{≥0}. The condition A x ≥ B z requires satisfying K linear inequalities simultaneously: (B z − A x)_i ≤ 0 for all i = 1, ..., K. We can reformulate the problem using the single constraint function g(x, z) := max_{i=1,...,K} g_i(x, z), where each component is defined as g_i(x, z) = −a_iᵀx + b_iᵀz, with a_iᵀ and b_iᵀ denoting the i-th rows of A and B, respectively. Since each g_i satisfies Assumption 2.6 with exponents (γ, ρ) = (1, 1) and limit function g*_i(y, w) = −a_iᵀy + b_iᵀw, Proposition 4.12 guarantees that g also satisfies Assumption 2.6 with the same exponents and limit function g*(y, w) = max_{i=1,...,K}(−a_iᵀy + b_iᵀw). This formulation represents the canonical form of linear joint chance constraints in CCO [54].

5 Numerical Experiments

In this section, we numerically evaluate the performance of our proposed decision-scaled SA against the classical SA. We demonstrate the efficacy of our method on three benchmark problems from the literature: portfolio optimization, reliability-based short column design, and norm optimization.

Throughout all experiments, we maintain a confidence level of β = 0.01 and evaluate the decision-scaling approach at three rare-event risk tolerance levels: ε ∈ {10^{−3}, 10^{−4}, 10^{−5}}. To ensure a fair comparison across the three benchmarks, we select the scaling parameter s such that s^α ∈ {1.1, 1.2}, where α is the tail index of the respective problem's underlying distribution. Since our approach reduces the sample complexity from O(ε^{−1}) to O(ε^{−1/s^α}), fixing s^α guarantees an identical polynomial reduction in required sample sizes across all three experiments. For each configuration, the deterministic sample size requirement is presented via line graphs. To ensure statistical validity, we perform 100 independent trials per configuration to account for sampling variability. The experimental results are visualized using line plots, where markers denote the median value across these trials and error bars represent the interquartile range (the 25th and 75th percentiles). Out-of-sample validation is conducted via Monte Carlo simulation using 10^4/ε independent samples to provide sufficient statistical precision for estimating violation probabilities at each risk tolerance level.

The numerical experiments were implemented in Julia 1.11.2 using the JuMP 1.30.0 modeling package. The portfolio optimization problems were solved using Gurobi 13.0.0, while the short column and norm optimization problems were solved using Mosek 11.0.27. A computational time limit of one hour was imposed on each run. All computations were executed on high-performance computing nodes equipped with 2.90 GHz Intel Xeon Gold 6226R CPUs and 32 GB of RAM. The source code and data used to reproduce the numerical experiments are available at https://github.com/Subramanyam-Lab/Decision-Scaled-Scenario-Approach.

5.1 Portfolio Optimization

Our first benchmark is a portfolio optimization problem studied by Blanchet et al. [9] and Xie and Ahmed [62]. The goal is to allocate capital across n different assets to maximize the total return while managing downside risk. Let x = (x_1, ..., x_n)ᵀ ∈ R^n_{≥0} represent the amount of capital invested in each asset. For each asset i = 1, ..., n, we associate an expected return per dollar invested, μ_i, and a non-negative random loss per dollar invested, ξ_i. Let μ = (μ_1, ..., μ_n)ᵀ and ξ = (ξ_1, ..., ξ_n)ᵀ be the corresponding vectors of expected returns and random losses. The total expected return of the portfolio is thus μᵀx, and the total random loss is xᵀξ.

We aim to maximize the portfolio's expected return subject to a Value-at-Risk constraint. This constraint requires that the probability of the total portfolio loss xᵀξ exceeding a predefined threshold η > 0 be no greater than a small risk tolerance level ε. This leads to the following CCO problem:

    maximize_{x ∈ R^n_{≥0}}  μᵀx
    subject to  P_ξ( {z : xᵀz ≤ η} ) ≥ 1 − ε.    (34)

The problem's constraint function is g(x, z) := xᵀz − η. This function satisfies Assumption 2.6 with scaling exponents γ = −1 and ρ = 0, which yields the limit function g*(y, w) = yᵀw − η. The corresponding decision-scaled scenario problem (SSP_{N,s}) for (34) with parameter s ≥ 1 is

    maximize_{x ∈ R^n_{≥0}}  μᵀx
    subject to  xᵀz^(j) ≤ η/s,  ∀ j = 1, ..., N,    (35)

where the number of samples is N = N(ε^{1/s^α}, β).

Test setup. We construct a test instance with n = 20 assets as follows.

• The components of the expected return vector, μ_i, are drawn independently from a uniform distribution, μ_i ∼ Uniform[1, 3].
• The components of the non-negative random loss ξ are assumed to follow independent Weibull distributions, aligning with standard practice in financial risk management [41]. This choice of distribution satisfies Assumption 2.3.
  – The shape parameter is set to k_i = 0.9 for all i = 1, ..., n, which corresponds to a tail index of α = 0.9.
  – The scale parameters, σ_i, are drawn independently from a uniform distribution, σ_i ∼ Uniform[2, 10].
• The total loss threshold is set to η = 1000.
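The decision-scaled scenario problem (35) and its out-of-sample validation can be sketched in a few lines. The following Python illustration (not the paper's Julia/Gurobi implementation) samples Weibull losses, builds a simple scenario-feasible single-asset allocation rather than solving the LP to optimality, and then estimates the violation probability by Monte Carlo; the sample-size formula N(ε, β) = ⌈(2/ε)(ln(1/β) + n)⌉ is an assumed form of the classical bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, alpha = 20, 1000.0, 0.9
eps, beta = 1e-3, 0.01
s = 1.2 ** (1.0 / alpha)            # chosen so that s**alpha = 1.2

# Decision-scaled sample size N(eps**(1/s**alpha), beta); assumed bound.
N = int(np.ceil(2.0 / eps ** (1.0 / s**alpha) * (np.log(1.0 / beta) + n)))

mu = rng.uniform(1, 3, n)
sigma = rng.uniform(2, 10, n)
Z = sigma * rng.weibull(0.9, size=(N, n))     # N scenarios of the loss vector

# Scenario-feasible single-asset allocation (a heuristic, not the LP optimum):
# invest everything in one asset at the largest level allowed by x' z_j <= eta/s.
i = int(np.argmax(mu / Z.max(axis=0)))
x = np.zeros(n)
x[i] = (eta / s) / Z[:, i].max()
assert np.all(Z @ x <= eta / s + 1e-9)        # feasible for all scenarios

# Out-of-sample estimate of the violation probability P(x' xi > eta).
Z_test = sigma * rng.weibull(0.9, size=(200_000, n))
viol = np.mean(Z_test @ x > eta)
print(f"N = {N}, estimated violation probability = {viol:.1e}")
```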
Figure 1. Comparison of computational efficiency for the portfolio optimization problem: (left) required sample size; (right) CPU time.

Numerical results. Figure 1 demonstrates the computational efficiency of our proposed decision-scaled SA. The left panel illustrates that, for any given risk level ε, our approach requires substantially fewer samples than the classical method. This reduction in sample size directly translates to a decrease in computation time, as shown in the right panel, where the efficiency gain increases with the scaling factor s.

Figure 2 examines the quality of the obtained solutions. The left panel presents the estimated violation probability. All methods reliably produce solutions with estimated violation probabilities below the target level ε. However, our proposed methodology yields solutions that are slightly more conservative, and this tendency increases with the scaling factor s. As shown in the right panel, this increased conservatism results in a marginally lower objective value, representing the premium paid for computational efficiency.

Figure 2. Comparison of solution quality for the portfolio optimization problem: (left) violation probability; (right) objective value.

The scaling factor s serves as a tunable hyperparameter. It offers practitioners a clear trade-off, allowing them to strike a balance between computational efficiency and solution conservatism to best suit their specific requirements.
5.2 Reliability-based short column design

Next, we consider the short column design problem, a classic benchmark in reliability-based design [2], following the formulation of Tong et al. [57]. The objective is to determine the optimal dimensions of a short column with a rectangular cross-section. The decision vector is x = (x_w, x_h)ᵀ, where x_w and x_h are the width and height of the cross-section, respectively. The column is subjected to uncertain loads, represented by the random vector ξ = (ξ_M, ξ_F)ᵀ, where ξ_M ≥ 0 is the bending moment and ξ_F ≥ 0 is the axial force. The material's yield stress, C_Y ≥ 0, is a known constant. The design goal is to minimize the cross-sectional area x_w x_h while ensuring that the probability of material failure does not exceed a small tolerance ε. To ensure a physically realistic design, the dimensions are bounded below by L_w > 0 and L_h > 0, respectively. The resulting CCO problem is formulated as follows:

    minimize_x  x_w x_h
    subject to  P_ξ( {(z_M, z_F)ᵀ : 4 z_M / (C_Y x_w x_h²) + z_F² / (C_Y² x_w² x_h²) ≤ 1} ) ≥ 1 − ε,
                x_w ≥ L_w,  x_h ≥ L_h.    (36)

We apply the change of variables x_w = exp(x̃_w), x_h = exp(x̃_h), z_M = exp(z̃_M), and z_F = exp(z̃_F). Applying this transformation to the inequality inside the chance constraint yields:

    (4/C_Y) exp(z̃_M − x̃_w − 2x̃_h) + (1/C_Y²) exp(2z̃_F − 2x̃_w − 2x̃_h) ≤ 1.    (37)

Taking the logarithm on both sides and letting x̃ = (x̃_w, x̃_h)ᵀ and z̃ = (z̃_M, z̃_F)ᵀ, the corresponding constraint function can be defined as

    g(x̃, z̃) := log( (4/C_Y) exp(z̃_M − x̃_w − 2x̃_h) + (1/C_Y²) exp(2z̃_F − 2x̃_w − 2x̃_h) ).    (38)

Following Example 4.10, this constraint function g(x̃, z̃) satisfies Assumption 2.6 with scaling exponents (γ, ρ) = (1, 1) and the limit function g*(y, w) = max{w_M − y_w − 2y_h, 2w_F − 2y_w − 2y_h}.
Consequently, applying our decision-scaling method with parameter s ≥ 1 yields the scaled scenario problem. Introducing auxiliary variables u_M^(j) and u_F^(j) to ensure compatibility with standard exponential cone solvers, we solve the following optimization problem:

    minimize_{x̃, u_M^(j), u_F^(j)}  x̃_w + x̃_h
    subject to  exp( log(4/C_Y) + z̃_M^(j) − (x̃_w + 2x̃_h)/s ) ≤ u_M^(j),  ∀ j = 1, ..., N,
                exp( log(1/C_Y²) + 2z̃_F^(j) − 2(x̃_w + x̃_h)/s ) ≤ u_F^(j),  ∀ j = 1, ..., N,
                u_M^(j) + u_F^(j) ≤ 1,  ∀ j = 1, ..., N,
                x̃_w ≥ log L_w,  x̃_h ≥ log L_h,    (39)

where N = N(ε^{1/s^α}, β) as before.

Test setup. We construct a test instance using the following parameters:

• The deterministic parameters are set to C_Y = 5, L_w = 5, and L_h = 15.
• The uncertain load vector ξ is assumed to follow a multivariate lognormal distribution. Specifically, the log-transformed vector (log ξ_M, log ξ_F)ᵀ follows a multivariate normal distribution with mean vector μ = [6, 7.5]ᵀ and covariance matrix Σ = [0.4 0.2; 0.2 0.4].
• This ensures that the underlying uncertainty in our reformulated problem satisfies Assumption 2.3 with a tail index of α = 2.

Numerical Results. Figure 3 presents a comparison of the computational efficiency of the classical and decision-scaled SA. Consistent with the findings in Section 5.1, the decision-scaled method is more computationally efficient. The left panel shows that, for any given risk level ε, the scaled approaches require fewer samples than the classical method, with the required sample size decreasing as the scaling factor s increases. This reduction in sample complexity directly translates to a decrease in computation time, as illustrated in the right panel. Figure 4 evaluates the quality of the solutions obtained by each method.
Figure 3. Comparison of computational efficiency for the short column design problem: (left) required sample size; (right) CPU time.

Figure 4. Comparison of solution quality for the short column problem: (left) violation probability; (right) objective value.

The left panel shows that all three methods generate conservative solutions, with estimated violation probabilities falling below the prescribed tolerance ε across all test cases. The decision-scaled method yields solutions that are more conservative than the classical approach, and this conservatism increases with the scaling factor s. Consequently, as shown in the right panel, this increased robustness results in a higher objective value for the scaled methods. This marginal cost in the objective function is the trade-off for the reduced computation time.

5.3 Norm optimization

Our final numerical experiment examines the norm optimization problem subject to a joint chance constraint, adapted from the formulation of Hong et al. [31]. Let x ∈ R^n_{≥0} denote a decision vector, and let ξ ∈ R^{m×n} be a random matrix whose row vectors are ξ_i = (ξ_{i1}, ..., ξ_{in}) for i = 1, ..., m. The objective is to maximize the sum of the decision variables while ensuring that a set of m quadratic inequalities holds simultaneously with probability at least 1 − ε. The CCO problem is formulated as follows:

    maximize_{x ∈ R^n_{≥0}}  1ᵀx
    subject to  P_ξ( {z : Σ_{j=1}^n z_{ij}² x_j² ≤ 100, ∀ i = 1, ..., m} ) ≥ 1 − ε.
(40)

This joint chance constraint can be equivalently reformulated using a single maximum over the individual constraints:

    maximize_{x ∈ R^n_{≥0}}  1ᵀx
    subject to  P_ξ( {z : max_{i=1,...,m} ( Σ_{j=1}^n z_{ij}² x_j² − 100 ) ≤ 0} ) ≥ 1 − ε.    (41)

We define the individual constraint functions as g_i(x, z) := Σ_{j=1}^n z_{ij}² x_j² − 100 for i = 1, ..., m. Each g_i satisfies Assumption 2.6 with scaling exponents (γ, ρ) = (−1, 0) and limit function g*_i(y, w) = Σ_{j=1}^n w_{ij}² y_j² − 100, where y ∈ R^n and w ∈ R^{m×n}. By Proposition 4.12, the constraint function g(x, z) := max_{i=1,...,m} g_i(x, z) satisfies Assumption 2.6 with the same scaling exponents (γ, ρ) = (−1, 0) and limit function g*(y, w) = max_{i=1,...,m} g*_i(y, w). Therefore, applying our decision-scaled SA with parameter s ≥ 1 yields the following scenario problem:

    maximize_{x ∈ R^n_{≥0}}  1ᵀx
    subject to  s² Σ_{j=1}^n (z_{ij}^(k))² x_j² ≤ 100,  ∀ i = 1, ..., m,  ∀ k = 1, ..., N,    (42)

where N = N(ε^{1/s^α}, β).

Test setup. We evaluate the methods on a test instance with n = 5 variables and m = 3 constraints using the following parameters:

• The random variables ξ_{ij} for i = 1, ..., m and j = 1, ..., n are assumed to follow a normal distribution with mean j/n and variance 1. Moreover, Cov(ξ_{ij}, ξ_{i′j}) = 0.5 when i ≠ i′, and Cov(ξ_{ij}, ξ_{i′j′}) = 0 when j ≠ j′.
• This multivariate normal uncertainty satisfies Assumption 2.3 with a tail index of α = 2.

Figure 5. Comparison of computational efficiency for the norm optimization problem: (left) required sample size; (right) CPU time. At ε = 10^{−5}, the classical SA failed to find a feasible solution for any of the 100 test instances.
Numerical Results. Figure 5 illustrates the computational performance of our decision-scaled approach. Consistent with the prior experiments, the scaled methods achieve substantial reductions in both sample size requirements and CPU time, with these advantages becoming more pronounced at more stringent risk levels. Notably, because the norm optimization formulation features a joint chance constraint, each generated scenario introduces m distinct constraints. This results in a total of m × N constraints in the optimization model, severely compounding the memory burden. Consequently, at the extreme rare-event tolerance of ε = 10^{−5}, the classical SA failed to find a feasible solution for any of the 100 test instances. In contrast, the decision-scaled SA successfully solved all instances across every risk level. This demonstrates that our method, beyond computational efficiency, provides essential robustness for solving problems in extreme rare-event settings where the classical approach becomes intractable.

Figure 6. Comparison of solution quality for the norm optimization problem: (left) violation probability; (right) objective value. At ε = 10^{−5}, the classical SA failed to find a feasible solution for any of the 100 test instances.

Figure 6 examines the solution quality trade-offs. As with the previous benchmarks, all methods produce solutions with violation probabilities well below the target ε (left panel), and the scaled methods yield slightly lower objective values (right panel). However, at ε = 10^{−5}, this comparison becomes moot since only the scaled methods produce any solutions at all.
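One way to see the structure of the scaled problem (42) is that each scaled constraint is the original constraint evaluated at the scaled decision s·x, consistent with the exponent γ = −1. This identity is easy to confirm numerically (a small sketch with hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 5, 3, 1.1
x = rng.uniform(0, 1, n)
Z = rng.normal(size=(m, n))       # one scenario of the random matrix

def g(x, Z):
    # g(x, z) = max_i sum_j z_ij^2 x_j^2 - 100, as in Section 5.3
    return (Z**2 @ x**2).max() - 100.0

# The scaled scenario constraints  s^2 * sum_j z_ij^2 x_j^2 <= 100  for all i
# are exactly the statement  g(s * x, Z) <= 0  for this scenario.
lhs = s**2 * (Z**2 @ x**2)
assert np.isclose(lhs.max() - 100.0, g(s * x, Z))
```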
6 Proofs

6.1 Supporting Lemmas for the Proof of Theorem 3.3

This section develops the technical machinery needed for the proof of our main result, Theorem 3.3. The development proceeds in three stages. First, we recall the notion of continuous convergence and establish its key consequences in Lemmas 6.2 and 6.3. Second, we derive structural properties of the limit function $\lambda$ arising from the multivariate regular variation in Assumption 2.3 in Lemmas 6.4 and 6.5. Third, we establish a primitive large deviation principle in Lemma 6.6 that underpins the asymptotic analysis of chance constraints; this result forms the foundation on which the proof of Theorem 3.3 rests.

We begin with continuous convergence, which provides the framework for analyzing the asymptotic behavior of constraint functions.

Definition 6.1. A sequence of real-valued functions $\{f_u\}$ is said to converge continuously to a real-valued limit function $f^*$ if, for all convergent sequences $x_u \to x$,
\[
\lim_{u \to \infty} f_u(x_u) = f^*(x). \tag{43}
\]
Notably, equations (4) and (7) are direct applications of this convergence. Next, we establish useful properties of continuous convergence.

Lemma 6.2 ([49], Theorem 7.14). A sequence of real-valued functions $\{f_u\}$ converges continuously to $f^*$ if and only if $f^*$ is continuous and $\{f_u\}$ converges uniformly to $f^*$ on all compact sets.

Lemma 6.3. If a sequence of functions $\{f_u\}$ converges continuously to a function $f^*$, then for any $M, \delta > 0$ and $a \in \mathbb{R}$, there exists $u_0$ such that for all $u > u_0$, we have (i) $\mathcal{L}^M_{>a}(f_u) \subseteq \mathcal{L}_{\ge a}(f^*) + B_\delta$; (ii) $\mathcal{L}^M_{>a+\delta}(f^*) \subseteq \mathcal{L}^M_{>a}(f_u)$.

Proof. Let $M, \delta > 0$ and $a \in \mathbb{R}$ be arbitrary.

Part (i). Note that $\mathcal{L}_{\ge a}(f^*) + B_\delta = \{y : \|y - w\| \le \delta \text{ for some } w \text{ with } f^*(w) \ge a\}$. Suppose, for the sake of contradiction, that for every $u$ there exists some $z \in \mathcal{L}^M_{>a}(f_u)$ with $z \notin \mathcal{L}_{\ge a}(f^*) + B_\delta$.
This implies that $z \in B_M$ with $f_u(z) > a$, yet $f^*(w) < a$ for all $w \in B_\delta(z)$. In particular, $f^*(z) < a$. Now, set $\epsilon := a - f^*(z) > 0$ and choose $\eta \in (0, \delta)$. Then, Lemma 6.2 implies that there exists $u_0$ such that, for all $u > u_0$, we have
\[
|f_u(w) - f^*(z)| < \frac{\epsilon}{2}, \quad \forall w \in B_\eta(z).
\]
Consequently, for all $w \in B_\eta(z)$ we obtain
\[
f_u(w) < f^*(z) + \frac{\epsilon}{2} = f^*(z) + \frac{a - f^*(z)}{2} = \frac{f^*(z) + a}{2} < a.
\]
Setting $w = z$ yields $f_u(z) < a$, contradicting our assumption that $f_u(z) > a$.

Part (ii). From Lemma 6.2, there exists $u_0$ such that, for all $u > u_0$ and all $w \in B_M$, $|f_u(w) - f^*(w)| < \delta$. Therefore, for any $z \in \mathcal{L}^M_{>a+\delta}(f^*)$, i.e., $z \in B_M$ and $f^*(z) > a + \delta$, we have
\[
f_u(z) > f^*(z) - \delta > (a + \delta) - \delta = a,
\]
which implies $z \in \mathcal{L}^M_{>a}(f_u)$.

Now, we introduce fundamental properties of the MRV class.

Lemma 6.4. If $f \in \mathrm{MRV}(\mathcal{Z}, h, f^*)$ with $h \in \mathrm{RV}(\kappa)$, then $f^*$ is positively homogeneous of degree $\kappa$.

Proof. For any $t > 0$ and $z \in \mathcal{Z}$, $f^*(tz) = \lim_{u \to \infty} f(utz)/h(u)$. Substituting $\nu = ut$ yields
\[
f^*(tz) = \lim_{\nu \to \infty} \frac{f(\nu z)}{h(\nu/t)} = t^\kappa \lim_{\nu \to \infty} \frac{f(\nu z)}{h(\nu)} = t^\kappa f^*(z).
\]
We now examine how Assumption 2.3 leverages properties of MRV classes to characterize the uncertainty distribution.

Lemma 6.5. Under Assumption 2.3, $\lambda(z) = 0$ if and only if $z = 0$.

Proof. First, $\lambda(0) = 0$ follows directly since $\lambda(0) = \lim_{u \to \infty} Q(0)/q(u) = 0$, as $Q(0)$ is finite while $q(u) \to \infty$ as $u \to \infty$. For the converse, let $K := \Xi \cap \{\theta \in \mathbb{R}^m : \|\theta\| = 1\}$ denote the intersection of $\Xi$ with the unit sphere. For any nonzero $z \in \Xi$, setting $\theta = z/\|z\|$ gives $\theta \in K$ because $\Xi$ is a cone. By Lemma 6.4, we have $\lambda(z) = \lambda(\|z\|\theta) = \|z\|^\alpha \lambda(\theta)$. Since $\lambda(\theta) > 0$ for all $\theta \in K$ by Assumption 2.3, we have $\lambda(z) > 0$.
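As a concrete illustration of Lemmas 6.4 and 6.5, consider a nondegenerate Gaussian, consistent with the test setup where Assumption 2.3 holds with tail index $\alpha = 2$. The following is an illustrative computation, not taken from the paper's text, and assumes the normalization $q(u) = u^2$; the paper's precise choice of $q$ may differ by a constant.

```latex
% Nondegenerate Gaussian on R^m: density \propto exp(-Q(z)) with
% Q(z) = \tfrac12 (z - \mu)^\top \Sigma^{-1} (z - \mu) + c,  \Sigma \succ 0.
\lambda(z)
  = \lim_{u \to \infty} \frac{Q(u z)}{q(u)}
  = \lim_{u \to \infty}
    \frac{\tfrac12 (u z - \mu)^\top \Sigma^{-1} (u z - \mu) + c}{u^2}
  = \tfrac12\, z^\top \Sigma^{-1} z.
% This limit is positively homogeneous of degree \alpha = 2 (Lemma 6.4),
% and for positive definite \Sigma it vanishes only at z = 0 (Lemma 6.5).
```

The linear and constant terms of $Q(uz)$ are $O(u)$ and $O(1)$, so they are washed out by the $u^2$ normalization, leaving only the quadratic form.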
The following lemma establishes that, for the primitive scaling $z \mapsto z/u$, the distribution $\mathbb{P}_\xi$ satisfies a large deviation principle with rate function $\lambda$ and speed $q(u)$. This result provides the main tool for the asymptotic analysis in Section 6.4 and ultimately the proof of Theorem 3.3.

Lemma 6.6. Under Assumption 2.3, for every closed set $\mathcal{C} \subseteq \mathbb{R}^m$,
\[
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{C}\})}{q(u)} \le -\inf_{z \in \mathcal{C}} \lambda(z), \tag{44}
\]
and for every open set $\mathcal{O} \subseteq \mathbb{R}^m$,
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{O}\})}{q(u)} \ge -\inf_{z \in \mathcal{O}} \lambda(z). \tag{45}
\]

Proof. Consider an arbitrary compact ball $B_\delta(c) \subset \mathbb{R}^m$. Under the change of variables $z = u\nu$ (so that $\mathrm{d}z = u^m\,\mathrm{d}\nu$ and $z/u \in B_\delta(c)$ becomes $\nu \in B_\delta(c)$), and using Assumption 2.3 to write the density as $\exp(-Q(z))$,
\begin{align*}
\lim_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in B_\delta(c)\})}{q(u)}
&= \lim_{u \to \infty} \frac{\log \int_{B_\delta(c)} \exp(-Q(u\nu))\, u^m \,\mathrm{d}\nu}{q(u)} \\
&= \lim_{u \to \infty} \frac{m \log u + \log \int_{B_\delta(c)} \exp(-Q(u\nu)) \,\mathrm{d}\nu}{q(u)} \\
&= \lim_{u \to \infty} \frac{\log \int_{B_\delta(c)} \exp(-Q(u\nu)) \,\mathrm{d}\nu}{q(u)}.
\end{align*}
The third equality holds because Assumption 2.3 implies $q \in \mathrm{RV}(\alpha)$ with $\alpha > 0$, which in turn implies $m \log u / q(u) \to 0$ as $u \to \infty$. Since $Q \in \mathrm{MRV}$, the continuous convergence in (4) implies uniform convergence of $Q(u\nu)/q(u) \to \lambda(\nu)$ on the compact set $B_\delta(c)$ by Lemma 6.2. In particular, for any $\epsilon > 0$ and all $u$ sufficiently large, $|Q(u\nu)/q(u) - \lambda(\nu)| \le \epsilon$ for all $\nu \in B_\delta(c)$, yielding uniform bounds on the integrand:
\[
\exp(-q(u)(\lambda(\nu) + \epsilon)) \le \exp(-Q(u\nu)) \le \exp(-q(u)(\lambda(\nu) - \epsilon)).
\]
Integrating, taking logarithms, dividing by $q(u)$, and noting that $\epsilon > 0$ is arbitrary gives:
\[
\lim_{u \to \infty} \frac{\log \int_{B_\delta(c)} \exp(-Q(u\nu)) \,\mathrm{d}\nu}{q(u)} = \lim_{u \to \infty} \frac{\log \int_{B_\delta(c)} \exp(-q(u)\lambda(\nu)) \,\mathrm{d}\nu}{q(u)}.
\]
Since $\lambda(\cdot)$ is continuous and the integration on the right-hand side is over a compact set, an application of Varadhan's Integral Lemma [22, Theorem 4.3.1] implies that it must equal
\[
\lim_{u \to \infty} \frac{\log \int_{B_\delta(c)} \exp(-Q(u\nu)) \,\mathrm{d}\nu}{q(u)} = -\inf_{\nu \in B_\delta(c)} \lambda(\nu). \tag{46}
\]
For any compact $K \subseteq \mathbb{R}^m$ and $\delta > 0$, compactness ensures that $K$ admits a finite cover; i.e., $K \subseteq K_\delta := \bigcup_{i=1}^{r} B_\delta(c_i)$ for some $c_i \in K$, $i = 1, \dots, r$. Thus,
\begin{align*}
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in K\})}{q(u)}
&\le \max_{i=1,\dots,r} \limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in B_\delta(c_i)\})}{q(u)} \\
&= -\min_{i=1,\dots,r} \inf_{z \in B_\delta(c_i)} \lambda(z) \\
&= -\inf_{z \in K_\delta} \lambda(z).
\end{align*}
The first inequality follows from the union bound $\mathbb{P}_\xi(\{z : z/u \in K\}) \le \sum_{i=1}^{r} \mathbb{P}_\xi(\{z : z/u \in B_\delta(c_i)\}) \le r \max_{i=1,\dots,r} \mathbb{P}_\xi(\{z : z/u \in B_\delta(c_i)\})$, after taking logarithms and dividing by $q(u)$, which ensures $\log r / q(u) \to 0$ since $r$ is fixed. The second equality applies the result (46) to each $B_\delta(c_i)$. The third equality rewrites the minimum of infima over finitely many sets as a single infimum over their union $K_\delta$. Since the above bound holds for every $\delta > 0$, we may take $\delta \to 0$:
\[
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in K\})}{q(u)} \le -\lim_{\delta \to 0} \inf_{z \in K_\delta} \lambda(z) = -\inf_{z \in K} \lambda(z).
\]
The final equality holds because of compactness of $K$ and continuity of $\lambda$, together with $\bigcap_{\delta > 0} K_\delta = K$.

Now, for any open set $\mathcal{O}$ and any $\delta > 0$, choose a near-minimizer $z^* \in \mathcal{O}$ satisfying $\lambda(z^*) \le \inf_{z \in \mathcal{O}} \lambda(z) + \delta$. Since $\mathcal{O}$ is open, there exists $d > 0$ such that $B_d(z^*) \subset \mathcal{O}$. Therefore,
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{O}\})}{q(u)}
\ge \liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in B_d(z^*)\})}{q(u)}
= -\inf_{z \in B_d(z^*)} \lambda(z)
\ge -\lambda(z^*)
\ge -\inf_{z \in \mathcal{O}} \lambda(z) - \delta.
\]
The first inequality holds by monotonicity of probability, since $B_d(z^*) \subset \mathcal{O}$. The second equality applies (46). The third inequality uses that $z^* \in B_d(z^*)$, so the infimum over the ball is at most $\lambda(z^*)$.
The fourth inequality follows from the choice of $z^*$. Since $\delta > 0$ was arbitrary, taking $\delta \to 0$ yields
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{O}\})}{q(u)} \ge -\inf_{z \in \mathcal{O}} \lambda(z).
\]
Finally, we extend the upper bound from compact to arbitrary closed sets. By Lemma 6.4, $\lambda$ is positively homogeneous (of degree $\alpha > 0$), which together with its continuity implies that $\lambda$ has compact level sets, i.e., $\{z : \lambda(z) \le a\}$ is compact for all $a > 0$. This provides the goodness condition required to apply Dembo and Zeitouni [22, Lemma 1.2.18], which yields the extension from compact to closed sets.

6.2 Properties of the Limiting Constraint Function

This section establishes structural properties of the limit function $g^*$ arising from Assumption 2.6. Specifically, Lemma 6.7 shows that $g^*$ inherits a homogeneity structure from the scaling in Assumption 2.6, Lemma 6.8 characterizes the behavior of $g^*$ near the origin, and Lemma 6.9 establishes its inherited convexity. These properties are used in the proof of Proposition 2.7 (uniqueness of scaling exponents), with the homogeneity and origin properties also playing a supporting role in the large deviation analysis of Section 6.4.

Lemma 6.7. Under Assumption 2.6, the limit function $g^*$ is homogeneous in the sense that, for all $y \in \mathcal{X}^\infty_\gamma$, $w \in \Xi$, and $t > 0$, $g^*(t^\gamma y, t w) = t^\rho g^*(y, w)$.

Proof. Let $y \in \mathcal{X}^\infty_\gamma$, $w \in \Xi$, and $t > 0$. By Assumption (A1),
\[
g^*(t^\gamma y, t w) = \lim_{u \to \infty} \frac{g(x'_u, z'_u)}{u^\rho} \tag{47}
\]
for any sequences $\{x'_u\} \subset \mathcal{X}$ and $\{z'_u\} \subset \Xi$ satisfying $\lim_{u \to \infty} x'_u/u^\gamma = t^\gamma y$ and $\lim_{u \to \infty} z'_u/u = t w$. We apply a change of variable $v = ut$, which implies $u = v/t$. As $u \to \infty$, we also have $v \to \infty$ since $t > 0$. Let us define new sequences $\{x_v\} = \{x'_{v/t}\}$ and $\{z_v\} = \{z'_{v/t}\}$. Substituting $u = v/t$ yields
\[
g^*(t^\gamma y, t w) = \lim_{v \to \infty} \frac{g(x_v, z_v)}{(v/t)^\rho} = t^\rho \lim_{v \to \infty} \frac{g(x_v, z_v)}{v^\rho}. \tag{48}
\]
Since these new sequences satisfy
\[
\lim_{v \to \infty} \frac{x_v}{v^\gamma} = \lim_{v \to \infty} \frac{x'_{v/t}}{(ut)^\gamma} = \frac{1}{t^\gamma} \lim_{u \to \infty} \frac{x'_u}{u^\gamma} = \frac{1}{t^\gamma}(t^\gamma y) = y \tag{49}
\]
and
\[
\lim_{v \to \infty} \frac{z_v}{v} = \lim_{v \to \infty} \frac{z'_{v/t}}{ut} = \frac{1}{t} \lim_{u \to \infty} \frac{z'_u}{u} = \frac{1}{t}(t w) = w, \tag{50}
\]
we have $\lim_{v \to \infty} g(x_v, z_v)/v^\rho = g^*(y, w)$. Combining with (48), $g^*(t^\gamma y, t w) = t^\rho g^*(y, w)$ follows.

Lemma 6.8. Under Assumption 2.6, (i) if $\rho > 0$, then $g^*(0, 0) = 0$; (ii) if $\rho = 0$, then $g^*(0, w) < 0$ for all $w \in \Xi$.

Proof. Part (i). Using the homogeneity property in Lemma 6.7, for any $t > 0$,
\[
g^*(0, 0) = g^*(t^\gamma 0, t\, 0) = t^\rho g^*(0, 0).
\]
Since this equality holds for all $t > 0$, we have $g^*(0, 0) = 0$.

Part (ii). We first show $g^*(0, w) = g^*(0, 0)$ for all $w \in \Xi$. By Lemma 6.7, for all $t > 0$ and $w \in \Xi$,
\[
g^*(0, t w) = g^*(t^\gamma 0, t w) = t^0 g^*(0, w) = g^*(0, w).
\]
Since continuity of $g^*$ implies $\lim_{t \to 0^+} g^*(0, t w) = g^*(0, \lim_{t \to 0^+} t w) = g^*(0, 0)$, we have
\[
g^*(0, w) = g^*(0, 0), \quad \forall w \in \Xi. \tag{51}
\]
Now, we show $g^*(0, 0) < 0$. Take any $y \in \mathcal{X}^\infty_\gamma \setminus \{0\}$. From Lemma 6.7,
\[
g^*(t^\gamma y, 0) = g^*(t^\gamma y, t\, 0) = t^\rho g^*(y, 0) = g^*(y, 0), \quad \forall t > 0. \tag{52}
\]
If $\gamma > 0$, since $g^*$ is continuous, $g^*(0, 0) = \lim_{t \to 0^+} g^*(t^\gamma y, 0)$. Similarly, if $\gamma < 0$, $g^*(0, 0) = \lim_{t \to \infty} g^*(t^\gamma y, 0)$. In both cases, we have $g^*(0, 0) = g^*(y, 0)$ following from (52). Given that $g^*(y, 0) < 0$ from (A3), we obtain $g^*(0, 0) < 0$. Combining with (51), we conclude $g^*(0, w) < 0$ for all $w \in \Xi$.

Lemma 6.9. Under Assumption 2.6, for any fixed $w \in \Xi$, the limit function $g^*(\cdot, w)$ is convex on $\mathcal{X}^\infty_\gamma$.

Proof. Let $w \in \Xi$ be fixed. Choose any $y_1, y_2 \in \mathcal{X}^\infty_\gamma$ and any scalar $t \in [0, 1]$. We aim to show that $g^*(t y_1 + (1-t) y_2, w) \le t g^*(y_1, w) + (1-t) g^*(y_2, w)$.
By the definition of $\mathcal{X}^\infty_\gamma$, there exist sequences $\{x_{1,u}\} \subset \mathcal{X}$ and $\{x_{2,u}\} \subset \mathcal{X}$ such that $\lim_{u \to \infty} x_{1,u}/u^\gamma = y_1$ and $\lim_{u \to \infty} x_{2,u}/u^\gamma = y_2$. Because the feasible set $\mathcal{X}$ is convex, the convex combination $x_u := t x_{1,u} + (1-t) x_{2,u}$ satisfies $x_u \in \mathcal{X}$ for all $u$. Hence, $\lim_{u \to \infty} x_u/u^\gamma = t y_1 + (1-t) y_2$. Now, let $\{z_u\} \subset \Xi$ be any sequence satisfying $\lim_{u \to \infty} z_u/u = w$. Such a sequence always exists, since $\Xi$ is a closed cone (Remark 2.4). Since the constraint function $g(\cdot, z)$ is convex for any fixed $z$, for each $u$ we obtain
\[
g(x_u, z_u) = g(t x_{1,u} + (1-t) x_{2,u}, z_u) \le t g(x_{1,u}, z_u) + (1-t) g(x_{2,u}, z_u).
\]
Dividing both sides of the inequality by $u^\rho$ yields
\[
\frac{g(x_u, z_u)}{u^\rho} \le t\, \frac{g(x_{1,u}, z_u)}{u^\rho} + (1-t)\, \frac{g(x_{2,u}, z_u)}{u^\rho}.
\]
Taking the limit as $u \to \infty$ on both sides, and applying the continuous convergence property from Assumption (A1), we obtain
\[
\lim_{u \to \infty} \frac{g(x_u, z_u)}{u^\rho} \le t \lim_{u \to \infty} \frac{g(x_{1,u}, z_u)}{u^\rho} + (1-t) \lim_{u \to \infty} \frac{g(x_{2,u}, z_u)}{u^\rho},
\]
which simplifies to $g^*(t y_1 + (1-t) y_2, w) \le t g^*(y_1, w) + (1-t) g^*(y_2, w)$.

6.3 Proof of Proposition 2.7

Proof. Suppose there exist two triples $(\gamma_1, \rho_1, g^*_1)$ and $(\gamma_2, \rho_2, g^*_2)$ satisfying Assumption 2.6. We prove uniqueness by showing that the following cases lead to contradictions: (i) $\gamma_1 = \gamma_2$ and $\rho_1 \ne \rho_2$; (ii) $\gamma_1 \ne \gamma_2$ and $\rho_1 \ne \rho_2$; (iii) $\gamma_1 \ne \gamma_2$ and $\rho_1 = \rho_2$.

Case (i). Without loss of generality, assume $\rho_1 > \rho_2$ and $\gamma = \gamma_1 = \gamma_2$. Fix $y \in \mathcal{X}^\infty_\gamma \setminus \{0\}$ and $w = 0$. By the definition of $\mathcal{X}^\infty_\gamma$, there exists a sequence $\{x_u\} \subset \mathcal{X}$ such that $\lim_{u \to \infty} x_u/u^\gamma = y$. From Assumption (A1), we have
\[
g^*_1(y, 0) = \lim_{u \to \infty} \frac{g(x_u, 0)}{u^{\rho_1}} = \lim_{u \to \infty} \frac{g(x_u, 0)}{u^{\rho_2}} \cdot \frac{u^{\rho_2}}{u^{\rho_1}} = g^*_2(y, 0) \cdot \lim_{u \to \infty} \frac{u^{\rho_2}}{u^{\rho_1}} = 0,
\]
contradicting (A3).

Case (ii). Without loss of generality, assume $\rho_1 > \rho_2$.
We examine four subcases: (a) $\gamma_1 < \gamma_2$; (b) $0 < \gamma_2 < \gamma_1$; (c) $\gamma_2 < \gamma_1 < 0$; (d) $\gamma_2 < 0 < \gamma_1$.

Case (iia). Fix $y \in \mathcal{X}^\infty_{\gamma_1} \setminus \{0\}$. Then, there exists a sequence $\{x_u\} \subset \mathcal{X}$ such that $\lim_{u \to \infty} x_u/u^{\gamma_1} = y$. From Assumption (A1), we have
\[
g^*_1(y, 0) = \lim_{u \to \infty} \frac{g(x_u, 0)}{u^{\rho_1}} = \lim_{u \to \infty} \frac{g(x_u, 0)}{u^{\rho_2}} \cdot \frac{u^{\rho_2}}{u^{\rho_1}}. \tag{53}
\]
Since $\gamma_1 < \gamma_2$, $\{x_u\}$ satisfies
\[
\lim_{u \to \infty} \frac{x_u}{u^{\gamma_2}} = \lim_{u \to \infty} \frac{x_u}{u^{\gamma_1}} \cdot \frac{u^{\gamma_1}}{u^{\gamma_2}} = 0.
\]
Note that $0 \in \mathcal{X}^\infty_{\gamma_2}$ as it is a cone. Therefore, we obtain $\lim_{u \to \infty} g(x_u, 0)/u^{\rho_2} = g^*_2(0, 0)$, which is a finite value. Since $\rho_1 > \rho_2$, combining with (53) yields $g^*_1(y, 0) = 0$, contradicting Assumption (A3).

Case (iib). From Assumption (A4), there exist $y \in \mathcal{X}^\infty_{\gamma_1} \setminus \{0\}$ and $w \in \Xi$ such that $g^*_1(y, w) > 0$. Fix such $y$ and $w$. Then, there exist sequences $\{x_u\} \subset \mathcal{X}$ and $\{z_u\} \subset \Xi$ such that
\[
\lim_{u \to \infty} \frac{x_u}{u^{\gamma_1}} = y \quad \text{and} \quad \lim_{u \to \infty} \frac{z_u}{u} = w. \tag{54}
\]
Let $v = u^{\gamma_1/\gamma_2}$. Since $\gamma_1/\gamma_2 > 1$, $v \to \infty$ as $u \to \infty$. By Assumption (A1),
\[
g^*_1(y, w) = \lim_{u \to \infty} \frac{g(x_u, z_u)}{u^{\rho_1}} = \lim_{u \to \infty} \frac{g(x_u, z_u)}{u^{\frac{\gamma_1}{\gamma_2}\rho_2}} \cdot \frac{u^{\frac{\gamma_1}{\gamma_2}\rho_2}}{u^{\rho_1}} = \lim_{v \to \infty} \frac{g(x_{v^{\gamma_2/\gamma_1}}, z_{v^{\gamma_2/\gamma_1}})}{v^{\rho_2}} \cdot \frac{v^{\rho_2}}{v^{\frac{\gamma_2}{\gamma_1}\rho_1}}. \tag{55}
\]
Here, $x_{v^{\gamma_2/\gamma_1}}$ and $z_{v^{\gamma_2/\gamma_1}}$ denote the re-parameterization of the sequences $\{x_u\}$ and $\{z_u\}$ using $u = v^{\gamma_2/\gamma_1}$. Given that
\[
\lim_{v \to \infty} \frac{x_{v^{\gamma_2/\gamma_1}}}{v^{\gamma_2}} = \lim_{u \to \infty} \frac{x_u}{u^{\gamma_1}} = y \tag{56}
\]
and
\[
\lim_{v \to \infty} \frac{z_{v^{\gamma_2/\gamma_1}}}{v} = \lim_{u \to \infty} \frac{z_u}{u^{\gamma_1/\gamma_2}} = \lim_{u \to \infty} \frac{z_u}{u} \cdot \frac{u}{u^{\gamma_1/\gamma_2}} = 0, \tag{57}
\]
we obtain
\[
\lim_{v \to \infty} \frac{g(x_{v^{\gamma_2/\gamma_1}}, z_{v^{\gamma_2/\gamma_1}})}{v^{\rho_2}} = g^*_2(y, 0). \tag{58}
\]
Therefore, equality (55) reduces to
\[
g^*_1(y, w) = \begin{cases} 0, & \text{if } \rho_2 < \frac{\gamma_2}{\gamma_1}\rho_1, \\[2pt] g^*_2(y, 0), & \text{if } \rho_2 = \frac{\gamma_2}{\gamma_1}\rho_1, \\[2pt] -\infty, & \text{if } \rho_2 > \frac{\gamma_2}{\gamma_1}\rho_1. \end{cases} \tag{59}
\]
Furthermore, $g^*_2(y, 0) < 0$ follows from Assumption (A3). As a result, in each case, $g^*_1(y, w) \le 0$ for all $w$, contradicting Assumption (A4).

Case (iic). Let $v = u^{\gamma_2/\gamma_1}$.
By the same argument as Case (iib), this case also leads to a contradiction.

Case (iid). Choose any $y \in \mathcal{X}^\infty_{\gamma_2}$ and $w \in \Xi$. By Definition 2.5 and Remark 2.4, there exist sequences $\{x_u\}$ and $\{z_u\}$ such that $\lim_{u \to \infty} x_u/u^{\gamma_2} = y$ and $\lim_{u \to \infty} z_u/u = w$. Since $\gamma_2 < 0 < \gamma_1$, it follows that $\lim_{u \to \infty} x_u/u^{\gamma_1} = 0$. Therefore, given $\rho_2 < \rho_1$, Assumption (A1) yields
\[
g^*_1(0, w) = \lim_{u \to \infty} \frac{g(x_u, z_u)}{u^{\rho_1}} = \lim_{u \to \infty} \frac{g(x_u, z_u)}{u^{\rho_2}} \cdot \frac{u^{\rho_2}}{u^{\rho_1}} = 0. \tag{60}
\]
Because $w \in \Xi$ was chosen arbitrarily, this establishes
\[
g^*_1(0, w) = 0, \quad \forall w \in \Xi. \tag{61}
\]
Next, we demonstrate that this leads to a contradiction. Choose any $y_1 \in \mathcal{X}^\infty_{\gamma_1}$. By Assumption (A4), there exists a $w_1 \in \Xi$ such that $g^*_1(y_1, w_1) > 0$. By Lemma 6.9, $g^*_1(\cdot, w)$ is convex. We apply the definition of convexity. For any $t \in (0, 1)$, we have $t^{\gamma_1} \in (0, 1)$ since $\gamma_1 > 0$. This gives
\begin{align}
g^*_1(t^{\gamma_1} y_1, t w_1) &\le t^{\gamma_1} g^*_1(y_1, t w_1) + (1 - t^{\gamma_1})\, g^*_1(0, t w_1) \tag{62} \\
&= t^{\gamma_1} g^*_1(y_1, t w_1), \tag{63}
\end{align}
where the last equality follows from (61). Furthermore, since Lemma 6.7 states $t^{\rho_1} g^*_1(y_1, w_1) = g^*_1(t^{\gamma_1} y_1, t w_1)$, we obtain
\[
t^{\rho_1} g^*_1(y_1, w_1) \le t^{\gamma_1} g^*_1(y_1, t w_1), \quad \forall t \in (0, 1). \tag{64}
\]
Given that $g^*_1$ is continuous, taking the limit as $t \to 0^+$ yields
\[
\lim_{t \to 0^+} g^*_1(y_1, t w_1) = g^*_1(y_1, 0) < 0, \tag{65}
\]
where the strict negativity follows from Assumption (A3). This implies there exists $t_0 \in (0, 1)$ such that $g^*_1(y_1, t w_1) < 0$ for all $t \in (0, t_0)$. Consequently, it follows that
\[
t^{\rho_1} g^*_1(y_1, w_1) \le t^{\gamma_1} g^*_1(y_1, t w_1) < 0, \quad \forall t \in (0, t_0). \tag{66}
\]
However, this contradicts our initial choice of $y_1$ and $w_1$, which guaranteed $g^*_1(y_1, w_1) > 0$ and, by extension, $t^{\rho_1} g^*_1(y_1, w_1) > 0$ for all $t > 0$.

Case (iii). Without loss of generality, assume $\gamma_1 < \gamma_2$ and $\rho = \rho_1 = \rho_2$. We divide the proof into two subcases: (a) $\rho > 0$; (b) $\rho = 0$.
Case (iiia) . Fix y ∈ X ∞ γ 1 and w = 0 . By Assumption (A1), there exist { x u } ∈ X and { z u } ∈ Ξ suc h that lim u →∞ x u u γ 1 = y and lim u →∞ z u u = w = 0 (67) 31 and lim u →∞ g ( x u , z u ) u ρ = g ∗ 1 ( y , 0 ) . (68) Since γ 1 < γ 2 , w e hav e lim u →∞ x u u γ 2 = lim u →∞ x u u γ 1 · u γ 1 u γ 2 = 0 , (69) th us, by Assumption (A1), lim u →∞ g ( x u , z u ) u ρ = g ∗ 2 ( 0 , 0 ) , (70) whic h implies g ∗ 1 ( y , 0 ) = g ∗ 2 ( 0 , 0 ) . Ho wev er, this leads to a contradiction since g ∗ 1 ( y , 0 ) < 0 from Assumption (A3) and g ∗ 2 ( 0 , 0 ) = 0 from Lemma 6.8(i). Case (iiib) . Assumption (A4) ensures that there exist y 0 ∈ X ∞ γ 1 \ { 0 } and w 0 ∈ Ξ suc h that g ∗ 1 ( y 0 , w 0 ) > 0 . With the exactly same argument as Case (iiia), we deriv e g ∗ 1 ( y 0 , w 0 ) = g ∗ 2 ( 0 , w 0 ) . Ho wev er, Lemma 6.8(ii) states that g ∗ 2 ( 0 , w 0 ) < 0 , con tradicting our selection that g ∗ 1 ( y 0 , w 0 ) > 0 . 6.4 Large Deviation Principle for Chance Constrain ts This section establishes an LDP that characterizes the asymptotic b eha vior of chance constraints. Building on the primitiv e LDP of Lemma 6.6 and the prop erties of g ∗ from Section 6.2, w e define a rate function I ( y ) that captures the exp onen tial decay rate of the violation probabilit y . The main result of this section is Prop osition 6.13, whic h pro vides matching upper and low er b ounds on this deca y . This serves as theoretical foundations for proving Theorem 3.3. Definition 6.10. F or ev ery y ∈ X ∞ γ , the rate function is defined as I ( y ) : = inf w ∈ Ξ { λ ( w ) : g ∗ ( y , w ) ≥ 0 } . (71) F or a giv en { x u } ⊂ X , rate function quan tifies the asymptotic deca y rate of the violation probabilit y P ξ ( { z : g ( x u , z ) > 0 } ) as u → ∞ . W e b egin by establishing k ey prop erties of g ∗ and I ( y ) . Lemma 6.11. L et K ⊂ X ∞ γ \ { 0 } b e any non-empty c omp act set. 
Then, there exists a constant $\delta > 0$ such that, for all $y \in K$ and all $w \in \Xi$ with $\|w\| < \delta$, $g^*(y, w) < 0$.

Proof. We prove by contradiction. Suppose that, for any $\delta > 0$, there exist some $y \in K$ and $w \in \Xi$ with $\|w\| < \delta$ such that $g^*(y, w) \ge 0$. We construct a sequence by choosing $\delta_k = 1/k$, $k \in \mathbb{N}$. For each $k$, there exists a pair $(y_k, w_k)$ such that $y_k \in K$, $\|w_k\| < 1/k$, and $g^*(y_k, w_k) \ge 0$. Since $K$ is compact, there exists a subsequence of $\{y_k\}$ that converges to a limit within $K$. For notational simplicity, we denote this convergent subsequence by $\{y_k\}$ as well, and its limit by $y_0 \in K$. For $\{w_k\}$, the condition $\|w_k\| < 1/k$ implies that $w_k \to 0$ as $k \to \infty$. Since $g^*$ is continuous from Assumption (A1), we have
\[
\lim_{k \to \infty} g^*(y_k, w_k) = g^*\Big(\lim_{k \to \infty} y_k,\; \lim_{k \to \infty} w_k\Big) = g^*(y_0, 0).
\]
Since $g^*(y_k, w_k) \ge 0$ for all $k$, $g^*(y_0, 0) \ge 0$. However, this contradicts (A3).

Lemma 6.12. Let $K \subset \mathcal{X}^\infty_\gamma \setminus \{0\}$ be any non-empty compact set. Then, there exist constants $c > 0$ and $M > 0$ such that $c \le I(y) \le M$ for all $y \in K$.

Proof. We begin by establishing the lower bound. From Lemma 6.11, there exists a $\delta > 0$ such that $w \in \Xi$ with $\|w\| < \delta$ implies $g^*(y, w) < 0$ for all $y \in K$. In other words, for all $y \in K$,
\[
\{w \in \Xi : g^*(y, w) \ge 0\} \subseteq \{w \in \Xi : \|w\| \ge \delta\}.
\]
Hence, $I(y) \ge \inf_{w \in \Xi} \{\lambda(w) : \|w\| \ge \delta\}$. Set $c := \inf_{w \in \Xi} \{\lambda(w) : \|w\| \ge \delta\}$. Since $\lambda$ is positively homogeneous of degree $\alpha > 0$ from Lemma 6.4 and $\|w\|^\alpha \ge \delta^\alpha$ for any $w$ with $\|w\| \ge \delta$,
\[
c = \inf_{w \in \Xi,\, \|w\| \ge \delta} \lambda\!\left(\|w\| \cdot \frac{w}{\|w\|}\right) = \inf_{w \in \Xi,\, \|w\| \ge \delta} \|w\|^\alpha \cdot \lambda\!\left(\frac{w}{\|w\|}\right) \ge \delta^\alpha \inf_{w \in \Xi \cap \{w' : \|w'\| = 1\}} \lambda(w).
\]
By Assumption 2.3, $\lambda(w) > 0$ for all $w$ on the unit sphere within $\Xi$. Since $\lambda$ is continuous and the set $\Xi \cap \{w' : \|w'\| = 1\}$ is compact, the minimum of $\lambda$ on this set is attained and is strictly positive. As $\delta > 0$ and $\alpha > 0$, we conclude that $c > 0$.
Next, we derive the upper bound. For any $y$, Assumption (A4) ensures the existence of a point $w_y \in \Xi$ such that $g^*(y, w_y) > 0$. Furthermore, since $g^*$ is continuous by Assumption (A1), there exists an open neighborhood $N_y$ of $y$ such that
\[
g^*(y', w_y) > 0, \quad \forall y' \in N_y. \tag{72}
\]
Since $K$ is compact, there exists a finite set $\{y_1, \dots, y_r\}$ such that $K \subseteq \bigcup_{i=1}^r N_{y_i}$. Let us define the finite constant $M := \max_{i=1,\dots,r} \lambda(w_{y_i})$. Choose any $y \in K$. There must exist an index $j \in \{1, \dots, r\}$ such that $y \in N_{y_j}$. Thus, it follows that
\[
I(y) = \inf_{w \in \Xi} \{\lambda(w) : g^*(y, w) \ge 0\} \le \lambda(w_{y_j}) \le M. \tag{73}
\]

Proposition 6.13. Let $\{y_u\}$ be any sequence such that $u^\gamma y_u \in \mathcal{X}$ for all $u$, and $\lim_{u \to \infty} y_u = y_0 \in \mathcal{X}^\infty_\gamma$. Then, the following limits hold:
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g(u^\gamma y_u, z) > 0\})}{-q(u)} \ge I(y_0), \tag{74}
\]
and
\[
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g(u^\gamma y_u, z) > 0\})}{-q(u)} \le I(y_0). \tag{75}
\]

Proof. From Lemma 6.6, for every closed set $\mathcal{C} \subseteq \mathbb{R}^m$,
\[
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{C}\})}{q(u)} \le -\inf_{w \in \mathcal{C}} \lambda(w), \tag{76}
\]
and for every open set $\mathcal{O} \subseteq \mathbb{R}^m$,
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{O}\})}{q(u)} \ge -\inf_{w \in \mathcal{O}} \lambda(w). \tag{77}
\]
We define
\[
g_u(w) := \frac{g(u^\gamma y_u, u w)}{u^\rho} \quad \text{and} \quad g^*_0(w) := g^*(y_0, w). \tag{78}
\]
Let $\{w_u\} \subset \Xi$ be an arbitrary convergent sequence. Since $\Xi$ is closed, there exists some $w_0 \in \Xi$ such that $\lim_{u \to \infty} w_u = w_0$. Setting $x_u := u^\gamma y_u$ and $z_u := u w_u$ guarantees
\[
\lim_{u \to \infty} \frac{x_u}{u^\gamma} = \lim_{u \to \infty} y_u = y_0 \quad \text{and} \quad \lim_{u \to \infty} \frac{z_u}{u} = \lim_{u \to \infty} w_u = w_0. \tag{79}
\]
Thus, by Assumption (A1), we have
\begin{align}
\lim_{u \to \infty} g_u(w_u) = \lim_{u \to \infty} \frac{g(u^\gamma y_u, u w_u)}{u^\rho} = \lim_{u \to \infty} \frac{g(x_u, z_u)}{u^\rho} &= g^*(y_0, w_0) \tag{80} \\
&= g^*_0(w_0). \tag{81}
\end{align}
Since $\{w_u\}$ was chosen arbitrarily, the sequence $\{g_u\}$ converges continuously to $g^*_0$. Now, we analyze the asymptotic behavior of $\mathbb{P}_\xi(\{z : g(u^\gamma y_u, z) > 0\})$.
\begin{align}
\mathbb{P}_\xi(\{z : g(u^\gamma y_u, z) > 0\}) &= \mathbb{P}_\xi\!\left(\left\{z : \frac{g(u^\gamma y_u,\, u \cdot (z/u))}{u^\rho} > 0\right\}\right) \tag{82} \\
&= \mathbb{P}_\xi(\{z : g_u(z/u) > 0\}) \tag{83} \\
&= \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>0}(g_u)\}). \tag{84}
\end{align}

Lower bound. For any $M > 0$, we decompose the event by splitting $\mathbb{R}^m$ into the ball $B_M$ and its complement. Using $A \subseteq (A \cap B_M) \cup B_M^c$ for any set $A$, the union bound gives
\[
\mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>0}(g_u)\}) \le \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}^M_{>0}(g_u)\}) + \mathbb{P}_\xi(\{z : z/u \in B_M^c\}). \tag{85}
\]
By Lemma 6.3(i) (applied with $a = 0$ and the continuous convergence of $\{g_u\}$ to $g^*_0$), for any $\delta > 0$ and all $u$ sufficiently large, $\mathcal{L}^M_{>0}(g_u) \subseteq \mathcal{L}_{\ge 0}(g^*_0) + B_\delta$. Substituting this containment into the first term above and using $\log(A + B) \le \max\{\log 2A, \log 2B\}$ for $A, B > 0$, we obtain
\[
\log\!\big(\mathbb{P}_\xi(\{z : z/u \in \mathcal{L}^M_{>0}(g_u)\}) + \mathbb{P}_\xi(\{z : z/u \in B_M^c\})\big) \le \max\!\big\{\log 2\,\mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{\ge 0}(g^*_0) + B_\delta\}),\; \log 2\,\mathbb{P}_\xi(\{z : z/u \in B_M^c\})\big\}.
\]
Note that $\mathcal{L}_{\ge 0}(g^*_0) + B_\delta$ is closed, being the Minkowski sum of the closed set $\mathcal{L}_{\ge 0}(g^*_0)$ (a superlevel set of the continuous function $g^*_0$) and the compact set $B_\delta$. Similarly, $B_M^c$ is closed. Thus, applying the primitive LDP upper bound (76) to each term, dividing by $q(u)$, and noting $\log 2 / q(u) \to 0$, gives
\begin{align*}
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>0}(g_u)\})}{q(u)}
&\le \max\left\{\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{\ge 0}(g^*_0) + B_\delta\})}{q(u)},\; \limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in B_M^c\})}{q(u)}\right\} \\
&\le -\min\left\{\inf_{w \in \mathcal{L}_{\ge 0}(g^*_0) + B_\delta} \lambda(w),\; \inf_{\|w\| \ge M} \lambda(w)\right\}.
\end{align*}
Note that $\lambda(w) \to \infty$ as $\|w\| \to \infty$, since $\lambda$ is positively homogeneous of degree $\alpha > 0$ (Lemma 6.4) and strictly positive on the unit sphere (Assumption 2.3). Since $\lambda$ is coercive, $\inf_{\|w\| \ge M} \lambda(w) \to \infty$ as $M \to \infty$, so the second term in the minimum becomes negligible. For the first term, taking $\delta \to 0$ yields $\inf_{w \in \mathcal{L}_{\ge 0}(g^*_0) + B_\delta} \lambda(w) \to \inf_{w \in \mathcal{L}_{\ge 0}(g^*_0)} \lambda(w)$ by continuity of $\lambda$.
Therefore, taking $M \to \infty$ and $\delta \to 0$, we obtain
\[
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>0}(g_u)\})}{q(u)} \le -\inf_{w \in \mathcal{L}_{\ge 0}(g^*_0)} \lambda(w) = -I(y_0). \tag{86}
\]
Multiplying both sides by $-1$ and applying the identity $-\limsup_{u \to \infty} a_u = \liminf_{u \to \infty}(-a_u)$, we conclude that
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g(u^\gamma y_u, z) > 0\})}{-q(u)} \ge I(y_0). \tag{87}
\]

Upper bound. For any $\delta > 0$ and $M > 0$, the strict superlevel set $\mathcal{L}_{>\delta}(g^*_0) = \{w : g^*_0(w) > \delta\}$ is open, since it is the preimage of the open set $(\delta, \infty)$ under the continuous function $g^*_0$. Its intersection with the open ball $(B_M)^\circ$ is therefore also open. Applying the primitive LDP lower bound (77) yields
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>\delta}(g^*_0) \cap (B_M)^\circ\})}{q(u)} \ge -\inf_{w \in \mathcal{L}_{>\delta}(g^*_0) \cap (B_M)^\circ} \lambda(w).
\]
We now establish the chain of inclusions $\mathcal{L}_{>\delta}(g^*_0) \cap (B_M)^\circ \subseteq \mathcal{L}^M_{>\delta}(g^*_0) \subseteq \mathcal{L}^M_{>0}(g_u) \subseteq \mathcal{L}_{>0}(g_u)$: the first inclusion holds because $(B_M)^\circ \subseteq B_M$; the second follows from Lemma 6.3(ii) (applied with $a = 0$, so that the $(a + \delta)$-superlevel set of $g^*_0$ is contained in the $a$-superlevel set of $g_u$ for large $u$); the third is immediate since the restricted superlevel set is contained in the unrestricted one. By monotonicity of probability, we therefore have
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>0}(g_u)\})}{q(u)} \ge -\inf_{w \in \mathcal{L}_{>\delta}(g^*_0) \cap (B_M)^\circ} \lambda(w).
\]
As $M \to \infty$, the constraint $w \in (B_M)^\circ$ becomes vacuous, and as $\delta \to 0$, the strict superlevel set $\mathcal{L}_{>\delta}(g^*_0)$ exhausts $\mathcal{L}_{>0}(g^*_0)$. Therefore,
\[
\liminf_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : z/u \in \mathcal{L}_{>0}(g_u)\})}{q(u)} \ge -\inf_{w \in \mathcal{L}_{>0}(g^*_0)} \lambda(w) = -\inf_{w \in \mathcal{L}_{\ge 0}(g^*_0)} \lambda(w) = -I(y_0). \tag{88}
\]
The second equality holds because the infimum of the continuous function $\lambda$ over $\mathcal{L}_{>0}(g^*_0)$ equals that over its closure $\mathcal{L}_{\ge 0}(g^*_0)$; indeed, for any $w \in \mathcal{L}_{\ge 0}(g^*_0)$, there exist $w_k \in \mathcal{L}_{>0}(g^*_0)$ with $w_k \to w$, and continuity of $\lambda$ gives $\lambda(w_k) \to \lambda(w)$. The last equality follows from the definition of $I(y_0)$. Multiplying both sides by $-1$ and applying the identity $-\liminf_{u \to \infty} a_u = \limsup_{u \to \infty}(-a_u)$ yields
\[
\limsup_{u \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g(u^\gamma y_u, z) > 0\})}{-q(u)} \le I(y_0). \tag{89}
\]

6.5 Proof of Theorem 3.3

Proof. Let $u_\varepsilon := q^\leftarrow(\log 1/\varepsilon)$, where $q^\leftarrow(t) := \inf\{u : q(u) \ge t\}$ is the generalized inverse of $q$. Since $q \in \mathrm{RV}(\alpha)$ with $\alpha > 0$, Bingham et al. [7, Theorem 1.5.12] guarantees that $q(u_\varepsilon) \sim \log(1/\varepsilon)$ as $\varepsilon \to 0$, where $f(\varepsilon) \sim g(\varepsilon)$ denotes $\lim_{\varepsilon \to 0} f(\varepsilon)/g(\varepsilon) = 1$. We define
\[
y_\varepsilon := \frac{x_\varepsilon}{(u_\varepsilon)^\gamma}. \tag{90}
\]
The rescaled sequence $\{y_\varepsilon\}$ captures the growth of $x_\varepsilon$ relative to the natural scale $u_\varepsilon^\gamma$, enabling us to apply the large deviation principles developed in Section 6.4. We divide the proof into two exhaustive cases: (i) the sequence $\{y_\varepsilon\}$ is bounded; (ii) the sequence $\{y_\varepsilon\}$ is unbounded.

Case (i). Boundedness implies that the sequence $\{y_\varepsilon\}$ is contained in a compact set. We separate this case into two subcases: (a) the sequence is eventually bounded away from the origin, i.e., $\liminf_{\varepsilon \to 0} \|y_\varepsilon\| > 0$; (b) the sequence admits a subsequence converging to the origin, i.e., $\liminf_{\varepsilon \to 0} \|y_\varepsilon\| = 0$.

Case (ia). Since $\{y_\varepsilon\}$ is eventually bounded away from the origin, any limit point $y_0$ satisfies $y_0 \ne 0$. By the definition of a limit point, there exists a sequence $\{\varepsilon_k\} \subset (0, 1)$ converging to 0 as $k \to \infty$ such that the corresponding subsequence $\{y_{\varepsilon_k}\}$ converges to $y_0$. Moreover, it follows that $y_0 \in \mathcal{X}^\infty_\gamma$ by Definition 2.5.
Indeed, the sequence $x_{\varepsilon_k} \in \mathcal{X}$ paired with $u_{\varepsilon_k} \to \infty$ satisfies $x_{\varepsilon_k}/u_{\varepsilon_k}^\gamma = y_{\varepsilon_k} \to y_0$, which witnesses the membership. Hence, the set of limit points of $\{y_\varepsilon\}$ forms a nonempty compact subset $K \subset \mathcal{X}^\infty_\gamma \setminus \{0\}$.

We now analyze the asymptotic behavior of the constraint violation probability. Define
\[
I_u(y) := \frac{\log \mathbb{P}_\xi(\{z : g(u^\gamma y, z) > 0\})}{-q(u)}, \tag{91}
\]
which is well-defined for any $y$ satisfying $u^\gamma y \in \mathcal{X}$. In particular, $I_{u_\varepsilon}(y_\varepsilon)$ is well-defined because $u_\varepsilon^\gamma y_\varepsilon = x_\varepsilon \in \mathcal{X}$. With this notation, we can write
\[
\frac{\log V(x_\varepsilon)}{\log \varepsilon} = \frac{\log \mathbb{P}_\xi(\{z : g(u_\varepsilon^\gamma y_\varepsilon, z) > 0\})}{-\log(1/\varepsilon)} = \frac{-q(u_\varepsilon)}{-\log(1/\varepsilon)} \cdot I_{u_\varepsilon}(y_\varepsilon). \tag{92}
\]
Since $q(u_\varepsilon) \sim \log(1/\varepsilon)$ by the definition of $u_\varepsilon$, taking the limit inferior as $\varepsilon \to 0$ on both sides simplifies the asymptotic relationship to
\[
\liminf_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} = \liminf_{\varepsilon \to 0} I_{u_\varepsilon}(y_\varepsilon). \tag{93}
\]
For any convergent subsequence $\{y_{\varepsilon_k}\} \to y_0$, by Proposition 6.13, it follows that
\[
\liminf_{k \to \infty} I_{u_{\varepsilon_k}}(y_{\varepsilon_k}) \ge I(y_0). \tag{94}
\]
Since this holds for every convergent subsequence of the bounded sequence $\{y_\varepsilon\}$, we obtain
\[
\liminf_{\varepsilon \to 0} I_{u_\varepsilon}(y_\varepsilon) \ge \inf_{y \in K} I(y). \tag{95}
\]
Together with (93), we obtain
\[
\liminf_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} \ge \inf_{y \in K} I(y). \tag{96}
\]
We show $\inf_{y \in K} I(y) \ge 1$ by establishing the lower bound $I(y) \ge 1$ for every $y \in K$. Choose any convergent subsequence $\{y_{\varepsilon_k}\} \to y_0$ of $\{y_\varepsilon\}$. Then, Proposition 6.13 gives
\[
I(y_0) \ge \limsup_{k \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((u_{\varepsilon_k}/s)^\gamma y_{\varepsilon_k}, z) > 0\})}{-q(u_{\varepsilon_k}/s)}. \tag{97}
\]
Since $x_{\varepsilon_k}$ solves (SSP$_{N,s}$), Theorem 3.2 guarantees that, with probability $1 - \beta$,
\[
\mathbb{P}_\xi(\{z : g((u_{\varepsilon_k}/s)^\gamma y_{\varepsilon_k}, z) > 0\}) \le \varepsilon_k^{s^{-\alpha}}. \tag{98}
\]
Taking logarithms and multiplying by $-1$ on both sides yields
\[
-\log \mathbb{P}_\xi(\{z : g((u_{\varepsilon_k}/s)^\gamma y_{\varepsilon_k}, z) > 0\}) \ge s^{-\alpha} \log(1/\varepsilon_k). \tag{99}
\]
Combining with (97), we have
\[
I(y_0) \ge \limsup_{k \to \infty} \frac{s^{-\alpha} \log(1/\varepsilon_k)}{q(u_{\varepsilon_k}/s)} = \limsup_{k \to \infty} \frac{s^{-\alpha} \log(1/\varepsilon_k)}{q(u_{\varepsilon_k}/s)} \cdot \frac{q(u_{\varepsilon_k})}{q(u_{\varepsilon_k})} = 1, \tag{100}
\]
where the last equality follows since $q(u_{\varepsilon_k}) \sim \log(1/\varepsilon_k)$ and $q(u_{\varepsilon_k}/s) \sim s^{-\alpha} q(u_{\varepsilon_k})$ due to the regular variation property $q \in \mathrm{RV}(\alpha)$. Since $y_0 \in K$ was arbitrary, it follows that
\[
\inf_{y \in K} I(y) \ge 1, \tag{101}
\]
which proves the claimed result.

Case (ib). Any subsequence of $\{y_\varepsilon\}$ that does not converge to $0$ is covered by Case (ia). Thus, it suffices to consider the case where the sequence converges to the origin. By passing to a subsequence if necessary, we assume that $\lim_{\varepsilon \to 0} y_\varepsilon = 0$. When $y_\varepsilon \to 0$, the rate function $I$ is evaluated at the origin, where its value depends on whether $\rho > 0$ or $\rho = 0$, since these reflect qualitatively different asymptotic behavior of $g^*$ at $0$. We analyze the asymptotic behavior based on the value of $\rho$: (1) $\rho > 0$; (2) $\rho = 0$.

Case (ib1). We show by contradiction that this case cannot occur. Since $x_\varepsilon$ solves (SSP$_{N,s}$), Theorem 3.2 guarantees that, with probability at least $1 - \beta$, the violation probability decays at least as fast as $\varepsilon^{s^{-\alpha}}$. Noting that $x_\varepsilon/s^\gamma = u_\varepsilon^\gamma y_\varepsilon / s^\gamma$, this translates to
\[
\liminf_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g(u_\varepsilon^\gamma y_\varepsilon / s^\gamma, z) > 0\})}{-\log(1/\varepsilon)} \ge s^{-\alpha} > 0. \tag{102}
\]
Since $q(u_\varepsilon) \sim \log(1/\varepsilon)$, it follows that
\[
\liminf_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g(u_\varepsilon^\gamma y_\varepsilon / s^\gamma, z) > 0\})}{-q(u_\varepsilon)} \ge s^{-\alpha} > 0. \tag{103}
\]
This bound says the log-probability vanishes at rate at least $s^{-\alpha}$ relative to $-q(u_\varepsilon)$. On the other hand, we derive a conflicting upper bound. Since the sequence $\{y_\varepsilon/s^\gamma\}$ converges to $0 \in \mathcal{X}^\infty_\gamma$, the upper bound (75) of Proposition 6.13 yields
\[
\limsup_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g(u_\varepsilon^\gamma y_\varepsilon / s^\gamma, z) > 0\})}{-q(u_\varepsilon)} \le I(0) = \inf_{w : g^*(0, w) \ge 0} \lambda(w). \tag{104}
\]
(104) F or ρ > 0 , Lemma 6.8(i) implies g ∗ ( 0 , 0 ) = 0 , so 0 b elongs to the feasible set { w : g ∗ ( 0 , w ) ≥ 0 } . F urthermore, since λ ( 0 ) = 0 by Lemma 6.5 and λ is nonnegative, the infimum is attained at 0 , giving I ( 0 ) = 0 . This means the log-probability do es not deca y at all relativ e to − q ( u ε ) , which con tradicts (103). Case (ib2) . In contrast to the previous sub case, when ρ = 0 the constrain t is “infinitely safe” at the origin. By Lemma 6.8(ii), we hav e g ∗ ( 0 , w ) < 0 for all w ∈ Ξ . Consequently , the set { w : g ∗ ( 0 , w ) ≥ 0 } is empt y . A dopting the con ven tion that the infim um o ver an empt y set is + ∞ , w e hav e I ( 0 ) = ∞ , meaning the violation probabilit y deca ys sup er-exp onen tially at the origin. Applying the lo wer b ound (74) of Prop osition 6.13 to the sequence { y ε } → 0 yields: lim inf ε → 0 log P ξ ( { z : g ( u γ ε y ε , z ) > 0 } ) − q ( u ε ) ≥ I ( 0 ) = + ∞ . (105) Giv en that q ( u ε ) ∼ log (1 /ε ) , we obtain lim inf ε → 0 log V ( x ε ) log ε = lim inf ε → 0 log P ξ ( { z : g ( u γ ε y ε , z ) > 0 } ) − log(1 /ε ) = lim inf ε → 0 log P ξ ( { z : g ( u γ ε y ε , z ) > 0 } ) − q ( u ε ) ≥ + ∞ . (106) This completes the pro of for the b ounded case. Case (ii) . Since any bounded subsequence of { y ε } is cov ered by Case (i), we assume without loss of generalit y that ∥ y ε ∥ → ∞ as ε → 0 . This means x ε gro ws faster than the c haracteristic scale u γ ε . Define the normalized vectors d ε : = y ε / ∥ y ε ∥ , which capture the asymptotic direction of y ε and lie on the unit sphere. The pro of considers tw o sub cases: 38 (a) γ > 0 ; (b) γ < 0 . Case (iia) . The k ey idea is to separate the magnitude ∥ y ε ∥ from the direction d ε b y writing the decision v ector as x ε = u γ ε y ε = ( u ε ∥ y ε ∥ 1 /γ ) γ d ε . This representation identifies ˜ u ε : = u ε ∥ y ε ∥ 1 /γ as the effective scale, enabling us to apply the large deviation machinery with scale ˜ u ε and direction d ε . 
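The two regular-variation facts used repeatedly in this argument, $q(u/s) \sim s^{-\alpha} q(u)$ for a fixed divisor $s$ and $q(\tilde{u}_\varepsilon)/q(u_\varepsilon) \to \infty$ for a diverging multiplier, can be sanity-checked numerically. The sketch below is an illustration only: $q(u) = u^\alpha \log(1+u)$ is a representative member of $RV(\alpha)$ chosen for this example, and the values of $\alpha$ and $s$ are arbitrary, not quantities from the proof.

```python
import math

# q(u) = u^alpha * log(1+u) is regularly varying with index alpha:
# q(lambda*u)/q(u) -> lambda**alpha as u -> infinity.
alpha, s = 2.0, 3.0  # arbitrary illustrative values
q = lambda u: u**alpha * math.log(1.0 + u)

# q(u/s)/q(u) approaches s**(-alpha), matching q(u/s) ~ s^(-alpha) q(u).
for u in (1e4, 1e8, 1e12):
    print(u, q(u / s) / q(u), s ** (-alpha))

# A diverging multiplier m -> infinity drives q(u*m)/q(u) -> infinity,
# mirroring the behavior of q(u_eps * ||y_eps||^(1/gamma)) / q(u_eps).
print(q(1e6 * 1e3) / q(1e6))
```

The slowly varying factor $\log(1+u)$ makes the convergence gradual, which is why large values of $u$ are used above.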
We analyze the asymptotic decay of the violation probability by factoring the ratio through this effective scale:
\[
\liminf_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} = \liminf_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-\log(1/\varepsilon)} = \liminf_{\varepsilon \to 0} \left[ \frac{q(u_\varepsilon)}{\log(1/\varepsilon)} \cdot \frac{q(\tilde{u}_\varepsilon)}{q(u_\varepsilon)} \cdot \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} \right].
\]
We evaluate the limit inferior of each factored term as $\varepsilon \to 0$. The first term converges to $1$ since $q(u_\varepsilon) \sim \log(1/\varepsilon)$ by the definition of $u_\varepsilon$. For the remaining terms, observe that $\|y_\varepsilon\|^{1/\gamma} \to \infty$ because $\|y_\varepsilon\| \to \infty$ and $\gamma > 0$, so the effective scale $\tilde{u}_\varepsilon = u_\varepsilon \|y_\varepsilon\|^{1/\gamma} \to \infty$. Since $q \in RV(\alpha)$ with $\alpha > 0$ and the multiplier $\|y_\varepsilon\|^{1/\gamma} \to \infty$, the ratio $q(\tilde{u}_\varepsilon)/q(u_\varepsilon) \to +\infty$ by the definition of regular variation. Finally, we show that the LDP ratio in the last term is strictly positive. By the definition of the limit inferior, there exists a subsequence $\{d_{\varepsilon_k}\}$ of $\{d_\varepsilon\}$ such that
\[
\liminf_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} = \lim_{k \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_{\varepsilon_k})^\gamma d_{\varepsilon_k}, z) > 0\})}{-q(\tilde{u}_{\varepsilon_k})}.
\]
Furthermore, since $\{d_{\varepsilon_k}\}$ lies on the compact unit sphere, there exists a further subsequence $\{d_{\varepsilon_{k_\ell}}\}$ of $\{d_{\varepsilon_k}\}$ converging to some limit point $d_0$. By Proposition 6.13, this implies
\[
\liminf_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} = \lim_{\ell \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_{\varepsilon_{k_\ell}})^\gamma d_{\varepsilon_{k_\ell}}, z) > 0\})}{-q(\tilde{u}_{\varepsilon_{k_\ell}})} \geq \liminf_{\ell \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_{\varepsilon_{k_\ell}})^\gamma d_{\varepsilon_{k_\ell}}, z) > 0\})}{-q(\tilde{u}_{\varepsilon_{k_\ell}})} \geq I(d_0). \tag{107}
\]
By Lemma 6.12, $I(d_0) > 0$ holds since $d_0$ is a non-zero element lying on the unit sphere of $\mathcal{X}^{\infty}_{\gamma}$. Consequently, we have a product of three terms where the first converges to $1$, the second diverges to $+\infty$, and the limit inferior of the third is strictly positive.
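The super-multiplicativity fact invoked in the next step, $\liminf_n (a_n b_n) \geq (\liminf_n a_n)(\liminf_n b_n)$ for nonnegative sequences, can be illustrated numerically. A minimal sketch with arbitrary oscillating sequences (not quantities from the proof); the inequality can be strict, as it is here:

```python
# liminf of a product of nonnegative sequences dominates the product of liminfs.
# a_n and b_n are arbitrary oscillating examples chosen for illustration.
N = 10_000
a = [2.0 + (-1) ** n for n in range(N)]   # oscillates between 3 and 1; liminf = 1
b = [2.0 - (-1) ** n for n in range(N)]   # oscillates between 1 and 3; liminf = 1
prod = [x * y for x, y in zip(a, b)]      # identically (4 - 1) = 3

# Approximate each liminf by the minimum over a late tail of the sequence.
tail = slice(N // 2, N)
liminf_a = min(a[tail])
liminf_b = min(b[tail])
liminf_prod = min(prod[tail])
print(liminf_prod, liminf_a * liminf_b)   # prints: 3.0 1.0
```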
Given that all three terms are nonnegative, we can apply the super-multiplicativity of the limit inferior to obtain
\[
\liminf_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} \geq \liminf_{\varepsilon \to 0} \frac{q(u_\varepsilon)}{\log(1/\varepsilon)} \cdot \liminf_{\varepsilon \to 0} \frac{q(\tilde{u}_\varepsilon)}{q(u_\varepsilon)} \cdot \liminf_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} = +\infty, \tag{108}
\]
establishing the feasibility guarantee.

Case (iib). We show by contradiction that this case cannot occur. Since $u_\varepsilon \|y_\varepsilon\|^{1/\gamma} \geq 0$, by passing to a subsequence if necessary, we may assume that $\lim_{\varepsilon \to 0} u_\varepsilon \|y_\varepsilon\|^{1/\gamma}$ exists in $[0, +\infty]$. Thus, we consider the following three exhaustive subcases:

(1) $\lim_{\varepsilon \to 0} u_\varepsilon \|y_\varepsilon\|^{1/\gamma} = +\infty$;
(2) $\lim_{\varepsilon \to 0} u_\varepsilon \|y_\varepsilon\|^{1/\gamma} = L \in (0, +\infty)$;
(3) $\lim_{\varepsilon \to 0} u_\varepsilon \|y_\varepsilon\|^{1/\gamma} = 0$.

Case (iib1). We show by contradiction that the violation probability decays too slowly to be consistent with the feasibility guarantee of the scenario approach. Since $x_\varepsilon$ solves (SSP$_{N,s}$), Theorem 3.2 yields
\[
\limsup_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} \geq s^{-\alpha} > 0. \tag{109}
\]
We derive an upper bound on the left-hand side that contradicts this inequality. As in Case (iia), we use the effective scale $\tilde{u}_\varepsilon = u_\varepsilon \|y_\varepsilon\|^{1/\gamma}$ and write $x_\varepsilon = (\tilde{u}_\varepsilon)^\gamma d_\varepsilon$, so that the limit superior of the ratio $\log V(x_\varepsilon)/\log \varepsilon$ can be decomposed as follows:
\[
\limsup_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} = \limsup_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-\log(1/\varepsilon)} = \limsup_{\varepsilon \to 0} \left[ \frac{q(u_\varepsilon)}{\log(1/\varepsilon)} \cdot \frac{q(\tilde{u}_\varepsilon)}{q(u_\varepsilon)} \cdot \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} \right].
\]
Next, we analyze the asymptotic behavior of each factor in turn. The first term converges to $1$. For the second term, since $\gamma < 0$ and $\|y_\varepsilon\| \to \infty$, the multiplier $\|y_\varepsilon\|^{1/\gamma} \to 0$. Since $q \in RV(\alpha)$ with $\alpha > 0$, applying Definition 2.1 yields $q(\tilde{u}_\varepsilon)/q(u_\varepsilon) \to 0$. Regarding the third term, we establish that its limit superior is bounded above by a finite constant.
By the definition of the limit superior, there exists a subsequence $\{d_{\varepsilon_k}\}$ of $\{d_\varepsilon\}$ such that
\[
\limsup_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} = \lim_{k \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_{\varepsilon_k})^\gamma d_{\varepsilon_k}, z) > 0\})}{-q(\tilde{u}_{\varepsilon_k})}.
\]
Because the normalized vectors $\{d_{\varepsilon_k}\}$ reside on the compact unit sphere, we can extract a further subsequence $\{d_{\varepsilon_{k_\ell}}\}$ of $\{d_{\varepsilon_k}\}$ converging to some limit point $d_0$. Applying Proposition 6.13 to this sequence yields
\[
\limsup_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} = \lim_{\ell \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_{\varepsilon_{k_\ell}})^\gamma d_{\varepsilon_{k_\ell}}, z) > 0\})}{-q(\tilde{u}_{\varepsilon_{k_\ell}})} \leq \limsup_{\ell \to \infty} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_{\varepsilon_{k_\ell}})^\gamma d_{\varepsilon_{k_\ell}}, z) > 0\})}{-q(\tilde{u}_{\varepsilon_{k_\ell}})} \leq I(d_0). \tag{110}
\]
Lemma 6.12 guarantees that $I(d_0)$ is bounded above by a finite constant, as $d_0$ is a non-zero element on the unit sphere of $\mathcal{X}^{\infty}_{\gamma}$. We have thus decomposed the original ratio into a product of three terms: the first converges to $1$, the second converges to $0$, and the limit superior of the last is bounded above by a constant. Because all three factors are nonnegative, we apply the sub-multiplicativity of the limit superior to obtain
\[
\limsup_{\varepsilon \to 0} \frac{\log V(x_\varepsilon)}{\log \varepsilon} \leq \limsup_{\varepsilon \to 0} \frac{q(u_\varepsilon)}{\log(1/\varepsilon)} \cdot \limsup_{\varepsilon \to 0} \frac{q(\tilde{u}_\varepsilon)}{q(u_\varepsilon)} \cdot \limsup_{\varepsilon \to 0} \frac{\log \mathbb{P}_\xi(\{z : g((\tilde{u}_\varepsilon)^\gamma d_\varepsilon, z) > 0\})}{-q(\tilde{u}_\varepsilon)} = 0, \tag{111}
\]
which contradicts (109).

Case (iib2). Since the sequence of normalized vectors $\{d_\varepsilon\}$ lies on the compact unit sphere, by passing to a further subsequence if necessary, we assume that $\lim_{\varepsilon \to 0} d_\varepsilon = \bar{d}$ for some unit vector $\bar{d}$. Recalling that the effective scale $\tilde{u}_\varepsilon = u_\varepsilon \|y_\varepsilon\|^{1/\gamma} \to L$ by the hypothesis of this subcase, we have $x_\varepsilon = (\tilde{u}_\varepsilon)^\gamma d_\varepsilon \to L^\gamma \bar{d}$. Since $\mathcal{X}$ is closed, $L^\gamma \bar{d} \in \mathcal{X}$.
The strategy is to leverage the scenario feasibility guarantee to show that $g$ is nonpositive along a limiting direction, and then extend this via convexity and the asymptotic homogeneity (A1) to contradict Assumption (A4). We define the limiting violation set
\[
\mathcal{V} := \{z : g((L/s)^\gamma \bar{d}, z) > 0\}.
\]
Consider an arbitrary point $z \in \mathcal{V}$. Since $(\tilde{u}_\varepsilon/s)^\gamma d_\varepsilon \to (L/s)^\gamma \bar{d}$ and $g(\cdot, z)$ is closed (i.e., lower semicontinuous) for each fixed $z$, we have
\[
\liminf_{\varepsilon \to 0} g\left( \left(\frac{\tilde{u}_\varepsilon}{s}\right)^{\gamma} d_\varepsilon, z \right) \geq g((L/s)^\gamma \bar{d}, z) > 0.
\]
This implies that, for all sufficiently small $\varepsilon$, $z$ belongs to the event
\[
\mathcal{V}_\varepsilon := \left\{ z : g\left( \left(\frac{\tilde{u}_\varepsilon}{s}\right)^{\gamma} d_\varepsilon, z \right) > 0 \right\}.
\]
Consequently, we have the inclusion $\mathcal{V} \subseteq \liminf_{\varepsilon \to 0} \mathcal{V}_\varepsilon$ (see Footnote 4 for the limit inferior of sets). Invoking Fatou's lemma, we obtain
\[
\mathbb{P}_\xi(\mathcal{V}) \leq \liminf_{\varepsilon \to 0} \mathbb{P}_\xi(\mathcal{V}_\varepsilon). \tag{112}
\]
On the other hand, observing that $(\tilde{u}_\varepsilon/s)^\gamma d_\varepsilon = (\tilde{u}_\varepsilon)^\gamma d_\varepsilon/s^\gamma = x_\varepsilon/s^\gamma$, we recognize $\mathcal{V}_\varepsilon = \{z : g(x_\varepsilon/s^\gamma, z) > 0\}$ as the violation event of the scaled solution. Since $x_\varepsilon$ solves (SSP$_{N,s}$) with $N \geq N(\varepsilon^{s^{-\alpha}}, \beta)$ samples, Theorem 3.2 ensures that $\mathbb{P}_\xi(\mathcal{V}_\varepsilon) \leq \varepsilon^{s^{-\alpha}}$. Taking the limit as $\varepsilon \to 0$ in (112), it follows that $\mathbb{P}_\xi(\mathcal{V}) = 0$. Since $g((L/s)^\gamma \bar{d}, \cdot)$ is continuous, the set $\mathcal{V}$ is open (as the preimage of $(0, \infty)$ under a continuous function). An open subset of the support $\Xi$ that carries zero probability must be empty, so $\mathcal{V} \cap \Xi = \emptyset$, which gives
\[
g((L/s)^\gamma \bar{d}, z) \leq 0, \quad \forall z \in \Xi.
\]
Let $\hat{y} := (L/s)^\gamma \bar{d}$. We now extend the nonpositivity of $g$ from the fixed point $\hat{y}$ to all points $\{u^\gamma \hat{y} : u \geq 1\}$, which trace a path from $\hat{y}$ toward $\mathbf{0}$ (as $u \to \infty$, since $\gamma < 0$). Since $\gamma < 0$, Assumption (A5) guarantees that $g(\mathbf{0}, z) \leq 0$ for all $z \in \Xi$. For any $u \geq 1$, $u^\gamma \in (0, 1]$ since $\gamma < 0$, so $u^\gamma \hat{y} = u^\gamma \hat{y} + (1 - u^\gamma) \mathbf{0}$ is a convex combination of $\hat{y}$ and $\mathbf{0}$. Because $g(\cdot, z)$ is convex,
\[
g(u^\gamma \hat{y}, z) \leq u^\gamma g(\hat{y}, z) + (1 - u^\gamma) g(\mathbf{0}, z) \leq \max\{g(\hat{y}, z), g(\mathbf{0}, z)\} \leq 0, \quad \forall z \in \Xi.
\]
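The endpoint bound used here, that a convex function evaluated at a convex combination of two points is dominated by the maximum of its endpoint values, can be checked numerically. A minimal sketch, assuming an arbitrary convex test function chosen for illustration (not the constraint function $g$ from the paper):

```python
import random

# For convex g: g(theta*x + (1-theta)*y) <= theta*g(x) + (1-theta)*g(y)
#                                        <= max(g(x), g(y)).
# In particular, if both endpoint values are <= 0, so is the value at the
# combination -- the step used in the proof above.
g = lambda x: x * x - 1.0  # arbitrary convex example

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    theta = random.random()
    mix = theta * x + (1.0 - theta) * y
    assert g(mix) <= theta * g(x) + (1.0 - theta) * g(y) + 1e-9
    assert g(mix) <= max(g(x), g(y)) + 1e-9
print("convexity endpoint bound verified on 1000 random triples")
```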
Finally, fix any $w \in \Xi$. Setting $x_u := u^\gamma \hat{y}$, we have $x_u/u^\gamma = \hat{y}$ and, for all $u \geq s$, $x_u = (uL/s)^\gamma \bar{d}$ lies on the segment $[\mathbf{0}, L^\gamma \bar{d}] \subset \mathcal{X}$ (by closedness and convexity), so $x_u \in \mathcal{X}$. By Assumption (A1), for any sequence $\{z_u\} \subset \Xi$ such that $\lim_{u \to \infty} z_u/u = w$, we obtain
\[
g^*(\hat{y}, w) = \lim_{u \to \infty} \frac{g(u^\gamma \hat{y}, z_u)}{u^\rho} \leq 0. \tag{113}
\]
The inequality holds because $g(u^\gamma \hat{y}, z_u) \leq 0$ for all sufficiently large $u$ (since $z_u \in \Xi$) while $u^\rho > 0$. Since this holds for all $w \in \Xi$, we conclude that $\{w \in \Xi : g^*(\hat{y}, w) > 0\} = \emptyset$. Moreover, $\hat{y} \in \mathcal{X}^{\infty}_{\gamma} \setminus \{\mathbf{0}\}$: the sequence $x_u = u^\gamma \hat{y} \in \mathcal{X}$ with $u \to \infty$ satisfies $x_u/u^\gamma = \hat{y}$, witnessing membership via Definition 2.5, and $\hat{y} \neq \mathbf{0}$ since $L > 0$. This contradicts Assumption (A4).

Footnote 4: The limit inferior of a sequence of subsets $\{A_u\}$ is defined as $\liminf_{u \to \infty} A_u := \{z \in \mathbb{R}^m : \liminf_{u \to \infty} \mathbf{1}_{A_u}(z) = 1\}$. Here, for a given set $A \subset \mathbb{R}^m$, $\mathbf{1}_A(z)$ is an indicator function which equals $1$ if $z \in A$, and $0$ otherwise.

Case (iib3). Unlike Case (iib2), where $\tilde{u}_\varepsilon \to L > 0$ ensures that $x_\varepsilon = (\tilde{u}_\varepsilon)^\gamma d_\varepsilon$ converges to the finite limit $L^\gamma \bar{d} \in \mathcal{X}$, here $\tilde{u}_\varepsilon \to 0$ with $\gamma < 0$ forces $\|x_\varepsilon\| = (\tilde{u}_\varepsilon)^\gamma \to \infty$, so there is no finite limiting point. We instead exploit the divergence of $\|x_\varepsilon\|$ directly. Recall from Case (ii) that $d_\varepsilon = y_\varepsilon/\|y_\varepsilon\|$. Since $x_\varepsilon = u_\varepsilon^\gamma y_\varepsilon$ with $u_\varepsilon^\gamma > 0$, we have $x_\varepsilon/\|x_\varepsilon\| = d_\varepsilon$. By passing to a further subsequence if necessary, as in Case (iib2), we may assume that $d_\varepsilon \to \bar{d}$ for some unit vector $\bar{d}$. Fix any $t > 0$. Since $\gamma < 0$ and $s \geq 1$, the scaled solution satisfies $\|x_\varepsilon/s^\gamma\| = \|x_\varepsilon\| \cdot s^{-\gamma} \to \infty$, so $\|x_\varepsilon/s^\gamma\| > t$ for all sufficiently small $\varepsilon > 0$. Noting that $x_\varepsilon/s^\gamma$ has the same direction as $x_\varepsilon$, namely $d_\varepsilon$, we write $t d_\varepsilon$ as a convex combination of $\mathbf{0}$ and $x_\varepsilon/s^\gamma$:
\[
t d_\varepsilon = \left( 1 - \frac{t}{\|x_\varepsilon/s^\gamma\|} \right) \mathbf{0} + \frac{t}{\|x_\varepsilon/s^\gamma\|} \cdot \frac{x_\varepsilon}{s^\gamma}.
\]
By the convexity of $g(\cdot, z)$ and Assumption (A5) (which gives $g(\mathbf{0}, z) \leq 0$), we obtain
\[
g(t d_\varepsilon, z) \leq \left( 1 - \frac{t}{\|x_\varepsilon/s^\gamma\|} \right) g(\mathbf{0}, z) + \frac{t}{\|x_\varepsilon/s^\gamma\|} \, g\!\left( \frac{x_\varepsilon}{s^\gamma}, z \right) \leq \frac{t}{\|x_\varepsilon/s^\gamma\|} \, g\!\left( \frac{x_\varepsilon}{s^\gamma}, z \right). \tag{114}
\]
Since $t/\|x_\varepsilon/s^\gamma\| > 0$, this inequality implies the set inclusion
\[
\{z : g(t d_\varepsilon, z) > 0\} \subseteq \left\{ z : g\!\left( \frac{x_\varepsilon}{s^\gamma}, z \right) > 0 \right\}. \tag{115}
\]
We now show $g(t \bar{d}, z) \leq 0$ for all $z \in \Xi$, proceeding analogously to Case (iib2). Consider an arbitrary $z \in \Xi$ with $g(t \bar{d}, z) > 0$. Since $d_\varepsilon \to \bar{d}$ and $g(\cdot, z)$ is closed (i.e., lower semicontinuous),
\[
\liminf_{\varepsilon \to 0} g(t d_\varepsilon, z) \geq g(t \bar{d}, z) > 0.
\]
Hence $g(t d_\varepsilon, z) > 0$ for all sufficiently small $\varepsilon$, so it follows that $z \in \liminf_{\varepsilon \to 0} \{z' : g(t d_\varepsilon, z') > 0\}$. Invoking Fatou's lemma and the set inclusion (115),
\[
\mathbb{P}_\xi(\{z : g(t \bar{d}, z) > 0\}) \leq \liminf_{\varepsilon \to 0} \mathbb{P}_\xi(\{z : g(t d_\varepsilon, z) > 0\}) \leq \liminf_{\varepsilon \to 0} \mathbb{P}_\xi\left( \left\{ z : g\!\left( \frac{x_\varepsilon}{s^\gamma}, z \right) > 0 \right\} \right) \leq \liminf_{\varepsilon \to 0} \varepsilon^{s^{-\alpha}} = 0. \tag{116}
\]
The first inequality combines the lower semicontinuity of $g(\cdot, z)$ and Fatou's lemma (exactly as in Case (iib2)), the second uses the set inclusion (115), and the third follows from Theorem 3.2 since $x_\varepsilon$ solves (SSP$_{N,s}$) with $N \geq N(\varepsilon^{s^{-\alpha}}, \beta)$ samples. Since $g(t \bar{d}, \cdot)$ is continuous, the set $\{z : g(t \bar{d}, z) > 0\}$ is open; as before, an open subset of $\Xi$ with zero probability must be empty, so
\[
g(t \bar{d}, z) \leq 0, \quad \forall z \in \Xi. \tag{117}
\]
As $t > 0$ was chosen arbitrarily, this nonpositivity holds for all $t > 0$.

Finally, we derive the desired contradiction via Assumption (A1), following the same structure as Case (iib2). Fix any $w \in \Xi$ and choose a sequence $\{z_u\} \subset \Xi$ with $\lim_{u \to \infty} z_u/u = w$. Setting $x_u := u^\gamma \bar{d}$, we have $x_u/u^\gamma = \bar{d}$, so the sequences $\{x_u\}$ and $\{z_u\}$ satisfy (6) with limit $(\bar{d}, w)$.
Moreover, $x_u \in \mathcal{X}$ for all $u > 0$: for any $c > 0$, and for $\varepsilon$ small enough that $\|x_\varepsilon\| > c$, the point $c d_\varepsilon = (c/\|x_\varepsilon\|) x_\varepsilon + (1 - c/\|x_\varepsilon\|) \mathbf{0}$ is a convex combination of $x_\varepsilon \in \mathcal{X}$ and $\mathbf{0} \in \mathcal{X}$, so $c d_\varepsilon \in \mathcal{X}$; taking $\varepsilon \to 0$ and using the closedness of $\mathcal{X}$ gives $c \bar{d} \in \mathcal{X}$ for all $c > 0$. Assumption (A1) then yields
\[
g^*(\bar{d}, w) = \lim_{u \to \infty} \frac{g(u^\gamma \bar{d}, z_u)}{u^\rho} \leq 0, \tag{118}
\]
where the inequality holds because $g(u^\gamma \bar{d}, z_u) \leq 0$ for all $u > 0$. Since this holds for all $w \in \Xi$, $\{w \in \Xi : g^*(\bar{d}, w) > 0\} = \emptyset$. Moreover, $\bar{d} \in \mathcal{X}^{\infty}_{\gamma} \setminus \{\mathbf{0}\}$: to see this, note that the sequence $x_u = u^\gamma \bar{d} \in \mathcal{X}$ with $\lambda_u = u \to \infty$ satisfies $x_u/\lambda_u^\gamma = \bar{d}$, witnessing membership in $\mathcal{X}^{\infty}_{\gamma}$ via Definition 2.5, and $\bar{d} \neq \mathbf{0}$ since it is a unit vector. This contradicts Assumption (A4).

6.6 Proofs of Section 4

Proof of Proposition 4.3. Let $y \in \mathcal{X}^{\infty}_{\gamma}$ and $w \in \Xi$ be arbitrary. Consider any pair of sequences $\{x_u\} \subset \mathcal{X}$ and $\{z_u\} \subset \Xi$ satisfying (6). We introduce the sequences $y_u := x_u/u^\gamma$ and $w_u := z_u/u$. By construction, these sequences converge to $y$ and $w$, respectively, as $u \to \infty$. Substituting $x_u = u^\gamma y_u$ and $z_u = u w_u$ into the expression for $g(x_u, z_u)$ given in (14), the ratio in (7) becomes
\[
\frac{g(x_u, z_u)}{u^\rho} = \sum_{(a,b) \in \mathcal{J}} C_{a,b} \, u^{p_{a,b}(\gamma) - \rho} \, y_u^a w_u^b, \tag{119}
\]
where $p_{a,b}(\gamma)$ is the exponent of each term, defined in (17). Algorithm 1 determines $\rho$ as the maximum scaling exponent, i.e., $\rho = \max_{(a,b) \in \mathcal{J}} p_{a,b}(\gamma)$; see line 12 in Algorithm 1. Consequently, for every index $(a,b) \in \mathcal{J}$, the exponent of $u$ satisfies $p_{a,b}(\gamma) - \rho \leq 0$. We partition the index set $\mathcal{J}$ into the active set $\mathcal{J}^* := \{(a,b) \in \mathcal{J} : p_{a,b}(\gamma) = \rho\}$ and the inactive set $\mathcal{J} \setminus \mathcal{J}^*$. The limit in (7) can then be analyzed as follows:
\[
\lim_{u \to \infty} \frac{g(x_u, z_u)}{u^\rho} = \sum_{(a,b) \in \mathcal{J}^*} C_{a,b} \lim_{u \to \infty} \left( y_u^a w_u^b \right) + \sum_{(a,b) \in \mathcal{J} \setminus \mathcal{J}^*} C_{a,b} \lim_{u \to \infty} \left( u^{p_{a,b}(\gamma) - \rho} y_u^a w_u^b \right). \tag{120}
\]
Since power functions are continuous, and $y_u \to y$, $w_u \to w$, it follows that $\lim_{u \to \infty} y_u^a w_u^b = y^a w^b$. For the terms in $\mathcal{J} \setminus \mathcal{J}^*$, $u^{p_{a,b}(\gamma) - \rho} \to 0$ as $u \to \infty$. Therefore, we have
\[
\lim_{u \to \infty} \frac{g(x_u, z_u)}{u^\rho} = \sum_{(a,b) \in \mathcal{J}^*} C_{a,b} \, y^a w^b = g^*(y, w). \tag{121}
\]
This confirms that the convergence in (7) holds for any sequences satisfying (6), thereby verifying condition (A1). Finally, we note that Algorithm 1 explicitly verifies conditions (A2) to (A5) before returning the tuple $(\gamma, \rho, g^*)$. Since (A1) is established by the argument above, Assumption 2.6 is satisfied.

Proof of Proposition 4.8. We demonstrate that $g(x, z)$ satisfies all conditions of Assumption 2.6 with parameters $(\gamma, \rho + \rho_h)$ and the limit function $g^*(y, w) = f^*(y, w) h^*(y, w)$ for all $y \in \mathcal{X}^{\infty}_{\gamma}$ and $w \in \Xi$.

(A1). Fix arbitrary $y \in \mathcal{X}^{\infty}_{\gamma}$ and $w \in \Xi$. Consider any sequences $\{x_u\}$ and $\{z_u\}$ satisfying $\lim_{u \to \infty} x_u/u^\gamma = y$ and $\lim_{u \to \infty} z_u/u = w$, respectively. Given that both $f$ and $h$ satisfy condition (A1) with parameters $(\gamma, \rho)$ and $(\gamma, \rho_h)$, respectively, we obtain
\[
\lim_{u \to \infty} \frac{g(x_u, z_u)}{u^{\rho + \rho_h}} = \lim_{u \to \infty} \frac{f(x_u, z_u)}{u^\rho} \cdot \frac{h(x_u, z_u)}{u^{\rho_h}} = \lim_{u \to \infty} \frac{f(x_u, z_u)}{u^\rho} \cdot \lim_{u \to \infty} \frac{h(x_u, z_u)}{u^{\rho_h}} = f^*(y, w) \cdot h^*(y, w) = g^*(y, w). \tag{122}
\]
Moreover, continuity of $f^*$ and $h^*$ on $\mathcal{X}^{\infty}_{\gamma} \times \Xi$ implies that their product $g^*$ is continuous as well.

(A2). As $f$ and $g$ are defined on the same domain $\mathcal{X}$ and share the same $\gamma$, these conditions are satisfied by the hypothesis that $f$ satisfies Assumption 2.6.

(A3). Fix any $y \in \mathcal{X}^{\infty}_{\gamma} \setminus \{\mathbf{0}\}$. Since $f^*(y, \mathbf{0}) < 0$ by condition (A3) and $h^*(y, w) > 0$ for all $w \in \Xi$, we have
\[
g^*(y, \mathbf{0}) = f^*(y, \mathbf{0}) h^*(y, \mathbf{0}) < 0. \tag{123}
\]

(A4). Fix any $y \in \mathcal{X}^{\infty}_{\gamma} \setminus \{\mathbf{0}\}$. Given that $f$ satisfies condition (A4), the set $\{w \in \Xi : f^*(y, w) > 0\}$ is nonempty. Let $w_0$ be an element of this set.
Strict positivity of $h^*$ ensures that
\[
g^*(y, w_0) = h^*(y, w_0) f^*(y, w_0) > 0. \tag{124}
\]
Therefore, $\{w \in \Xi : g^*(y, w) > 0\}$ is nonempty.

(A5). Suppose $\gamma < 0$. By hypothesis, $f$ satisfies (A5), meaning $f(\mathbf{0}, z) \leq 0$ for all $z \in \Xi$. Since $h$ is strictly positive on its domain, we have
\[
g(\mathbf{0}, z) = f(\mathbf{0}, z) h(\mathbf{0}, z) \leq 0, \quad \forall z \in \Xi. \tag{125}
\]

Proof of Proposition 4.12. We verify that $g$ satisfies (A1) to (A5) of Assumption 2.6.

(A1). Fix arbitrary $y \in \mathcal{X}^{\infty}_{\gamma}$ and $w \in \Xi$. Let $\{x_u\} \subset \mathcal{X}$ and $\{z_u\} \subset \Xi$ be any sequences satisfying (6). Since the limit operator commutes with the maximum over a finite set, we have
\[
\lim_{u \to \infty} \frac{g(x_u, z_u)}{u^\rho} = \lim_{u \to \infty} \max_{i=1,\dots,K} \frac{g_i(x_u, z_u)}{u^\rho} = \max_{i=1,\dots,K} \lim_{u \to \infty} \frac{g_i(x_u, z_u)}{u^\rho}. \tag{126}
\]
By the hypothesis that each $g_i$ satisfies (A1), the inner limits converge to $g_i^*(y, w)$. Thus, the limit equals $g^*(y, w)$, satisfying (A1).

(A2). Since the domain $\mathcal{X}$ remains unchanged, these conditions are inherited directly.

(A3). By hypothesis, $g_i^*(y, \mathbf{0}) < 0$ for all $i = 1, \dots, K$ and any $y \in \mathcal{X}^{\infty}_{\gamma} \setminus \{\mathbf{0}\}$. It follows immediately that $g^*(y, \mathbf{0}) = \max_{i=1,\dots,K} g_i^*(y, \mathbf{0}) < 0$.

(A4). Fix any $y \in \mathcal{X}^{\infty}_{\gamma} \setminus \{\mathbf{0}\}$. For each $i$, let $\mathcal{W}_i = \{w \in \Xi : g_i^*(y, w) > 0\}$. By hypothesis, $\mathcal{W}_i \neq \emptyset$. Since $g^*(y, w) \geq g_i^*(y, w)$ for all $i$, we have the inclusion $\bigcup_{i=1}^{K} \mathcal{W}_i \subseteq \{w \in \Xi : g^*(y, w) > 0\}$. This guarantees that the latter set is nonempty.

(A5). Suppose $\gamma < 0$. By hypothesis, each $g_i$ satisfies (A5), meaning $g_i(\mathbf{0}, z) \leq 0$ for all $z \in \Xi$. Consequently, $g(\mathbf{0}, z) = \max_{i=1,\dots,K} g_i(\mathbf{0}, z) \leq 0$ for all $z \in \Xi$.