A property of log-concave and weakly-symmetric distributions for two step approximations of random variables


Authors: Mihaela-Adriana Nistor, Ionel Popescu

ABSTRACT. In this paper we introduce a generalization of classical risk measures in which the risk is represented by a step function taking two values, corresponding to two endogenously determined market regimes. This extends the traditional framework where risk measures map random variables to single real numbers. For the quadratic loss function, we study the optimization problem of determining the optimal regime threshold and corresponding values. In the case of log-concave distributions we give conditions for the uniqueness of the regime change. We treat the one-dimensional case as well as the multi-dimensional case for elliptical distributions. We demonstrate the necessity of convexity through counterexamples.

Keywords: Risk measures, regime-switching, log-concave distributions, convex optimization, integral inequalities

MSC Classification: 91G70, 60E15, 26D15, 49K30, 90C25

1. INTRODUCTION

Risk measurement is a central topic in probability, optimization, and mathematical finance, where the loss of a financial position is modeled by a real-valued random variable and quantified via a real-valued functional. A foundational axiomatic framework for such functionals was introduced by Artzner et al. [6] through the notion of coherent risk measures, characterized by monotonicity, translation invariance, positive homogeneity, and subadditivity. Subsequent work relaxed positive homogeneity and led to convex risk measures (see, e.g., [11, 13]), establishing deep connections with convex analysis and optimization (cf. [12, 21]). For broader discussions and comparisons of risk measures and for further perspectives (including dynamic aspects and statistical considerations), see e.g. [18, 9, 24, 8, 5, 14, 20].
Classical risk measures such as Value-at-Risk (VaR) and Expected Shortfall (ES) compress the distribution of losses into a single number [2]. While such scalar summaries are useful in practice, they do not directly reflect the presence of different market states (e.g., calm versus stressed regimes) that are commonly modeled via regime-switching dynamics [15, 4, 16, 17] and motivate state-dependent or dynamic risk measures [1]. In this paper we propose and analyze a refinement of the classical setup in which risk is represented not by a single scalar but by a simple function with finitely many regimes. Concretely, given a loss function G : R → [0, ∞) and a class of measurable functions R̃, we consider the general approximation problem

(1.1)  f* = argmin_{f ∈ R̃} E[G(X − f(X))].

The class R̃ we consider here consists of two-regime (two-step) functions,

f(x) = α 1_{(−∞,t]}(x) + β 1_{(t,∞)}(x),

where the regime levels α, β ∈ R and the threshold t ∈ R are chosen optimally and depend on the law of X. Our primary focus is on the loss G(x) = x^2 and the problem of finding an optimal two-step approximation of X in an L^2 sense.

This formulation is closely related to optimal partitioning and quantization, where one seeks a partition and representative values minimizing an expected distortion; a central issue there is uniqueness of the optimal partition (or of the corresponding Lloyd fixed point), which can fail without structural assumptions. For log-concave (equivalently ILR) densities, uniqueness and convergence properties are well understood for broad classes of convex losses; see Trushkin [22, 23] and the statistical treatment of unique optimal partitions by Mease and Nair [19] (see also Eubank's survey [10]).
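To make the two-step problem (1.1) concrete, here is a small numerical sketch (our illustration, not part of the paper). For the quadratic loss and X ~ N(0,1), using E[X 1_{X ≤ t}] = −φ(t) and P(X ≤ t) = Φ(t), the reduced one-parameter criterion derived in Section 2 takes the closed form f_X(t) = φ(t)^2 / (Φ(t)(1 − Φ(t))); a grid search confirms that its unique maximizer is the mean/median t = 0, with value φ(0)^2/(1/4) = 2/π.

```python
import math

# Sketch (our illustration, not from the paper): for X ~ N(0,1),
# E[X 1_{X<=t}] = -phi(t) and P(X<=t) = Phi(t), so the centered criterion
#   f_X(t) = E[X, X<=t]^2 / (P(X<=t) P(X>t))
# becomes phi(t)^2 / (Phi(t)(1 - Phi(t))).

def phi(t):
    # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi(t):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def f_X(t):
    p = Phi(t)
    return phi(t) ** 2 / (p * (1 - p))

# Grid search: the unique maximizer should be the mean/median t = 0,
# with value phi(0)^2 / (1/4) = 2/pi ~ 0.6366.
grid = [i / 100 for i in range(-300, 301)]
best_t = max(grid, key=f_X)
print(best_t, f_X(best_t))
```

This is consistent with Theorem 1 below: the Gaussian is log-concave and weakly symmetric, so the maximizer is unique and equals the common mean/median.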
In the present paper we focus on the two-regime case and obtain a more explicit, quantitative description: for the quadratic loss the optimization reduces to maximizing an explicit one-parameter functional t ↦ f_X(t), and under log-concavity we give conditions ensuring a unique maximizer; in the weakly-symmetric log-concave case we further identify this maximizer as the common mean/median. Our analysis focuses on log-concave distributions, a class that plays a fundamental role in probability and analysis, and enjoys strong monotonicity and stability properties (see [3, 7]). In this paper we take a slightly different perspective: we focus on the two-regime case, characterize when the optimal threshold is unique, and identify cases where it is also explicit (namely, the common mean/median) for weakly-symmetric log-concave laws.

Description of the main results.
• One dimension (Sections 2–2.2). We start from the general approximation problem (2.1) and show that, for convex losses, once the regime levels α, β are fixed, the optimal regime set is a half-line (Proposition 1). For the quadratic loss this reduces the optimization to thresholds t ∈ R and yields the one-parameter criterion

f_X(t) := E[X, X ≤ t]^2 / P(X ≤ t) + E[X, X > t]^2 / P(X > t)

(see (2.5) and (2.7)). Theorem 1 then identifies conditions ensuring that f_X has a unique maximizer and, in the weakly-symmetric log-concave case, proves that this maximizer is exactly the common mean/median t = µ. The treatment here is rather elementary and completely self-contained.
• Higher dimensions (Section 3). We extend the framework to X ∈ R^d and show that, for the quadratic loss, the optimal regime boundary is an affine hyperplane, so it is natural to optimize over halfspaces (Proposition 2 and (3.8)).
For centered elliptical laws with log-concave one-dimensional projections, the halfspace optimization reduces to the one-dimensional problem along projections, and the optimal direction is characterized via a Rayleigh quotient (Theorem 2).
• Sharpness and counterexamples (Section 3.2 and Section 3.3). We give explicit examples showing that without log-concavity/convexity the function f_X can have multiple global maximizers, and in dimension d ≥ 2 the halfspace functional need not be maximized at the centered cut even when E[X] = 0.

2. THE ONE DIMENSIONAL CASE

In this section we introduce the main concept in the one-dimensional case.

2.1. Problem setup. Let G : R → [0, ∞) be a loss function and consider

(2.1)  f* = argmin_{f ∈ R̃} E[G(X − f(X))],

where R̃ is a class of functions. Our main model is the one in which f = 1_A for some Borel measurable set A on the real line. The first result concerns the structure of the set A, which under convexity assumptions on G must essentially be a half-line.

Proposition 1. Assume that G : R → R is convex. Then for any fixed α, β ∈ R, a minimizer of

A ⟼ E[G(X − α) 1_A(X) + G(X − β) 1_{A^c}(X)]

over Borel sets A ⊂ R is given (up to P-null sets) by

A_{α,β} := {x ∈ R : G(x − α) ≤ G(x − β)}.

Moreover, A_{α,β} is an interval (possibly empty or all of R). In particular, unless α = β, A_{α,β} is a half-line.

Proof. Step 1: pointwise minimization. Fix α, β and write the objective as

E[G(X − β)] + E[(G(X − α) − G(X − β)) 1_A(X)].

Thus one minimizes by taking 1_A(x) = 1 exactly when G(x − α) − G(x − β) ≤ 0, i.e. by choosing A = A_{α,β}.

Step 2: half-line structure under convexity. Assume α < β and set δ := β − α > 0. For u ∈ R define H(u) := G(u + δ) − G(u).
By convexity of G, the map H is nondecreasing. Hence {u : H(u) ≤ 0} is an interval of the form (−∞, u_0] (or empty, or all of R), and translating back shows that A_{α,β} is a half-line. The case α > β is analogous, and if α = β then every A is optimal. □

This result justifies restricting our optimization to sets of the form A = (−∞, c], and thus to the class of functions f(x) = α 1_{(−∞,c]}(x) + β 1_{(c,∞)}(x). In the rest of the paper we restrict to the quadratic loss G(x) = x^2, so the optimization becomes

(2.2)  argmin_{α,β,c ∈ R} E[(X − α 1_{(−∞,c]}(X) − β 1_{(c,∞)}(X))^2].

Before we move on, we introduce the following convention.

Convention. For any event A, we write E[X, A] := E[X 1_A].

For fixed c (equivalently, fixed A), the optimal values are the conditional means. Indeed, setting the partial derivatives with respect to α and β to zero yields

(2.3)  α = E[X 1_{(−∞,c]}(X)] / P(X ≤ c) = E[X | X ≤ c],

(2.4)  β = E[X 1_{(c,∞)}(X)] / P(X > c) = E[X | X > c].

Plugging these back gives the reduced objective

h(c) = E[X^2] − E[X, X ≤ c]^2 / P(X ≤ c) − E[X, X > c]^2 / P(X > c).

Thus minimizing (2.2) is equivalent to maximizing

(2.5)  argmax_c ( E[X, X ≤ c]^2 / P(X ≤ c) + E[X, X > c]^2 / P(X > c) ).

2.2. Continuous and log-concave distributions in one dimension. In this section we treat a very special class of distributions for which the optimization problem (2.1) has a unique solution, which in some particular cases is just the mean. The model we treat here is that of random variables X which have a log-concave density and are weakly-symmetric.

Definition 1. We call a density ϕ : R → R log-concave if log(ϕ(x)) is concave. This translates into ϕ(x) = e^{−V(x)} where V : R → R is a convex function. We say that the density ϕ is weakly-symmetric if the mean of ϕ is also a median.
In other words, if µ = ∫_R x ϕ(x) dx, then ∫_{(−∞,µ]} ϕ(x) dx = 1/2.

2.3. Examples. If we assume that ϕ(x) = (1/Z_V) e^{−V(x)}, with Z_V = ∫ e^{−V(x)} dx the normalizing constant, one family of examples of weakly-symmetric densities consists of densities symmetric about 0, i.e.

(2.6)  V(x) = V(−x), for all x ∈ R.

If, in addition, V is convex, this gives examples of weakly-symmetric log-concave densities. Particular examples of weakly-symmetric densities are:

U(a, b): V(x) = log(b − a) for x ∈ [a, b], and ∞ otherwise;
N(µ, σ^2): V(x) = (x − µ)^2 / (2σ^2) (up to the normalizing constant);
Exp_2(1) (two-sided exponential): V(x) = |x − µ| (up to the normalizing constant).

The above examples are symmetric around some real number µ, i.e. they satisfy V(x + µ) = V(−x + µ) for all x ∈ R; in this case it is obvious that the median is also the mean. However, there are log-concave distributions which are weakly-symmetric without being symmetric. A typical example is the Weibull distribution with shape parameter k > 0, whose cumulative distribution function is F(x) = 1 − e^{−x^k} for x ≥ 0; for a very specific k the mean equals the median. The point is that the mean of this distribution is Γ(1 + 1/k) and the median is (ln 2)^{1/k}. One can show that the mean is equal to the median for a certain value of k: for k = 1 we have Γ(1 + 1) = 1 > ln 2, while for k → ∞ we have Γ(1 + 1/k) ≈ 1 − γ/k ≈ 1 − 0.577/k, where γ ≈ 0.577 is Euler's constant, and (ln 2)^{1/k} ≈ 1 + ln(ln 2)/k ≈ 1 − 0.367/k, which then means that for large k, Γ(1 + 1/k) < (ln 2)^{1/k}. By continuity the two must cross; numerical approximation gives k ≈ 3.439.

2.4. Settings and the result. We consider the probability measure

µ(dx) = (1/Z_V) e^{−V(x)} dx, where Z_V = ∫_R e^{−V(x)} dx,

and we consider the variable X having the distribution µ. Without loss of generality we assume that Z_V = 1, so ∫_{−∞}^{∞} e^{−V(x)} dx = 1.

2.5. Regularity assumptions.
In Section 2.2 we assume that X admits a density of the form ϕ(x) = e^{−V(x)} on R with V convex (so ϕ is log-concave) and ∫_R ϕ(x) dx = 1. We also assume E[X^2] < ∞ (hence E[|X|] < ∞), so all quantities in (2.7) are finite. In the proofs below we differentiate functions of the form t ↦ ∫_{−∞}^t h(x) ϕ(x) dx and t ↦ ∫_t^{∞} h(x) ϕ(x) dx. Since a log-concave density is locally bounded and locally integrable on the interior of its support, these functions are absolutely continuous and their derivatives are ±h(t) ϕ(t) for all t in the interior of the support. When P(X ≤ t) ∈ {0, 1}, the expression (2.7) is interpreted by continuity, and such t are irrelevant for the maximization problem.

The main result is the following.

Theorem 1. For a random variable X of mean µ, set

(2.7)  f_X(t) = E[X, X ≤ t]^2 / P(X ≤ t) + E[X, X > t]^2 / P(X > t).

(1) If X has a log-concave distribution, then f_X has a unique maximizer.
(2) If µ is a local maximum of f_X, then X must be weakly-symmetric.
(3) Assume that X has a weakly-symmetric log-concave density with mean µ. Then f_X is increasing on (−∞, µ] and decreasing on [µ, ∞). In particular, f_X(t) < f_X(µ) for t ≠ µ; thus the unique optimizer of (2.2) is c = µ, and the optimal two-step approximation is µ + β(−1_{(−∞,µ]}(X) + 1_{(µ,∞)}(X)) with β = E[|X − µ|].

2.6. Some integral inequalities for convex functions. Before giving the proof of Theorem 1 we establish the key result here, which is of independent interest.

Lemma 1. Let V : [0, ∞) → R be convex and assume

(2.8)  ∫_0^{∞} e^{−V(y)} dy < ∞,  ∫_0^{∞} y e^{−V(y)} dy < ∞.

Then

(2.9)  e^{−V(0)} ∫_0^{∞} y e^{−V(y)} dy ≤ ( ∫_0^{∞} e^{−V(y)} dy )^2.

Equality is attained for V(y) = V(0) + λy with λ > 0.

Proof.
Set, for convenience, w(y) := e^{−V(y)}, y ≥ 0. Since V is convex, it is locally Lipschitz on (0, ∞) and hence absolutely continuous on every interval [0, R]. In particular, the right derivative v(y) := V'_+(y) exists for every y ≥ 0, is nondecreasing, locally integrable, and satisfies

V(y) = V(0) + ∫_0^y v(s) ds,  y ≥ 0.

Consequently w is absolutely continuous on each [0, R], and for a.e. y, w'(y) = −v(y) w(y).

Boundary terms needed for integration by parts. From ∫_0^{∞} w < ∞ we have V(y) → ∞ as y → ∞ (otherwise w would not be integrable), so w(y) → 0. Moreover, convexity implies that v is nondecreasing; hence there exists R_0 such that v(y) ≥ 0 for all y ≥ R_0, so V is nondecreasing on [R_0, ∞) and thus w is nonincreasing there. For R ≥ 2R_0 we then have

∫_{R/2}^{R} w(y) dy ≥ (R/2) w(R),  hence  R w(R) ≤ 2 ∫_{R/2}^{∞} w(y) dy → 0 as R → ∞.

This justifies the vanishing of the boundary term R w(R) in the integration by parts below.

Two integration-by-parts identities. For any R > 0, using w' = −v w a.e. and absolute continuity,

∫_0^R v(y) w(y) dy = ∫_0^R −w'(y) dy = w(0) − w(R).

Letting R → ∞ and using w(R) → 0 gives

(2.10)  ∫_0^{∞} V'_+(y) e^{−V(y)} dy = e^{−V(0)}.

Similarly,

∫_0^R w(y) dy = [y w(y)]_0^R − ∫_0^R y w'(y) dy = R w(R) + ∫_0^R y v(y) w(y) dy.

Letting R → ∞ and using R w(R) → 0 yields

(2.11)  ∫_0^{∞} e^{−V(y)} dy = ∫_0^{∞} y V'_+(y) e^{−V(y)} dy.

Proof of (2.9). Denote

I := ∫_0^{∞} e^{−V(y)} dy,  J := ∫_0^{∞} y e^{−V(y)} dy.

Using (2.10) and (2.11), (2.9) is equivalent to

( ∫_0^{∞} y e^{−V(y)} dy )( ∫_0^{∞} V'_+(y) e^{−V(y)} dy ) ≤ ( ∫_0^{∞} e^{−V(y)} dy )( ∫_0^{∞} y V'_+(y) e^{−V(y)} dy ).
Expand both sides as double integrals and symmetrize:

( ∫_0^{∞} e^{−V(y_1)} dy_1 )( ∫_0^{∞} y_2 V'_+(y_2) e^{−V(y_2)} dy_2 ) − ( ∫_0^{∞} y_1 e^{−V(y_1)} dy_1 )( ∫_0^{∞} V'_+(y_2) e^{−V(y_2)} dy_2 )
  = (1/2) ∫_0^{∞} ∫_0^{∞} (y_1 − y_2)( V'_+(y_1) − V'_+(y_2) ) e^{−V(y_1)} e^{−V(y_2)} dy_1 dy_2.

Since V'_+ is nondecreasing, (y_1 − y_2)(V'_+(y_1) − V'_+(y_2)) ≥ 0 for all y_1, y_2, hence (2.9) follows. Equality in (2.9) forces V'_+ to be a.e. constant, i.e. V affine: V(y) = V(0) + λy; integrability implies λ > 0. □

2.7. The proof of Theorem 1.

Proof. We structure the proof according to the three statements of the theorem. We first reduce the proof to the case µ = 0. Let Y := X − µ. A direct check shows that for all t ∈ R,

f_X(t) = f_Y(t − µ) + µ^2.

In particular, argmax_{t ∈ R} f_X(t) = µ + argmax_{s ∈ R} f_Y(s). Therefore it is enough to prove the theorem for centered variables. In the rest of the proof we assume

(2.12)  E[X] = 0,

and then we can write

(2.13)  f_X(t) = E[X, X ≤ t]^2 / ( P(X ≤ t) P(X > t) ).

(1) (Uniqueness of the maximizer under convexity of V.) Assume that V is convex (equivalently, X has a log-concave density e^{−V}) and E[X] = 0. Write f_X(t) in the centered form (2.13) and set g(t) := log f_X(t). Differentiating gives, for all t in the interior of the support,

(2.14)  g'(t) = e^{−V(t)} ( −2t / ∫_t^{∞} x e^{−V(x)} dx + 1 / ∫_t^{∞} e^{−V(x)} dx − 1 / ∫_{−∞}^t e^{−V(x)} dx )
  = ( e^{−V(t)} / ∫_t^{∞} x e^{−V(x)} dx ) ( −2t + ∫_t^{∞} x e^{−V(x)} dx / ∫_t^{∞} e^{−V(x)} dx − ∫_t^{∞} x e^{−V(x)} dx / ∫_{−∞}^t e^{−V(x)} dx ).

We rearrange this expression into

(2.15)  g'(t) = ( e^{−V(t)} / ∫_t^{∞} x e^{−V(x)} dx ) ( ∫_t^{∞} (x − t) e^{−V(x)} dx / ∫_t^{∞} e^{−V(x)} dx − ∫_{−∞}^t (t − x) e^{−V(x)} dx / ∫_{−∞}^t e^{−V(x)} dx ).
Indeed, using E[X] = 0, we have ∫_{−∞}^t x e^{−V(x)} dx = −∫_t^{∞} x e^{−V(x)} dx, hence

−2t + ∫_t^{∞} x e^{−V(x)} dx / ∫_t^{∞} e^{−V(x)} dx − ∫_t^{∞} x e^{−V(x)} dx / ∫_{−∞}^t e^{−V(x)} dx = ∫_t^{∞} (x − t) e^{−V(x)} dx / ∫_t^{∞} e^{−V(x)} dx − ∫_{−∞}^t (t − x) e^{−V(x)} dx / ∫_{−∞}^t e^{−V(x)} dx,

which proves (2.15). Next define

m(t) := ∫_t^{∞} (x − t) e^{−V(x)} dx / ∫_t^{∞} e^{−V(x)} dx,  k(t) := ∫_{−∞}^t (t − x) e^{−V(x)} dx / ∫_{−∞}^t e^{−V(x)} dx.

Thus

g'(t) = ( e^{−V(t)} / ∫_t^{∞} x e^{−V(x)} dx ) ( m(t) − k(t) ).

In particular, g'(t) = 0 if and only if m(t) = k(t). Moreover, since E[X] = 0 we have

∫_t^{∞} x e^{−V(x)} dx = −∫_{−∞}^t x e^{−V(x)} dx,

so the denominator is strictly positive for every t in the interior of the support (for t > 0 the first integral is clearly positive, while for t ≤ 0 it equals −∫_{−∞}^t x e^{−V(x)} dx > 0). Consequently the sign of g' is determined by m(t) − k(t). Next we analyze these functions separately.

Step 1: m is decreasing. A direct computation shows that

m'(t) = ( e^{−V(t)} ∫_t^{∞} (x − t) e^{−V(x)} dx − ( ∫_t^{∞} e^{−V(x)} dx )^2 ) / ( ∫_t^{∞} e^{−V(x)} dx )^2.

Set V_t(u) := V(t + u) − V(t), which is convex on [0, ∞). Applying (2.9) from Lemma 1 to V_t gives

∫_0^{∞} u e^{−V_t(u)} du ≤ ( ∫_0^{∞} e^{−V_t(u)} du )^2.

Rewriting back in the original variables yields

e^{−V(t)} ∫_t^{∞} (x − t) e^{−V(x)} dx ≤ ( ∫_t^{∞} e^{−V(x)} dx )^2,

which is exactly m'(t) ≤ 0, so m is decreasing.

Step 2: k is increasing. The same argument as in Step 1 applies. Alternatively, apply Step 1 to the log-concave random variable −X (whose potential is y ↦ V(−y), still convex): the corresponding m function for −X at level −t is exactly k(t), hence k is increasing.

Step 3: uniqueness of the zero of g'.
Since m is decreasing and k is increasing, m(t) − k(t) is strictly decreasing. Moreover,

lim_{t→−∞} m(t) = +∞,  lim_{t→−∞} k(t) = 0,  lim_{t→+∞} m(t) = 0,  lim_{t→+∞} k(t) = +∞,

so m(t) − k(t) crosses 0 exactly once. Therefore g'(t) has a unique zero, hence g has a unique critical point; since f_X(t) → 0 as t → ±∞, this critical point is the unique maximizer of f_X.

(2) (The mean is a maximizer ⇒ weak symmetry.) Assume 0 is a maximizer of f_X. Then it is a critical point of g(t) = log f_X(t), hence g'(0) = 0. Using (2.14) we conclude that

∫_0^{∞} e^{−V(x)} dx = ∫_{−∞}^0 e^{−V(x)} dx,

that is, 0 is a median. Since E[X] = 0, this is exactly the weak symmetry condition.

(3) (Monotonicity under weak symmetry and log-concavity.) Assume now that X is weakly symmetric and log-concave with mean 0. Then 0 is a median, i.e.

(2.16)  ∫_{−∞}^0 e^{−V(x)} dx = ∫_0^{∞} e^{−V(x)} dx,

which combined with the expression (2.14) shows that g'(0) = 0 and also m(0) = k(0). Since we proved that g' has the same sign as m(t) − k(t), which is decreasing, we get that g'(t) < 0 for t > 0 and g'(t) > 0 for t < 0, which shows that g attains its maximum at t = 0. □

3. THE MULTIDIMENSIONAL CASE

In this section we extend the one-dimensional framework of Section 2 to random vectors X ∈ R^d. This amounts to constructing risk strategies for portfolios of risky assets. The guiding idea is the same: instead of approximating X by a single constant (as in classical risk measures), we allow two regimes determined intrinsically by a measurable set A ⊂ R^d, and we approximate X by two (vector) constants on A and on A^c.

3.1. Problem definition and classes of regimes. Let G : R^d → [0, ∞) be a given loss function and let R̃ be a class of measurable functions f : R^d → R^d.
We consider the optimization problem

(3.1)  f* = argmin_{f ∈ R̃} E[G(X − f(X))].

In the classical definition of risk measures, R̃ would be the set of constant functions. Here we focus on a two-regime class in which f takes only two vector values depending on membership in a set A.

Standing integrability assumptions. For the quadratic loss G(x) = ‖x‖^2 considered below we assume E‖X‖^2 < ∞ (hence E‖X‖ < ∞), so that all conditional means in (3.5) exist and the objectives in (3.6)–(3.7) are finite. When writing ratios such as ‖E[X 1_A]‖^2 / P(A) we implicitly restrict to sets with 0 < P(A) < 1; the cases P(A) ∈ {0, 1} are trivial and can be ignored in the maximization problem.

Two-regime class. For a family of measurable sets A ⊂ B(R^d) define

(3.2)  R^{(d)}_A = { f : R^d → R^d : f(x) = α 1_A(x) + β 1_{A^c}(x), A ∈ A, α, β ∈ R^d }.

Proposition 2. Assume that G : R^d → R is convex. Fix α, β ∈ R^d. Then a minimizer of

A ⟼ E[G(X − α) 1_A(X) + G(X − β) 1_{A^c}(X)]

over Borel sets A ⊂ R^d is given (up to P-null sets) by

A_{α,β} := {x ∈ R^d : G(x − α) ≤ G(x − β)}.

Moreover, setting δ := β − α, the set A_{α,β} has the following directional one-crossing property: for every z ∈ R^d, the intersection of A_{α,β} with the affine line z + Rδ is either empty, all of z + Rδ, or a half-line within that line. In particular, for the quadratic loss G(x) = ‖x‖^2 one has

A_{α,β} = { x ∈ R^d : ⟨x, β − α⟩ ≤ (1/2)( ‖β‖^2 − ‖α‖^2 ) },

which is an affine halfspace.
Proof. Pointwise minimization (dimension-free). Fix α, β ∈ R^d and minimize over Borel sets A ⊂ R^d. Write

E[G(X − α) 1_A(X) + G(X − β) 1_{A^c}(X)] = E[G(X − β)] + E[(G(X − α) − G(X − β)) 1_A(X)].

Thus, exactly as in one dimension, we minimize by choosing 1_A(x) = 1 whenever G(x − α) − G(x − β) ≤ 0, i.e. by taking A = A_{α,β}.

Halfspace structure for G(x) = ‖x‖^2. In one dimension, convexity of G implies a monotonicity property of u ↦ G(u + δ) − G(u), which forces A_{α,β} to be a half-line. In dimension d ≥ 2 there is no analogous total order, so for a general convex G the set A_{α,β} = {G(· − α) ≤ G(· − β)} need not be a halfspace. For the quadratic loss, however,

‖x − α‖^2 ≤ ‖x − β‖^2  ⟺  2⟨x, β − α⟩ ≤ ‖β‖^2 − ‖α‖^2,

which is precisely the affine halfspace stated in the proposition. □

Example 1. In general, for a convex G the set A_{α,β} need not be a halfspace. For instance, in R^2 let G(x, y) = (x − y)^2 + x^4, α = (0, 0), β = (1, 0). Then A_{α,β} = {(x, y) : G(x, y) ≤ G(x − 1, y)} is given by

(x − y)^2 + x^4 ≤ (x − 1 − y)^2 + (x − 1)^4  ⟺  y ≥ 2x^3 − 3x^2 + 3x − 1.

Thus A_{α,β} is the epigraph of a cubic curve, hence it is not an affine halfspace.

We will work throughout with the quadratic loss

(3.3)  G(x) = ‖x‖^2,  x ∈ R^d,

so that (3.1) becomes

(3.4)  argmin_{α, β ∈ R^d, A ∈ A} E[ ‖X − α 1_A(X) − β 1_{A^c}(X)‖^2 ].

Optimizing first in α and β. Define

h(α, β, A) := E[ ‖X − α 1_A(X) − β 1_{A^c}(X)‖^2 ],  (α, β) ∈ (R^d)^2, A ∈ A.

The map h is differentiable in α and β, and setting the gradients to zero yields

(3.5)  α = E[X 1_A(X)] / P(A) = E[X | X ∈ A],  β = E[X 1_{A^c}(X)] / P(A^c) = E[X | X ∉ A].
Plugging (3.5) back into h gives, after elementary rearrangements,

(3.6)  h(A) = E‖X‖^2 − ‖E[X 1_A(X)]‖^2 / P(A) − ‖E[X 1_{A^c}(X)]‖^2 / P(A^c).

Therefore the minimization problem (3.4) is equivalent to the maximization problem

(3.7)  argmax_{A ∈ A} ( ‖E[X 1_A(X)]‖^2 / P(A) + ‖E[X 1_{A^c}(X)]‖^2 / P(A^c) ).

Halfspaces as the natural class. Fix the mean µ := E[X]. In view of Proposition 2, for the quadratic loss the regime boundary is an affine hyperplane, hence it is natural to restrict the maximization in (3.7) to halfspaces. We therefore consider the mean-centered halfspace class

(3.8)  H_µ := { A_{u,t} : u ∈ S^{d−1}, t ∈ R },  A_{u,t} := { x ∈ R^d : ⟨x − µ, u⟩ ≤ t }.

Remark 1. The uncentered halfspace {x : ⟨x, u⟩ ≤ τ} is the same as A_{u,t} with τ = ⟨µ, u⟩ + t. Thus centering at µ is only a re-parameterization.

For sets of the form A = A_{u,t} we define the objective

(3.9)  F(u, t) := ‖E[X 1_{A_{u,t}}(X)]‖^2 / P(A_{u,t}) + ‖E[X 1_{A^c_{u,t}}(X)]‖^2 / P(A^c_{u,t}),  ‖u‖ = 1, t ∈ R.

In contrast with the one-dimensional case, even for weakly symmetric distributions it is not automatic that the maximizing halfspace is determined solely by the mean. The next subsections discuss structural conditions (e.g. ellipticity) under which the optimizer can nevertheless be characterized. The optimization problem in the multidimensional case is much more delicate, and we consider here only a special class of random variables for which we can identify the maximum point as in the one-dimensional case. Before stating the results we define the class of densities for which we can prove positive results identifying the mean as the optimal point for the separating hyperplane.

Definition 2 (Elliptical Distribution).
A random vector X ∈ R^d is said to have a centered elliptical distribution with positive-definite scatter matrix Σ ∈ R^{d×d} if its probability density function exists and is of the form

f_X(x) = C_d det(Σ)^{−1/2} g( x^T Σ^{−1} x ),

where g : [0, ∞) → [0, ∞) is a non-negative density generator function satisfying ∫_0^{∞} s^{d/2 − 1} g(s) ds < ∞, and C_d is the normalization constant. A convenient sufficient condition for log-concavity is that g is log-concave and nonincreasing on [0, ∞). Equivalently, one may write g(s) = e^{−φ(s)} with φ : [0, ∞) → R convex and nondecreasing, so that

f_X(x) ∝ exp( −φ( (x − µ)^T Σ^{−1} (x − µ) ) )

is log-concave. Examples are densities of the form exp(−‖Σ^{−1/2}(x − µ)‖^p) with p ≥ 1, which include the Gaussians as particular cases.

Lemma 2 (Linearity of Conditional Expectation for Elliptical Distributions). Let X ∈ R^d be a centered random vector with an elliptical distribution and scatter matrix Σ. For any two linearly independent vectors u, v ∈ R^d, the conditional expectation of the projection ⟨X, u⟩ given ⟨X, v⟩ = t is linear in t:

E[⟨X, u⟩ | ⟨X, v⟩ = t] = ρ_{u,v} t,  where  ρ_{u,v} = u^T Σ v / v^T Σ v.

Consequently, the function ϕ_{u,v}(t) = E[⟨X, u⟩ | ⟨X, v⟩ = t] satisfies ϕ_{u,v}(0) = 0 and the ratio ϕ_{u,v}(t)/t is constant (and thus non-increasing) on (0, ∞).

Proof. Consider the bivariate random vector Y = (Y_1, Y_2)^T = (⟨X, v⟩, ⟨X, u⟩)^T. Since the class of elliptical distributions is closed under linear transformations, Y follows a centered bivariate elliptical distribution with scatter matrix

Σ_Y = [ v^T Σ v, v^T Σ u ; u^T Σ v, u^T Σ u ] = [ σ_{11}, σ_{12} ; σ_{21}, σ_{22} ].

The conditional density of Y_2 given Y_1 = t is proportional to the slice of the joint density:

f_{Y_2 | Y_1}(y_2 | t) ∝ g( (t, y_2) Σ_Y^{−1} (t, y_2)^T ).
Using the formula for the inverse of a 2×2 matrix, the quadratic form Q(y_2) = (t, y_2) Σ_Y^{−1} (t, y_2)^T can be rewritten by completing the square with respect to y_2:

Q(y_2) = c_1 ( y_2 − (σ_{12}/σ_{11}) t )^2 + c_2(t),

where c_1 = σ_{11} / det Σ_Y > 0 and c_2(t) is independent of y_2. The conditional density is therefore of the form h((y_2 − µ(t))^2), where µ(t) = (σ_{12}/σ_{11}) t. Since this density is symmetric about µ(t), the conditional expectation is the center of symmetry:

E[Y_2 | Y_1 = t] = (σ_{12}/σ_{11}) t = ( u^T Σ v / v^T Σ v ) t.

This confirms the linearity of the regression function ϕ_{u,v}(t). □

Definition 3 (Weak symmetry). A random vector X ∈ R^d with mean µ is called weakly symmetric if for every halfspace H whose boundary hyperplane contains µ, one has P(X ∈ H) = 1/2. Equivalently, for every unit u ∈ S^{d−1},

P(⟨X − µ, u⟩ ≤ 0) = 1/2.

Theorem 2. Let X ∈ R^d have an elliptical density with mean µ and positive definite scatter matrix Σ. Assume moreover that for every unit u ∈ S^{d−1}, the one-dimensional projection Z_u := ⟨X − µ, u⟩ has a weakly symmetric log-concave density on R (e.g. this holds if X has a log-concave density). Then:

(i) For every fixed unit u, the map t ↦ F(u, t) is maximized at t = 0.
(ii) The map u ↦ F(u, 0) is maximized when u is an eigenvector of Σ corresponding to the largest eigenvalue λ_max(Σ).

Proof. Step 0: reduction to the centered case. Let Y := X − µ. For A_{u,t} = {⟨Y, u⟩ ≤ t} we have

E[X 1_{A_{u,t}}] = µ P(A_{u,t}) + E[Y 1_{A_{u,t}}],  E[X 1_{A^c_{u,t}}] = µ P(A^c_{u,t}) + E[Y 1_{A^c_{u,t}}].
Since E[Y] = 0, we have E[Y 1_{A^c_{u,t}}] = −E[Y 1_{A_{u,t}}], and a direct expansion shows

(3.10)  F(u, t) = ‖µ‖^2 + F̃(u, t),  where  F̃(u, t) := ‖E[Y 1_{⟨Y,u⟩ ≤ t}]‖^2 / P(⟨Y, u⟩ ≤ t) + ‖E[Y 1_{⟨Y,u⟩ > t}]‖^2 / P(⟨Y, u⟩ > t).

Hence maximizing F(u, t) over t is equivalent to maximizing F̃(u, t), so from now on we assume µ = 0.

Step 1: reduce F̃(u, t) to a one-dimensional functional. Fix a unit u and set Z := ⟨Y, u⟩. For centered elliptical Y with scatter matrix Σ, the linear regression property gives (take v = u in the standard bivariate elliptical regression formula)

(3.11)  E[Y | Z = s] = ( Σu / u^T Σ u ) s.

Therefore,

E[Y 1_{Z ≤ t}] = E[ E[Y | Z] 1_{Z ≤ t} ] = ( Σu / u^T Σ u ) E[Z 1_{Z ≤ t}].

Since E[Y] = 0, we also have E[Y 1_{Z > t}] = −E[Y 1_{Z ≤ t}]. Hence

(3.12)  F̃(u, t) = ( ‖Σu‖^2 / (u^T Σ u)^2 ) · E[Z 1_{Z ≤ t}]^2 / ( P(Z ≤ t) P(Z > t) ).

The u-dependent prefactor is constant in t, so maximizing F̃(u, t) over t is equivalent to maximizing the one-dimensional quantity

t ⟼ E[Z 1_{Z ≤ t}]^2 / ( P(Z ≤ t) P(Z > t) ).

Step 2: maximize in t (fixed u). By assumption, Z has a weakly symmetric log-concave density on R, with mean 0. Applying Theorem 1 (the one-dimensional monotonicity theorem) to Z shows that the above one-dimensional functional is maximized at t = 0. By (3.12), the same holds for F̃(u, t) and hence for F(u, t). This proves (i).

Step 3: maximize in u at t = 0. At t = 0, weak symmetry gives P(Z ≤ 0) = P(Z > 0) = 1/2, and symmetry yields E[Z 1_{Z ≤ 0}] = −E[Z_+], where Z_+ = max{Z, 0}. Moreover, for elliptical Y one has the scaling property Z = ⟨Y, u⟩ =_d √(u^T Σ u) Z_0, where Z_0 is the projection in a fixed direction for the standardized scatter Σ = I (its law does not depend on u). Thus E[Z_+] = c_0 √(u^T Σ u) for the constant c_0 := E[(Z_0)_+] > 0.
Plugging into (3.12) at $t = 0$ gives
$$\widetilde{F}(u,0) = \frac{\|\Sigma u\|^2}{(u^T \Sigma u)^2} \cdot \frac{c_0^2\,(u^T \Sigma u)}{(1/2)(1/2)} = 4 c_0^2\, \frac{u^T \Sigma^2 u}{u^T \Sigma u}.$$
Therefore maximizing $u \mapsto \widetilde{F}(u,0)$ over $u \neq 0$ is equivalent to maximizing
$$R(u) := \frac{u^T \Sigma^2 u}{u^T \Sigma u} = \frac{(\Sigma^{1/2} u)^T\, \Sigma\, (\Sigma^{1/2} u)}{\|\Sigma^{1/2} u\|^2},$$
which is exactly the Rayleigh quotient of $\Sigma$ evaluated at the vector $\Sigma^{1/2} u$. Hence $\sup_{u \neq 0} R(u) = \lambda_{\max}(\Sigma)$ and the maximizers are precisely the eigenvectors associated to $\lambda_{\max}(\Sigma)$. This proves (ii). □

3.2. Considerations on convexity. In this section, we show that the convexity assumed in Theorem 1 is a key condition. To do so, we provide an example of a density of the form $e^{-V(x)}$ with $V(x)$ continuous and symmetric, but not convex, and we prove that in this case $c = 0$ does not provide the maximum of the function $f$ defined in Theorem 1.

Example 2 (A symmetric non-log-concave law with two maximizers of $f_X$). Let $X$ have the (even) density
$$p(x) = \begin{cases} \frac{21}{8}, & |x| \le \frac{1}{10}, \\ \frac{1}{8}, & \frac{1}{10} < |x| \le 2, \\ 0, & |x| > 2. \end{cases}$$
Then $p$ integrates to $1$ and, by symmetry, $E[X] = 0$.

Failure of log-concavity. For $x = 0$, $y = 0.3$ one has $p(0) = \frac{21}{8}$ and $p(0.3) = \frac{1}{8}$, but
$$p(0.15) = \frac{1}{8} < \sqrt{p(0)\,p(0.3)} = \frac{\sqrt{21}}{8},$$
so $p$ is not log-concave.

Two global maximizers of $f_X$. For $t \in (-2, 2)$ define
$$f_X(t) = \frac{E[X 1_{\{X \le t\}}]^2}{P(X \le t)} + \frac{E[X 1_{\{X > t\}}]^2}{P(X > t)}.$$
We have $E[X] = 0$ and $f_X$ admits the explicit form
(3.13) $\quad f_X(t) = f_X(-t) = \begin{cases} \dfrac{\left(-\frac{21}{80} + \frac{21}{16} t^2\right)^2}{\left(\frac{1}{2} + \frac{21}{8} t\right)\left(\frac{1}{2} - \frac{21}{8} t\right)}, & 0 \le t \le \frac{1}{10}, \\[2ex] \dfrac{(2 - t)(t + 2)^2}{4(6 + t)}, & \frac{1}{10} \le t \le 2. \end{cases}$
By symmetry it suffices to consider $t \ge 0$. Differentiating yields the following simplified expressions:
$$f'_X(t) = \begin{cases} \dfrac{441\, t\, (1 - 5 t^2)(2205 t^2 + 281)}{50\,(4 - 21 t)^2 (4 + 21 t)^2}, & 0 < t < \frac{1}{10}, \\[2ex] -\dfrac{(t + 2)(t^2 + 8 t - 4)}{2(6 + t)^2}, & \frac{1}{10} < t < 2. \end{cases}$$
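These closed-form expressions can be cross-checked by evaluating $f_X$ directly with numerical quadrature and comparing against (3.13) at the critical value $t_\star = 2\sqrt{5} - 4$ (a minimal sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

# Density from Example 2: 21/8 on |x| <= 1/10, 1/8 on 1/10 < |x| <= 2.
def p(x):
    a = np.abs(x)
    return np.where(a <= 0.1, 21 / 8, np.where(a <= 2.0, 1 / 8, 0.0))

# Midpoint grid on [-2, 2] for a simple Riemann-sum quadrature.
n = 400_000
dx = 4.0 / n
xs = -2.0 + dx * (np.arange(n) + 0.5)
w = p(xs)

def f_X(t):
    left = xs <= t
    m = float(np.sum(xs * w * left) * dx)    # E[X 1_{X<=t}]
    P = float(np.sum(w * left) * dx)         # P(X <= t)
    return m * m / P + m * m / (1.0 - P)     # E[X]=0, so E[X 1_{X>t}] = -m

t_star = 2 * np.sqrt(5) - 4                  # positive root of t^2 + 8t - 4 = 0
closed = (2 - t_star) * (t_star + 2) ** 2 / (4 * (6 + t_star))

print(round(f_X(0.0), 4))                        # 0.2756  (= 4 * (21/80)^2)
print(round(f_X(t_star), 4), round(closed, 4))   # 0.3607 0.3607
```

In particular the quadrature reproduces $f_X(t_\star) > f_X(0)$, the inequality that drives the example.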
In particular, on $(0, \frac{1}{10})$ one has $f'_X(t) > 0$, and on $(\frac{1}{10}, 2)$ the only critical point is the root of $t^2 + 8t - 4 = 0$. Therefore $f_X$ has a unique critical point on $(0, 2)$ at
$$t_\star = 2\sqrt{5} - 4 \in \Big(\tfrac{1}{10}, 2\Big),$$
and $f_X$ increases on $[0, t_\star]$ and decreases on $[t_\star, 2]$. In particular $f_X(t_\star) > f_X(\frac{1}{10}) > f_X(0)$, hence the global maximizers are exactly $\pm t_\star$ and the centered cut $t = 0$ is not optimal.

3.3. Counterexample for the halfspace functional (integer vertices). Let $X = (X_1, X_2) \sim \mathrm{Unif}(K)$, where $K \subset \mathbb{R}^2$ is the convex hexagon with integer vertices (listed counterclockwise)
$$A = (-3, 0),\quad B = (-1, -12),\quad C = (3, -8),\quad D = (3, 0),\quad E = (1, 12),\quad F = (-3, 8).$$
This hexagon is visibly non-degenerate: the vertical edges $FA$ and $CD$ both have length $8$. Define
$$R(t) := \frac{\big\|E[X 1_{\{X_1 > t\}}]\big\|^2}{P(X_1 > t)\, P(X_1 < t)}.$$
For a uniform law on $K$, the normalizing constants cancel and
$$R(t) = \frac{\big\|\iint_{K \cap \{x > t\}} (x, y)\, dA\big\|^2}{\mathrm{Area}(K \cap \{x > t\})\, \mathrm{Area}(K \cap \{x < t\})}.$$

FIGURE 1. Example 2: a symmetric, non-log-concave density with global maximizers of $f_X$ at $\pm t_\star$, where $t_\star = 2\sqrt{5} - 4 \approx 0.4721$ (left: the density $p$; right: $f_X$ with its two global maxima).

Centering. A shoelace/Green computation gives
$$\mathrm{Area}(K) = 104, \qquad \iint_K x\, dA = 0, \qquad \iint_K y\, dA = 0,$$
hence $E[X] = (0, 0)$.

The cut $t = 0$. The line $x = 0$ intersects $BC$ at $(0, -11)$ and $EF$ at $(0, 11)$, so
$$K \cap \{x > 0\} = \mathrm{conv}\{(0, -11), C, D, E, (0, 11)\} =: P_0.$$
One finds
$$\mathrm{Area}(P_0) = 52, \qquad \iint_{P_0} (x, y)\, dA = \Big(\frac{199}{3}, -\frac{67}{3}\Big), \qquad \mathrm{Area}(K \cap \{x < 0\}) = 52,$$
hence
$$R(0) = \frac{\left(\frac{199}{3}\right)^2 + \left(\frac{67}{3}\right)^2}{52 \cdot 52} = \frac{22045}{12168} \approx 1.8117.$$

The cut $t = 1$. The line $x = 1$ intersects $BC$ at $(1, -10)$ and meets $E = (1, 12)$, so
$$K \cap \{x > 1\} = \mathrm{conv}\{(1, -10), C, D, (1, 12)\} =: P_1.$$
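The polygon areas and first moments appearing in this computation, for the cut $t = 0$ just treated and for the cut $t = 1$ below, can be reproduced exactly in rational arithmetic via the shoelace/Green formulas (a sketch; the function and variable names are our own):

```python
from fractions import Fraction

def poly_moments(vertices):
    """Area and first moments (iint x dA, iint y dA) of a simple polygon
    with counterclockwise vertices, via the shoelace/Green formulas."""
    A = Mx = My = Fraction(0)
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = Fraction(x0) * y1 - Fraction(x1) * y0
        A += cross
        Mx += (x0 + x1) * cross
        My += (y0 + y1) * cross
    return A / 2, Mx / 6, My / 6

# Right-hand pieces of the hexagon K cut by x = 0 and x = 1.
P0 = [(0, -11), (3, -8), (3, 0), (1, 12), (0, 11)]
P1 = [(1, -10), (3, -8), (3, 0), (1, 12)]
K_area = Fraction(104)

def R(piece):
    area, mx, my = poly_moments(piece)
    return (mx**2 + my**2) / (area * (K_area - area))

a0, mx0, my0 = poly_moments(P0)
print(a0, mx0, my0)      # 52 199/3 -67/3
print(R(P0), R(P1))      # 22045/12168 9389/4995
print(R(P1) > R(P0))     # True
```

Exact `Fraction` arithmetic avoids any floating-point doubt about the strict inequality between the two cuts.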
One finds
$$\mathrm{Area}(P_1) = 30, \qquad \iint_{P_1} (x, y)\, dA = \Big(\frac{166}{3}, -\frac{100}{3}\Big), \qquad \mathrm{Area}(K \cap \{x < 1\}) = 74,$$
hence
$$R(1) = \frac{\left(\frac{166}{3}\right)^2 + \left(\frac{100}{3}\right)^2}{30 \cdot 74} = \frac{9389}{4995} \approx 1.8797.$$
Therefore $R(1) > R(0)$: even for a centered law ($E[X] = 0$), the halfspace functional in direction $e_1$ need not be maximized at the centered cut $t = 0$ (nor be decreasing for $t \ge 0$) without additional structure. This contrasts with the elliptical case treated in Theorem 2.

Remark 2. Throughout the paper we primarily worked with laws admitting a density on the whole space $\mathbb{R}^d$ (e.g. log-concave densities of the form $e^{-V}$). Nevertheless, in our counterexamples and geometric computations we consider uniform laws on bounded convex sets. These can be viewed as densities supported on a convex body (with respect to Lebesgue measure), and the associated halfspace functionals are still well-defined.

FIGURE 2. Integer-vertex hexagon $K$ and slices $x = 0$ (dotted), $x = 1$ (dashed).

Moreover, such bounded-support examples can be approximated by fully supported laws: for instance, one may convolve $\mathrm{Unif}(K)$ with a small centered Gaussian noise $N(0, \varepsilon^2 I)$. Then the resulting law has a smooth, everywhere positive density on $\mathbb{R}^d$ and its associated halfspace functionals converge (as $\varepsilon \downarrow 0$) to those of $\mathrm{Unif}(K)$. In particular, any counterexample exhibited on a bounded convex domain yields (by approximation) counterexamples within the class of fully supported densities.

REFERENCES
[1] Beatrice Acciaio and Irina Penner. Dynamic risk measures. arXiv, 2010. doi:10.48550/arXiv.1002.3794.
[2] Carlo Acerbi and Dirk Tasche. Expected shortfall: a natural coherent alternative to value at risk. Economic Notes, 31(2):379–388, 2002. doi:10.1111/1468-0300.00091.
[3] Adrien Saumard and Jon A. Wellner.
Log-concavity and strong log-concavity: A review. arXiv, 2014. URL: https://arxiv.org/abs/1404.5886, doi:10.48550/arXiv.1404.5886.
[4] Andrew Ang and Geert Bekaert. Regime switches in interest rates. Journal of Business & Economic Statistics, 20(2):163–182, 2002. URL: https://econpapers.repec.org/article/besjnlbes/v_3a20_3ay_3a2002_3ai_3a2_3ap_3a163-82.htm.
[5] Philippe Artzner. Application of coherent risk measures to capital requirements in insurance. North American Actuarial Journal, 3(2):11–25, 1999. doi:10.1080/10920277.1999.10595795.
[6] Philippe Artzner, Freddy Delbaen, Jean-Marc Eber, and David Heath. Coherent measures of risk. Mathematical Finance, 9(3):203–228, 1999. doi:10.1111/1467-9965.00068.
[7] Mark Bagnoli and Ted Bergstrom. Log-concave probability and its applications. Economic Theory, 26(2):445–469, 2005. doi:10.1007/s00199-004-0514-4.
[8] So Yeon Chun, Alexander Shapiro, and Stan Uryasev. Conditional value-at-risk and average value-at-risk: Estimation and asymptotics. Operations Research, 60(4):739–756, 2012. URL: https://pubsonline.informs.org/action/doSearch?AllField=Conditional+value-at-risk+and+average+value-at-risk%3A+Estimation+and+asymptotics.
[9] Susanne Emmer, Marie Kratz, and Dirk Tasche. What is the best risk measure in practice? A comparison of standard measures. arXiv preprint arXiv:1312.1645, 2013.
[10] Randall L. Eubank. Optimal grouping, spacing, stratification, and piecewise constant approximation. SIAM Review, 30(3):404–420, 1988. doi:10.1137/1030092.
[11] Hans Föllmer and Alexander Schied. Convex measures of risk and trading constraints. Finance and Stochastics, 6(4):429–447, 2002. doi:10.1007/s00780-002-0072-0.
[12] Hans Föllmer and Alexander Schied. Convex measures of risk and trading constraints. Finance and Stochastics, 6:429–447, 2002. doi:10.1007/s00780-002-0072-0.
[13] Hans Föllmer and Alexander Schied. Stochastic Finance: An Introduction in Discrete Time.
De Gruyter, Berlin, 4th edition, 2016. URL: https://www.degruyter.com/document/isbn/9783110463460/html.
[14] Hans Föllmer and Stefan Weber. The axiomatic approach to risk measures for capital determination. Annual Review of Financial Economics, 7:301–337, 2015. doi:10.1146/annurev-financial-111914-042031.
[15] J. D. Hamilton. A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica, 57(2):357–384, 1989. doi:10.2307/1912559.
[16] James D. Hamilton and Raul Susmel. Autoregressive conditional heteroskedasticity and changes in regime. Journal of Econometrics, 64(1–2):307–333, 1994. doi:10.1016/0304-4076(94)90067-1.
[17] Chang-Jin Kim and Charles R. Nelson. State-Space Models with Regime Switching: Classical and Gibbs-Sampling Approaches with Applications. MIT Press, Cambridge, MA, 1999. URL: https://mitpress.mit.edu/9780262112383/state-space-models-with-regime-switching/.
[18] Alexander J. McNeil, Rüdiger Frey, and Paul Embrechts. Quantitative Risk Management: Concepts, Techniques and Tools. Revised edition. Princeton University Press, 2015. URL: https://mitpressbookstore.mit.edu/book/9780691166278.
[19] David Mease and Vijayan N. Nair. Unique optimal partitions of distributions and connections to hazard rates and stochastic ordering. Statistica Sinica, 16(4):1299–1312, 2006. URL: https://www3.stat.sinica.edu.tw/statistica/oldpdf/A16n411.pdf.
[20] Frank Riedel. Dynamic coherent risk measures. Stochastic Processes and their Applications, 112(2):185–200, 2004. doi:10.1016/j.spa.2004.03.004.
[21] R. Tyrrell Rockafellar and Stanislav Uryasev. Optimization of conditional value-at-risk. Journal of Risk, 2(3):21–42, 2000. URL: https://www.risk.net/journal-of-risk/2161159/optimization-conditional-value-risk.
[22] Alexander V. Trushkin.
Sufficient conditions for uniqueness of a locally optimal quantizer for a class of convex error weighting functions. IEEE Transactions on Information Theory, 28(2):187–198, 1982. URL: https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=Sufficient%20conditions%20for%20uniqueness%20of%20a%20locally%20optimal%20quantizer.
[23] Alexander V. Trushkin. Monotony of Lloyd's method II for log-concave density and convex error weighting function. IEEE Transactions on Information Theory, 30(2):380–383, 1984. URL: https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=Monotony%20of%20Lloyd%27s%20method%20II%20for%20log-concave%20density.
[24] Johanna F. Ziegel. Coherence and elicitability. Mathematical Finance, 26(4):901–918, 2016. doi:10.1111/mafi.12080.

University of Bucharest, Faculty of Mathematics and Computer Science, Str. Academiei nr. 14, Sector 1, C.P. 010014, Bucharest, Romania
Email address, Mihaela-Adriana Nistor: mihaelaadriana.nistor@gmail.com

University of Bucharest, Faculty of Mathematics and Computer Science, Str. Academiei nr. 14, Sector 1, C.P. 010014, Bucharest, Romania
Institute of Mathematics "Simion Stoilow" of the Romanian Academy, 21 Calea Grivitei Street, 010702 Bucharest, Romania
Email address, Ionel Popescu: ionel.popescu@fmi.unibuc.ro, ionel.popescu@imar.ro
