Dynamic operational risk: modeling dependence and combining different sources of information
In this paper, we model dependence between operational risks by allowing risk profiles to evolve stochastically in time and to be dependent. This allows for a flexible correlation structure where the dependence between frequencies of different risk categories, between severities of different risk categories, and within risk categories can be modeled. The model is estimated using Bayesian inference, allowing internal data, external data and expert opinion to be combined in the estimation procedure. We use a specialized Markov chain Monte Carlo method known as slice sampling to obtain samples from the resulting posterior distribution and estimate the model parameters.
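The slice sampling method mentioned in the abstract can be illustrated in its basic univariate form with stepping-out and shrinkage (Neal's scheme). This is a generic sketch on an arbitrary unnormalised log-density, not the paper's multivariate implementation; the function name and the bracket width `w` are our own choices:

```python
import math
import random

def slice_sample(log_f, x0, n_samples, w=1.0, max_steps=50):
    """Univariate slice sampler with stepping-out and shrinkage.

    log_f: log of an (unnormalised) target density
    x0: starting point; w: initial bracket width.
    """
    samples = []
    x = x0
    for _ in range(n_samples):
        # 1. Draw an auxiliary level uniformly below the density at x.
        log_u = log_f(x) + math.log(random.random())
        # 2. Step out: place an interval of width w randomly around x,
        #    then expand each end until it falls outside the slice.
        left = x - w * random.random()
        right = left + w
        steps = max_steps
        while steps > 0 and log_f(left) > log_u:
            left -= w
            steps -= 1
        steps = max_steps
        while steps > 0 and log_f(right) > log_u:
            right += w
            steps -= 1
        # 3. Shrinkage: sample uniformly on [left, right]; on rejection,
        #    shrink the interval towards the current point x.
        while True:
            x_new = left + (right - left) * random.random()
            if log_f(x_new) > log_u:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return samples
```

For example, `slice_sample(lambda x: -0.5 * x * x, 0.0, 5000)` draws approximate standard normal samples; no tuning beyond the initial width `w` is required, which is one reason slice sampling is attractive for posterior simulation.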
Modelling dependence between different risk cells and factors is an important challenge in operational risk (OpRisk) management. The difficulties of correlation modelling are well known and, hence, regulators typically take a conservative approach when considering correlation in risk models. For example, the Basel II OpRisk regulatory requirements for the Advanced Measurement Approach, BIS (2006) p. 152, state: "Risk measures for different operational risk estimates must be added for purposes of calculating the regulatory minimum capital requirement. However, the bank may be permitted to use internally determined correlations in operational risk losses across individual operational risk estimates, provided it can demonstrate to the satisfaction of the national supervisor that its systems for determining correlations are sound, implemented with integrity, and take into account the uncertainty surrounding any such correlation estimates (particularly in periods of stress). The bank must validate its correlation assumptions using appropriate quantitative and qualitative techniques."
The current risk measure specified by regulatory authorities is Value-at-Risk (VaR) at the 0.999 level for a one year holding period. In this case simple summation over VaRs corresponds to an assumption of perfect dependence between risks. This can be very conservative as it ignores any diversification effects. If the latter are allowed in the model, capital reduction can be significant, providing a strong incentive to model dependence in the banking industry. At the same time, limited data do not allow for reliable estimates of correlations, and there are attempts to estimate these using expert opinions. In such a setting a transparent dependence model is very important from the perspective of model interpretation, understanding of model sensitivity and with the aim of minimizing possible model risk. However, we would also like to mention that VaR is not a coherent risk measure, see Artzner, Delbaen, Eber and Heath (1999). This means that in principle dependence modelling could also increase VaR, see Embrechts, Nešlehová and Wüthrich (2009) and Embrechts, Lambrigger and Wüthrich (2009).
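The diversification effect described above can be sketched numerically: the following illustration compares the regulatory add-up of marginal 0.999 VaRs (corresponding to perfect dependence) against the 0.999 VaR of the aggregate loss for two independent compound Poisson-lognormal risk cells. All parameter values are hypothetical choices for the sketch, not calibrated estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 200_000  # number of simulated years

def annual_losses(lam, mu, sigma):
    """Simulate annual losses from a compound Poisson-lognormal model."""
    counts = rng.poisson(lam, size=n_sim)              # events per year
    sev = rng.lognormal(mu, sigma, size=counts.sum())  # all severities
    total = np.zeros(n_sim)
    # Scatter-add each severity into the year it belongs to.
    np.add.at(total, np.repeat(np.arange(n_sim), counts), sev)
    return total

# Two independent risk cells with hypothetical parameters.
z1 = annual_losses(lam=10.0, mu=1.0, sigma=2.0)
z2 = annual_losses(lam=5.0, mu=1.5, sigma=1.5)

var_sum = np.quantile(z1 + z2, 0.999)                      # VaR of aggregate
sum_var = np.quantile(z1, 0.999) + np.quantile(z2, 0.999)  # regulatory add-up
print(f"VaR(Z1+Z2) = {var_sum:.0f}, VaR(Z1)+VaR(Z2) = {sum_var:.0f}")
```

For these parameters the aggregate VaR comes out below the add-up, quantifying the diversification benefit; since VaR is not coherent, however, the inequality can reverse for sufficiently heavy-tailed severities.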
Under Basel II requirements, a financial institution intending to use the Advanced Measurement Approach (AMA) for quantification of OpRisk should demonstrate accuracy of the internal model within 56 risk cells (eight business lines times seven event types). To meet regulatory requirements, the model should make use of internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems. The definition of OpRisk, the Basel II requirements and the possible Loss Distribution Approach for AMA have been discussed widely in the literature, see e.g. Cruz (2004), Chavez-Demoulin, Embrechts and Nešlehová (2006), Frachot, Moudoulaud and Roncalli (2004), Shevchenko (2009). It is broadly accepted that under the Loss Distribution Approach of the AMA Basel II requirements, banks should quantify distributions for the frequency and severity of OpRisk for each business line and event type over a one year time horizon. These are combined into an annual loss distribution at the bank's top level (as well as for business lines and event types if required), and the bank capital (unexpected loss) is estimated using the 0.999 quantile of the annual loss distribution. If the severity and frequency distribution parameters are known, then the capital estimation can be accomplished using different techniques. For a single risk these include: hybrid Monte Carlo approaches, see Peters, Johansen and Doucet (2007); Panjer recursions, see Panjer (1981); integration of the characteristic functions, see Luo and Shevchenko (2009); Fast Fourier Transform techniques, see e.g. Embrechts and Frei (2009), Temnov and Warnung (2008). Parameter uncertainty can be accounted for as in Shevchenko (2008); in multivariate settings, Monte Carlo methods are typically used.
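Of the single-risk techniques cited above, the Panjer recursion is the simplest to sketch. For a Poisson frequency (the Panjer class with a = 0, b = λ) and a severity distribution discretised on a unit grid, the recursion builds the probability mass function of the compound total loss exactly; the function name and the unit-grid discretisation are our own illustrative choices:

```python
import math

def panjer_poisson(lam, sev_pmf, n_max):
    """Panjer recursion for a compound Poisson total loss S.

    sev_pmf[k] = P(X = k) on the grid {0, 1, 2, ...}.
    Returns g with g[k] = P(S = k) for k = 0, ..., n_max.
    """
    f = sev_pmf
    g = [math.exp(-lam * (1.0 - f[0]))]  # P(S = 0)
    for k in range(1, n_max + 1):
        # Poisson case: g_k = (lam / k) * sum_{j=1..k} j * f_j * g_{k-j}
        acc = 0.0
        for j in range(1, min(k, len(f) - 1) + 1):
            acc += j * f[j] * g[k - j]
        g.append(lam / k * acc)
    return g
```

A quick sanity check: with the severity mass concentrated at 1, i.e. `panjer_poisson(3.0, [0.0, 1.0], 8)`, the recursion reproduces the Poisson(3) probability mass function.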
The commonly used model for the annual loss in a risk cell (business line/event type) is the compound random variable
$$Z_t^{(j)} = \sum_{s=1}^{N_t^{(j)}} X_s^{(j)}(t).$$
Here $t = 1, 2, \ldots, T, T+1$ in our framework is discrete time (in annual units), with $T+1$ corresponding to the next year. The superscript $j$ identifies the risk cell. The annual number of events $N_t^{(j)}$ is a random variable distributed according to a frequency counting distribution $P^{(j)}(\cdot\,|\,\lambda_t^{(j)})$, typically Poisson, which depends on time-dependent parameter(s) $\lambda_t^{(j)}$. The severities in year $t$ are represented by random variables $X_s^{(j)}(t)$, $s \geq 1$, distributed according to a severity distribution $F^{(j)}(\cdot\,|\,\psi_t^{(j)})$, typically lognormal, Weibull or generalized Pareto, with parameter(s) $\psi_t^{(j)}$. Note that the index $j$ on the distributions $P^{(j)}$ and $F^{(j)}$ reflects that the distribution type can differ between risks; for simplicity of notation we shall omit this $j$, using $P(\cdot\,|\,\lambda_t)$ and $F(\cdot\,|\,\psi_t)$ to generically represent the distributions.
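A minimal simulation sketch of this compound model with a stochastically evolving frequency profile might look as follows. The AR(1) dynamic for log λ_t and all parameter values are our own illustrative assumptions for a single risk cell, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 20  # years of simulated history

# Assumed dynamic (illustration only): log(lambda_t) follows a stationary
# AR(1) around log(10), so the Poisson intensity evolves stochastically.
log_lam = np.empty(T)
log_lam[0] = np.log(10.0)
for t in range(1, T):
    log_lam[t] = 0.9 * log_lam[t - 1] + 0.1 * np.log(10.0) + 0.2 * rng.normal()

annual_loss = np.empty(T)
for t in range(T):
    n_t = rng.poisson(np.exp(log_lam[t]))             # frequency N_t
    x = rng.lognormal(mean=1.0, sigma=1.5, size=n_t)  # severities X_s(t)
    annual_loss[t] = x.sum()                          # annual loss Z_t
```

Making the profiles λ_t (and ψ_t) of different cells share or correlate their driving noise is what induces dependence between frequencies and severities across risk categories, which is the mechanism the paper develops.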