Sense and Sensitivity: An Input Space Odyssey for Asset-Backed Security Ratings

The rating of asset-backed securities is partly based on quantitative models for the defaults and prepayments of the assets in the pool. This quantitative approach involves a number of assumptions and estimated input variables whose values are affected by uncertainty. The uncertainty in these variables propagates through the model and produces uncertainty in the ratings. The objectives of this paper are twofold. Firstly, we advocate the use of uncertainty and sensitivity analysis techniques to enhance the understanding of the variability of the ratings due to the uncertainty in the inputs used in the model. Secondly, we propose a novel rating approach, called global rating, that takes this uncertainty in the output into account when assigning ratings to tranches.


Introduction
Asset-backed securities (ABSs) are securities created through a securitisation process whose value and income payments are backed by a specific pool of underlying assets. Illiquid assets that cannot be sold individually are pooled together by the originator (the issuer) and transferred to a shell entity specially created to be bankruptcy remote, a so-called Special Purpose Vehicle (SPV). The SPV issues notes (liabilities) to investors with distinct risk-return profiles and different maturities: senior, mezzanine, and junior notes. This technique is called tranching of the liability. Cashflows generated by the underlying assets are used to service the notes; the risk of the underlying assets is diversified because each security now represents a fraction of the total pool value. Figure 1 shows a general ABS structure.
A securitisation credit rating is an assessment of the credit risk of a securitisation transaction, addressing how well the credit risk of the assets is mitigated by the structure. The rating process is based on both quantitative assessment and a qualitative analysis of how the transaction mitigates losses due to defaults. For the quantitative assessment, different default scenarios, combined with other assumptions, for example, prepayments, are generated using more or less sophisticated models.
Typically, the input parameters to this assessment are unknown and are estimated from historical data or given by expert opinion. Either way, the values used for the parameters are uncertain, and these uncertainties propagate through the model and generate uncertainty in the rating output. It therefore becomes important to understand the sensitivity of the ratings to the parameters. For an introduction to ABSs, their risks, and the rating methodology, see (Jönsson and Schoutens, 2010), (Jönsson et al., 2009), and (Campolongo et al., 2013).
There has been increased attention to the rating of asset-backed securities since the credit crisis of 2007-2008, due to the enormous losses anticipated by investors and the large number of downgrades among structured finance products. Rating agencies have been encouraged to sharpen their methodologies and to provide more clarity on the limitations of their ratings and the sensitivity of those ratings to the risk factors accounted for in their rating methodologies (see, for example, the Global Financial Stability Report, April 2008 (IMF, 2008, p. 81)).
Moody's, for example, introduced in (Moody's, 2009) two concepts: V Scores and Parameter Sensitivity. Moody's V Scores provide a relative assessment of the quality of available credit information and the potential variability around the various inputs to the rating determination. The intention of the V Scores is to rank transactions by the potential for rating changes owing to uncertainty around the assumptions. Moody's Parameter Sensitivity provides a quantitative calculation of the number of rating notches by which a rated structured finance security may vary if certain input parameters differed from those used. Moody's analysis is performed by varying one input at a time, while holding all the others fixed at predetermined values. Typically, it is done by stressing just the two key input parameters that have the greatest impact within the sector, for example, the mean portfolio default rate and the mean recovery rate. This is a local approach, which neither analyses all the input parameters nor is able to detect the importance of interactions among inputs. In contrast, in this paper we explore the whole input space.
The objectives are twofold. Firstly, we advocate the use of uncertainty and global sensitivity analysis techniques to enhance the understanding of the variability of the ratings due to the uncertainty in all the input parameters. Uncertainty analysis quantifies the variability in the output of interest due to the variability in the inputs. Global sensitivity analysis assesses how the uncertainty in the output can be allocated to its different sources. Through global sensitivity analysis, we quantify the percentage of output variance that each input or combination of inputs accounts for. Furthermore, we investigate the importance of interactions among different inputs.
Secondly, we propose a novel rating approach, called global rating, that takes this uncertainty in the output into account when assigning ratings to tranches. Global ratings should therefore be more stable and reduce the risk of cliff effects, that is, the risk that a small change in one or several of the input assumptions generates a dramatic change in the rating. The proposed global rating methodology offers one possible way forward for the rating of structured finance products.
The rest of the paper is outlined as follows. In the next section, we introduce the ABS structure we are going to use as an example: we describe the basic steps of modelling the cashflows produced by the asset pool, we point out how these cashflows are distributed to the liabilities, and we outline the procedure for obtaining ratings. A description of the general elements of global sensitivity analysis is provided in Section 3, with particular attention to the techniques used in this paper. In Section 4, we apply uncertainty and global sensitivity analysis techniques to the rating exercise of the example structure. The global rating, introduced in Section 5, is an attempt to take the uncertainty in the rating process into account when assigning credit ratings to ABSs. The paper ends with conclusions.
The quantitative analysis relies on modelling of the cashflows produced by the assets (based on default and prepayment models of different levels of sophistication), the collection of these cashflows, and the distribution of the cashflows to the liabilities according to a payment priority (waterfall) described in the deal's prospectus.
In this section, the ABS structure used in the numerical experiment is introduced; the basic steps of modelling the cashflows produced by the assets in the pool (default models) are described; the collection of the cashflows, and the distribution of these cashflows to the liabilities are pointed out; finally, the procedure to get ratings of asset-backed securities is explained.

The ABS Structure for the Experiment
Throughout the paper we assume that the collateral pool is homogeneous, i.e., that all the constituents of the pool are identical with respect to initial amount, maturity, coupon, amortisation, and payment frequency, (see Table 1), and with respect to risk profile (i.e. probability of default). This implies that all the assets in the pool are assumed to behave as the average of the assets in the pool. We also assume the pool to be static, i.e. no replenishment is done.
This collateral pool is backing three classes of notes: A (senior), B (mezzanine), and C (junior). The details of the notes are given in Table 2 together with other structural characteristics. To this basic liability structure we have added a cash reserve account. The reserve account balance is initially zero and is funded by excess spread.
The priority of payments of the structure, the waterfall, is presented in Table 3. The waterfall is a so-called combined waterfall, where the available funds at each payment date consist of both interest and principal collections.

Cashflow Modelling
We denote by $t_m$, $m = 0, 1, \ldots, m_T$, the payment date at the end of month $m$, with $t_0 = 0$ being the closing date of the deal and $t_{m_T} = T$ being the final legal maturity date.

Cashflow Collection
The cash collection each month from the asset pool consists of interest payments and principal collection (scheduled repayments) which together with the principal balance of the reserve account constitute available funds.
We begin by modelling the asset behaviour for the current month, say $m$. The number of performing loans in the pool at the end of month $m$ will be denoted by $N(m)$, and we denote by $n_D(m)$ the number of defaulted loans in month $m$. The following relation holds true for all $m$: $N(m) = N(m-1) - n_D(m)$. The outstanding principal amount of an individual loan at the end of month $m$, after any amortisation, is denoted by $B(m)$. This amount is carried forward to the next month and is, therefore, the current outstanding principal balance at the beginning of (and during) month $m+1$. Denote by $B_A(m)$ the scheduled principal amount repaid ($A$ stands for amortised) in month $m$. The outstanding principal amount of an individual loan at the end of month $m$ is then $B(m) = B(m-1) - B_A(m)$, and the total outstanding principal amount of the pool at the end of month $m$ is $B_{Pool}(m) = N(m) B(m)$. Defaulted principal is based on the previous month's ending principal balance times the number of loans defaulting in the current month: $P_D(m) = n_D(m) B(m-1)$. Interest collected in month $m$ is calculated on the performing loans, i.e., the previous month's ending number of loans less the loans defaulting in the current month: $I(m) = r_L \left( N(m-1) - n_D(m) \right) B(m-1)$, where $r_L$ is the (monthly) loan interest rate. It is assumed that defaulted loans pay neither interest nor principal.
Scheduled repayments are based on the performing loans from the end of the previous month less the loans defaulting in the current month: $P_A(m) = \left( N(m-1) - n_D(m) \right) B_A(m)$. We will recover a fraction $RR$ of the defaulted principal after a time lag, $T_{RL}$, the recovery lag: $R(m) = RR \cdot P_D(m - T_{RL})$. The available funds in each month, assuming that the total principal balance of the cash reserve account ($B_{CR}$) is added, are: $F_A(m) = I(m) + P_A(m) + R(m) + B_{CR}(m-1)$. The total outstanding principal amount of the asset pool has decreased by $P_{Red}(m) = B_{Pool}(m-1) - B_{Pool}(m)$ (equation (2)), and to make sure that the notes remain fully collateralised we have to reduce the outstanding principal amount of the notes by the same amount.
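For illustration, the monthly collection step can be sketched in Python under the homogeneous-pool assumptions above. The function name, argument names, and the monthly-rate convention are ours, not part of the model specification:

```python
def collect_month(N_prev, B_prev, n_def, r_loan_monthly, B_A, recovery, B_CR):
    """One month of asset-pool cashflow collection (illustrative sketch).

    N_prev: performing loans at the end of the previous month, N(m-1)
    B_prev: per-loan balance at the end of the previous month, B(m-1)
    n_def: loans defaulting this month, n_D(m)
    r_loan_monthly: monthly loan interest rate r_L
    B_A: scheduled per-loan principal repayment this month
    recovery: recoveries arriving this month (after the recovery lag)
    B_CR: cash reserve balance added to available funds
    Returns (performing loans, per-loan balance, available funds).
    """
    performing = N_prev - n_def                       # N(m) = N(m-1) - n_D(m)
    interest = performing * r_loan_monthly * B_prev   # defaulted loans pay no interest
    scheduled = performing * B_A                      # scheduled repayments, performing loans only
    B_new = B_prev - B_A                              # B(m) = B(m-1) - B_A(m)
    available = interest + scheduled + recovery + B_CR
    return performing, B_new, available
```

For example, a pool of 1 000 loans of balance 100 with 10 defaults, a 0.5% monthly rate, and a scheduled repayment of 2 per loan yields 990 performing loans and available funds of 2 475.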

Payment Waterfall
The senior expenses represent payments to transaction parties, e.g. the issuer and the servicer, that are necessary for the structure to function properly. The senior expenses due to be paid in month $m$ are based on the outstanding pool balance during the month, $B_{Pool}(m-1)$, plus any unpaid fees from the previous month, accrued at the interest rate charged on any shortfall (see Table 2).
The actual amount paid to the issuer is the minimum of the senior expenses due and the available funds. After payment of the senior expenses, the available funds are updated: $F_A^{(1)}(m) = F_A(m) - E_{Paid}(m)$. We use the superscript $(1)$ in $F_A^{(1)}(m)$ to indicate that it is the available funds after item 1 in the waterfall.
The interest due to be paid to the class A notes is based on the current outstanding principal balance of the A notes at the beginning of month $m$, i.e. before any principal redemption. Denote by $B_C^A(m-1)$ the outstanding balance at the end of month $m-1$, after any principal redemption. This amount is carried forward and is, therefore, the current outstanding balance at the beginning of (and during) month $m$. To this amount, we add any interest shortfall from the previous month. The interest due to be paid is $I_{Due}^A(m) = r_A \left( B_C^A(m-1) + I_{SF}^A(m-1) \right)$, where $r_A$ is the (monthly) fixed interest rate for the A notes.
We assume the interest rate on shortfalls is the same as the note interest rate.
The interest paid to the A notes depends, of course, on the amount of available funds: $I_{Paid}^A(m) = \min\left( I_{Due}^A(m), F_A^{(1)}(m) \right)$. If the available funds are not enough to cover the interest due, we get a shortfall that is carried forward to the next month: $I_{SF}^A(m) = I_{Due}^A(m) - I_{Paid}^A(m)$. After the class A interest payments, the available funds are updated: $F_A^{(2)}(m) = F_A^{(1)}(m) - I_{Paid}^A(m)$. The interest payments to the B notes are calculated identically.
The principal payments to the notes are based on the total principal reduction of the collateral pool, $P_{Red}(m)$, calculated in equation (2). The allocation of principal due to be paid to the notes is done sequentially, which means that principal due is allocated in order of seniority. In the beginning, principal due is allocated to the class A notes; until the class A notes have been fully redeemed, no principal is paid out to the other classes of notes. After the class A notes are fully redeemed, the class B notes start to be redeemed, and so on. Note that we are here discussing the calculation of principal due to be paid; the actual amount of principal paid to the different notes depends on the available funds at the relevant level of the waterfall. That is, $P_{Due}^A(m) = \min\left( P_{Red}(m) + P_{SF}^A(m-1),\, B_C^A(m-1) \right)$, where $P_{SF}^A(m-1)$ is the principal shortfall from the previous month.
The amount paid is $P_{Paid}^A(m) = \min\left( P_{Due}^A(m), F_A^{(2)}(m) \right)$. Finally, we have to update the outstanding balance after the principal redemption, $B_C^A(m) = B_C^A(m-1) - P_{Paid}^A(m)$, and the available funds, $F_A^{(3)}(m) = F_A^{(2)}(m) - P_{Paid}^A(m)$. Since we apply a sequential allocation of principal due, no principal will be paid to the B and C notes until the A notes are fully redeemed.
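Every waterfall item follows the same pay-what-you-can pattern: pay the minimum of the amount due and the available funds, carry any shortfall forward, and reduce the available funds. A minimal helper capturing this pattern (the name and interface are illustrative):

```python
def pay_item(due, available):
    """Pay one waterfall item.

    Returns (amount paid, shortfall carried forward to next month,
    available funds after this item).
    """
    paid = min(due, available)
    return paid, due - paid, available - paid
```

Applying this helper to each item in Table 3, in order of seniority, reproduces the sequential logic described above; for example, interest due of 100 against available funds of 60 pays 60, carries a 40 shortfall, and leaves 0 for the items below.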
Note that the total principal reduction amount in month $m$ allocated to the notes is the sum of the principal reduction allocated to the A notes, $P_{Red}^A(m)$, and the principal reduction allocated to the B and C notes, $P_{Red}^{(B,C)}(m)$. From the above, it is clear that the portion of the total principal reduction allocated to class A is capped by the outstanding class A balance, so there are two cases to take into account: either the class A notes absorb the whole reduction, or they are fully redeemed and the remainder, $P_{Red}^{(B,C)}(m) = P_{Red}(m) - P_{Red}^A(m)$, is allocated to the B and C notes. The class B principal due is this remaining reduction plus any class B principal shortfall from the previous month.

The next item in the waterfall is the reimbursement of the reserve account. The reserve account balance after reimbursement is the minimum of the target balance and the remaining available funds, where the target balance on the reserve account is given as a fraction ($q_{Targ}^{CR}$) of the outstanding pool balance (see Table 2): $B_{CR}^{Targ}(m) = q_{Targ}^{CR} B_{Pool}(m)$. After the reserve account reimbursement, the available funds are updated. The interest payments to the C notes are calculated as the interest payments to the A and B notes. The payment of principal to the C notes is identical to that of the B notes, with the small change that one has to make sure that no principal is paid until the B notes are fully redeemed.
Any residual amount left is paid out in the final item of the waterfall, class C additional returns.

Default Modelling
Different default scenarios are generated by first sampling a cumulative portfolio default rate from a default distribution and then distributing this default rate over time with the help of a default curve. The default distribution of the pool is assumed to follow a Normal Inverse distribution, and the default curve is modelled by the Logistic function. These distributions are characterised by input parameters which have to be given by expert opinion or estimated from historical data on the performance of asset pools with characteristics similar to those of the asset pool under consideration. Thus, the quantitative analysis introduces an exposure to parameter uncertainty. By fixing these two distributions for the default modelling, we are not focusing on model uncertainty; the impact of the model choice has been presented in (Jönsson and Schoutens, 2010) and (Jönsson et al., 2009).

The default curve -Logistic Function
The default curve represents the evolution of the cumulative portfolio default rate over time. It provides the percentage of the total cumulative default rate that is applicable in each month. The curve should therefore be monotonically increasing, i.e., its slope should always be non-negative.
The function used to model the default timing is a common sigmoid curve: the Logistic function. We are using a four-parameter version, $F(t) = \frac{a}{1 + b\, e^{-(t - t_0)/c}}$, where $a$, $b$, $c$, and $t_0$ are positive constants and $t \geq 0$. Parameter $a$ is the asymptotic cumulative default rate; $b$ is a curve adjustment or offset factor; $c$ is a time constant (spreading factor); and $t_0$ is the time point of maximum marginal credit loss. The shape of the Logistic function and the influence of the parameters are illustrated in Figure 2. We can observe that the timing of the peak of the monthly default rate is to a large extent controlled by $t_0$ and that the sharpness of the peak is controlled by the spreading factor $c$. The curve adjustment factor $b$ shifts the peak around $t_0$; if $b = 1$, the curve becomes symmetric around $t_0$.
Note that the Logistic default curve has to be normalised such that it starts at zero (initially there are no defaults in the pool) and such that $F(T)$ equals the sampled cumulative default rate. One default scenario is thus generated by sampling a value for $a$ from the default distribution.
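The Logistic curve and its normalisation can be sketched as follows; the functional form is our reconstruction of the four-parameter version described above, and the function names are illustrative:

```python
import math

def logistic_cdr(t, a, b, c, t0):
    """Four-parameter Logistic default curve F(t) = a / (1 + b*exp(-(t - t0)/c))."""
    return a / (1.0 + b * math.exp(-(t - t0) / c))

def normalised_default_curve(T, a, b, c, t0):
    """Monthly cumulative default curve normalised to start at zero and to
    reach the sampled cumulative default rate a at the legal maturity T
    (in months)."""
    F0 = logistic_cdr(0, a, b, c, t0)
    FT = logistic_cdr(T, a, b, c, t0)
    return [a * (logistic_cdr(m, a, b, c, t0) - F0) / (FT - F0)
            for m in range(T + 1)]
```

The resulting curve is monotonically increasing by construction, starts at zero, and ends at the sampled cumulative default rate, as required of a default curve.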

The default distribution -Normal Inverse
Let $PDR(T)$ denote the portfolio default rate at time $T$ of our large homogeneous portfolio. The distribution of $PDR(T)$ is given by the Normal Inverse distribution:

$P\left( PDR(T) \leq y \right) = \Phi\left( \frac{\sqrt{1 - \rho}\, \Phi^{-1}(y) - \Phi^{-1}(p(T))}{\sqrt{\rho}} \right), \quad (8)$

where $0\% \leq y \leq 100\%$, $\rho$ is the obligor correlation, $p(T)$ is the probability of default by $T$ of a single obligor in the pool, and $\Phi$ denotes the standard normal distribution function. The Normal Inverse distribution is derived as an approximation to the distribution of the portfolio default rate at maturity $T$ when the Gaussian one-factor model is used to model the defaults in a large homogeneous portfolio, where the number of assets in the pool is assumed to grow to infinity. The default distribution in equation (8) is a function of the obligor correlation, $\rho$, and the default probability, $p(T)$, which are unknown and unobservable. Instead of using these parameters as inputs, it is common to fit the mean and standard deviation of the distribution to the mean and standard deviation, respectively, estimated from historical data (see, for example, (Moody's, 2007b) and (Raynes and Rutledge, 2003)). Let us denote by $\mu_{cd}$ and $\sigma_{cd}$ the estimated mean and standard deviation, respectively. The mean of the distribution is equal to the probability of default for a single obligor, $p(T)$, so $p(T) = \mu_{cd}$. As a result, there is only one free parameter, the correlation $\rho$, left to adjust so that the distribution's standard deviation fits $\sigma_{cd}$.
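This distribution function and its inverse (used to turn uniform draws into default-rate scenarios) can be sketched directly; the formula is the standard large-homogeneous-portfolio approximation, and the function names are ours:

```python
from math import sqrt
from statistics import NormalDist

_Phi = NormalDist().cdf
_Phi_inv = NormalDist().inv_cdf

def normal_inverse_cdf(y, p, rho):
    """P(PDR(T) <= y) for single-obligor default probability p and
    obligor correlation rho (large homogeneous portfolio approximation)."""
    return _Phi((sqrt(1.0 - rho) * _Phi_inv(y) - _Phi_inv(p)) / sqrt(rho))

def normal_inverse_ppf(u, p, rho):
    """Inverse CDF: map a uniform draw u in (0, 1) to a cumulative
    portfolio default rate."""
    return _Phi((_Phi_inv(p) + sqrt(rho) * _Phi_inv(u)) / sqrt(1.0 - rho))
```

The inverse follows by solving equation (8) for $y$, and is what a scenario generator evaluates at each (quasi-)random uniform draw.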

Ratings of ABSs
Credit ratings are based on assessments of either expected loss or probability of default. The expected loss assessment incorporates assessments of both the likelihood of default and the loss severity, given default. The probability of default approach assesses the likelihood of full and timely payment of interest and the ultimate payment of principal no later than the legal final maturity. In this paper, the expected loss rating approach, under the assumption of a large, granular portfolio, is used, following (Moody's, 2006).
The ratings are based on the cumulative expected loss (EL) and the expected weighted average life (EWAL). Expected loss is based on the relative net present value loss (RPVL), which is calculated by discounting the cashflows (both interest and principal) received on a note and comparing the result to the initial outstanding amount of the note.
The present value of the cashflows under the A notes, for a given scenario $\omega_j$, is

$PV^A(\omega_j) = \sum_{m=1}^{m_T} D(t_m) \left( I_{Paid}^A(m; \omega_j) + P_{Paid}^A(m; \omega_j) \right),$

where $I_{Paid}^A(m; \omega_j)$ and $P_{Paid}^A(m; \omega_j)$ are the interest and principal payments received in month $m$ under scenario $\omega_j$ (see Section 2.2) and $D(t_m)$ is the discount factor for month $m$. We have included $\omega_j$ in the expressions to emphasize that these quantities depend on the scenario.

Thus, for the A notes the relative present value loss under scenario $\omega_j$ is given by $RPVL^A(\omega_j) = 1 - PV^A(\omega_j)/N_0^A$, where $N_0^A$ is the initial nominal amount of the A tranche.

The expected loss estimate using $M$ scenarios is $\widehat{EL}^A = \frac{1}{M} \sum_{j=1}^{M} RPVL^A(\omega_j)$. The weighted average life under scenario $\omega_j$ is

$WAL^A(\omega_j) = \frac{1}{12\, N_0^A} \left( \sum_{m=1}^{m_T} m\, P_{Paid}^A(m; \omega_j) + m_T\, B_C^A(m_T; \omega_j) \right),$

where $B_C^A(m_T; \omega_j)$ is the current outstanding amount of the A notes at maturity (month $m_T$) after any amortisation. Thus, we assume that if the notes are not fully amortised at the legal maturity, any outstanding balance is amortised at maturity. Since we assume monthly payments, the factor $1/12$ is used to express WAL in years.

Exactly as in the case of the expected loss, we apply Monte Carlo simulation to estimate the EWAL: $\widehat{EWAL}^A = \frac{1}{M} \sum_{j=1}^{M} WAL^A(\omega_j)$. The rating of the note is found from Moody's Idealised Cumulative Expected Loss Table, which maps the Expected Weighted Average Life and Expected Loss combination to a specific quantitative rating.
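The per-scenario statistics and their Monte Carlo averages can be sketched as follows (names are illustrative; the principal payments are assumed already discounted and collected per month):

```python
def rpvl(pv, nominal):
    """Relative present value loss of a tranche in one scenario."""
    return 1.0 - pv / nominal

def wal_years(principal_paid, nominal, m_T, balance_at_maturity):
    """Weighted average life in years. principal_paid[m-1] is the principal
    paid in month m; any balance left at the legal maturity (month m_T) is
    treated as amortised at maturity."""
    weighted = sum(m * p for m, p in enumerate(principal_paid, start=1))
    weighted += m_T * balance_at_maturity
    return weighted / (12.0 * nominal)

def el_and_ewal(scenarios):
    """Average per-scenario (RPVL, WAL) pairs over M scenarios -> (EL, EWAL)."""
    M = len(scenarios)
    el = sum(loss for loss, _ in scenarios) / M
    ewal = sum(wal for _, wal in scenarios) / M
    return el, ewal
```

For instance, a note of nominal 100 repaid in two instalments of 50 over two months has a WAL of $150 / (12 \cdot 100) = 0.125$ years.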
Each run of the rating algorithm is rather time consuming, as the expected loss and the expected average life of the notes are themselves the results of Monte Carlo simulations. Thus, in order to speed up the sensitivity analysis experiment, we make use of quasi-Monte Carlo simulation based on Sobol sequences to sample values for the cumulative default rate from the Normal Inverse distribution. (See (Kucherenko, 2007), (Kucherenko, 2008), (Kucherenko et al., 2009), and (Kucherenko et al., 2011) for more information on Sobol sequences and their applications.)
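As a one-dimensional illustration of this quasi-Monte Carlo step: the first coordinate of a Sobol sequence is the base-2 radical-inverse (van der Corput) sequence, which can be pushed through the inverse of the Normal Inverse distribution to produce low-discrepancy default-rate scenarios. This is a self-contained sketch, not the generators cited above:

```python
from math import sqrt
from statistics import NormalDist

def van_der_corput(n):
    """Base-2 radical inverse of n >= 1; equals the first coordinate
    of a Sobol sequence."""
    u, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        u += (n & 1) / denom
        n >>= 1
    return u

def quasi_random_default_rates(M, p, rho):
    """Map a low-discrepancy sequence through the inverse of the
    Normal Inverse (Gaussian one-factor) distribution to obtain M
    cumulative default rate scenarios."""
    Phi, Phi_inv = NormalDist().cdf, NormalDist().inv_cdf
    return [Phi((Phi_inv(p) + sqrt(rho) * Phi_inv(van_der_corput(n)))
                / sqrt(1.0 - rho))
            for n in range(1, M + 1)]
```

The first draws 0.5, 0.25, 0.75, 0.125, ... fill the unit interval far more evenly than pseudo-random numbers, which is what accelerates the convergence of the scenario averages.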

Global Sensitivity Analysis
Sensitivity analysis (SA) is the study of how the variation (uncertainty) in the output of a model can be attributed to different variations in the inputs of the model. In other words, it is a technique for systematically changing input variables in a model to determine the effects of such changes. Very often, sensitivity analysis is performed by varying one input at a time, while holding all the others fixed at predetermined values. In most instances the sensitivity measure is chosen to be a partial derivative, and inputs are allowed only small variations around a nominal value (local sensitivity analysis). However, when the additivity of the model is not known a priori and interactions among the inputs cannot be excluded, an analysis of this kind is unreliable. In contrast to the local approach, global sensitivity analysis does not focus on the model sensitivity around a single point but aims at exploring the sensitivity across the whole input space. Usually, a global analysis is performed by allowing simultaneous variations of the inputs, which also captures potential interaction effects among the various inputs. For a general introduction to global sensitivity analysis, see (Saltelli et al., 2004) and (Saltelli et al., 2008).
In this section we introduce the two sensitivity methods that we are going to apply to our rating exercise: the elementary effect method and the variance based method.
The elementary effect method belongs to the class of screening methods. Screening methods are employed when the goal is to identify the subset of influential inputs among the many contained in a model, relying on a small number of model evaluations.
The variance based method is more accurate but computationally more costly and therefore not always affordable. Through the variance based method it is possible to identify the parameters that contribute the most to the total variance in the output.
In our analysis we follow a two-step approach. First, we apply the elementary effect method to identify the subset of input parameters that can be viewed as non-influential; the non-influential ones are then given fixed values. Second, we apply the variance based technique to quantify and distribute the uncertainty of our model output among the influential input parameters.
In the present section, we give a general description of the elementary effect method and the variance based technique. The notation adopted is the following.
We assume that there are $k$ uncertain input parameters $X_1, X_2, \ldots, X_k$ (assumed to be independent) in our model, and denote by $Y$ the output of our generic model. $Y$ is a function of the input parameters, which we write $Y = f(X_1, X_2, \ldots, X_k)$. Examples of input parameters in our model are the mean and standard deviation of the default distribution. Examples of outputs are the expected loss and the expected weighted average life of a tranche.
To each input parameter we assign a range of variation and a statistical distribution. For example, we could assume that $X_1$ is the mean of the default distribution and that it takes values in the range $[5\%, 30\%]$ uniformly, that is, each of the values in the range is equally likely to be chosen. We could of course use non-uniform distributions as well, for example, an empirical distribution.
These input parameters and their ranges create an input space of all possible combinations of values for the input parameters.

Elementary Effects
A very efficient screening method for identifying important inputs with few simulations is the elementary effects method (EE method). It is simple, easy to implement, and its results are easy to interpret. It was introduced in (Morris, 1991) and has been refined by (Campolongo et al., 2007). Because of the complexity of its structure, the ABS model is computationally expensive to evaluate, and the EE method is therefore very well suited for the sensitivity analysis of the ABS model's output.
The method starts with a one-at-a-time sensitivity analysis. It computes for each input parameter a local sensitivity measure, the so-called Elementary Effect (EE), which is defined as the ratio between the variation in the model output and the variation in the input itself, while the rest of the input parameters are kept fixed. Then, in order to obtain a global sensitivity measure, the one-at-a-time analysis is repeated several times for each input, each time under different settings of the other input parameters, and the sensitivity measures are calculated from the empirical distribution of the elementary effects.
To apply the EE method, we map each of the input parameter ranges to the unit interval $[0, 1]$, such that the input space is completely described by a $k$-dimensional unit cube. In order to estimate the sensitivity measures, a number of elementary effects must be calculated for each input parameter.
Morris suggested an efficient design that builds $r$ trajectories in order to compute $r$ elementary effects per input. Each trajectory is composed of $(k+1)$ points in the input space, such that each input changes value only once and two consecutive points differ in only one component, by a step equal to $\Delta$.
Once a trajectory has been generated, the model is evaluated at each point of the trajectory and one elementary effect for each input can be computed.
Let $X^{(l)}$ and $X^{(l+1)}$, with $l$ in the set $\{1, 2, \ldots, k\}$, denote two consecutive points on the $j$-th trajectory. These points differ in the $i$-th component, such that $X^{(l+1)} = (X_1^{(l)}, X_2^{(l)}, \ldots, X_i^{(l)} + \Delta, \ldots, X_k^{(l)})$. The elementary effect of input $i$ is then

$EE_i^j = \frac{Y(X^{(l+1)}) - Y(X^{(l)})}{\Delta}.$

Practically, following the recent work of (Campolongo et al., 2011), a large number of different trajectories (e.g. 1 000) is constructed, and then $r$ of them are selected so as to obtain the maximum spread in the input space. The number of trajectories ($r$) depends on the number of inputs and on the computational cost of the model; typical values of $r$ are between 4 and 10 (see (Morris, 1991) and (Campolongo et al., 2007)). See (Campolongo et al., 2011) for all the details about the design that builds the $r$ optimised trajectories of $(k+1)$ points in the input space.
For each input, $r$ elementary effects are then estimated, one per trajectory. Note that elementary effects obtained from different trajectories are independent, since the starting points of the trajectories are independent and thus so are the trajectory points.
Starting from the absolute values of the elementary effects, the following sensitivity measure is used to assess the importance of each parameter in the model: $\mu_i^* = \frac{1}{r} \sum_{j=1}^{r} \left| EE_i^j \right|$.
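Given precomputed trajectories, the elementary effects and the $\mu^*$ measure can be sketched compactly (the interface is hypothetical; trajectory construction itself is omitted):

```python
def elementary_effects(model, trajectories):
    """mu* (mean absolute elementary effect) per input.

    model: function taking a k-vector (tuple) and returning a scalar
    trajectories: list of trajectories, each a list of k+1 points where
        consecutive points differ in exactly one coordinate
    Returns a list mu_star of length k.
    """
    k = len(trajectories[0][0])
    abs_ee = [[] for _ in range(k)]
    for traj in trajectories:
        for x, x_next in zip(traj, traj[1:]):
            # locate the single coordinate that changed on this step
            i = next(j for j in range(k) if x[j] != x_next[j])
            ee = (model(x_next) - model(x)) / (x_next[i] - x[i])
            abs_ee[i].append(abs(ee))
    return [sum(v) / len(v) for v in abs_ee]
```

On the linear model $Y = 3 X_1 + 0.5 X_2$, for example, the measure recovers $\mu^* = (3, 0.5)$, correctly ranking $X_1$ as the influential input.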

Variance Based Method
We begin our discussion of the variance based method by noting that the variance of our generic output, $V(Y)$, can be decomposed into a main effect and a residual effect:

$V(Y) = V_{X_i}\left( E_{\mathbf{X}_{\sim i}}(Y \mid X_i) \right) + E_{X_i}\left( V_{\mathbf{X}_{\sim i}}(Y \mid X_i) \right). \quad (14)$

Here $E_{\mathbf{X}_{\sim i}}(Y \mid X_i)$ is the conditional expectation given $X_i$, calculated over all the other input parameters, and $V_{X_i}$ denotes the variance calculated with respect to $X_i$. Equivalently, $V_{\mathbf{X}_{\sim i}}(Y \mid X_i)$ is the variance with respect to all parameters but $X_i$, conditional on $X_i$.
The first term in equation (14) is of most interest to us. It tells us how much the mean of the output varies when one of the input parameters ($X_i$) is fixed. A large value of $V(E(Y \mid X_i))$ indicates that $X_i$ is an important parameter contributing to the output variance. When we divide this variance by the unconditional variance $V(Y)$, we obtain the first order sensitivity index with respect to $X_i$:

$S_i = \frac{V(E(Y \mid X_i))}{V(Y)}.$

These first order sensitivity indices represent the main effect contribution of each input. When inputs interact, higher order indices are needed to account for the remaining variance. For instance, the second order sensitivity index, $S_{i,j}$, quantifies the extra amount of variance corresponding to the interaction between inputs $i$ and $j$ that is not explained by the sum of their individual effects:

$S_{i,j} = \frac{V(E(Y \mid X_i, X_j)) - V(E(Y \mid X_i)) - V(E(Y \mid X_j))}{V(Y)}.$

In general, for a model output depending on $k$ independent inputs, the following relation has been shown to hold:

$\sum_i S_i + \sum_i \sum_{j > i} S_{i,j} + \cdots + S_{1, 2, \ldots, k} = 1, \quad (17)$

where $S_i$ are the first order sensitivity indices, $S_{i,j}$ are the second order sensitivity indices, and so on until $S_{1, 2, \ldots, k}$, which is the $k$-th order sensitivity index.
The sum of all the terms in expression (17) that contain $i$ describes $X_i$'s total contribution to the output variance. This is called the total effect term, $S_i^T$, and is expressed as follows:

$S_i^T = S_i + \sum_{j \neq i} S_{i,j} + \cdots = 1 - \frac{V(E(Y \mid \mathbf{X}_{\sim i}))}{V(Y)}.$

For details, see (Saltelli, 2002), (Saltelli et al., 2004), (Saltelli et al., 2008), and (Sobol', 1993). For the technical computation of the sensitivity indices, see (Saltelli et al., 2010) and (Ratto and Pagano, 2010).
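A minimal sketch of how $S_i$ and $S_i^T$ are estimated in practice, using the Saltelli/Jansen pick-freeze estimators on independent $U(0,1)$ inputs. This uses plain pseudo-random sampling for brevity (the paper uses quasi-random points), and all names are ours:

```python
import random

def sobol_indices(model, k, N, seed=0):
    """Estimate first order (S_i) and total (S_Ti) sensitivity indices.

    Uses two independent N x k sample matrices A and B; for each input i,
    A_B^(i) is A with column i taken from B (pick-freeze design).
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(N)]
    B = [[rng.random() for _ in range(k)] for _ in range(N)]
    fA = [model(a) for a in A]
    fB = [model(b) for b in B]
    mean = sum(fA) / N
    var = sum((y - mean) ** 2 for y in fA) / N
    S, ST = [], []
    for i in range(k):
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # Saltelli (2010) estimator for the first order index
        S.append(sum(fb * (fab - fa)
                     for fb, fab, fa in zip(fB, fABi, fA)) / N / var)
        # Jansen estimator for the total effect index
        ST.append(sum((fa - fab) ** 2
                      for fa, fab in zip(fA, fABi)) / (2 * N) / var)
    return S, ST
```

For the additive model $Y = X_1 + 2 X_2$ the exact values are $S_1 = 0.2$ and $S_2 = 0.8$, with $S_i^T = S_i$ since there are no interactions; the estimator recovers these up to Monte Carlo noise.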

The SA Experiment
In order to apply the elementary effect method, we first have to identify the outputs we want to study and the inputs which are controllable (i.e. known) and those which are uncontrollable (i.e. unknown). We also have to identify suitable ranges for the uncontrollable inputs.
The sensitivity analysis (SA) is performed on the structure presented in Sections 2.1-2.2 and the default model presented in Section 2.3. The fundamental output in our study is the rating of the ABSs. These ratings are derived from the Expected Weighted Average Life and the Expected Loss of the notes, calculated as in Section 2.4. The SA thus investigates how the uncertainty in each input parameter contributes to the uncertainty of the Expected Weighted Average Life and Expected Loss, and hence of the ratings. To get one rating, $2^{14}$ scenarios are used under each parameter setting of the inputs. This guarantees the convergence of the ABS model.
Without loss of generality, the investor is assumed to be informed about the collateral pool's characteristics and the structural characteristics given in Table 1 and Table 2, respectively, and the waterfall in Table 3. These are treated as controllable inputs.
Assuming the default distribution of the pool to follow a Normal Inverse distribution and the default curve to be modelled by the Logistic function, the uncertainty in the SA is not related to the model choice but to the parameters of the cumulative default rate distribution, the default timing (the Logistic function), and the recoveries: the recovery rate ($RR$) and the recovery lag ($T_{RL}$).
The input ranges are summarised in Table 4 and in the subsequent sections we will give some motivation to our choice of ranges.
Ranges Associated with $\mu_{cd}$ and $\sigma_{cd}$
The mean and standard deviation of the default distribution are typically estimated using historical data provided by the originator of the assets (see (Moody's, 2005) and (Raynes and Rutledge, 2003)). In our SA, we will assume that the mean cumulative default rate at maturity $T$ ($\mu_{cd}$) takes values in the interval $[5\%, 30\%]$. This is equivalent to assuming that the probability of default before $T$ for a single asset in the pool ranges from 5% to 30%. (Recall that the mean of the Normal Inverse distribution is equal to the probability of default of an individual asset.)
We make the range of the standard deviation ($\sigma_{cd}$) a function of $\mu_{cd}$ by using a range for the coefficient of variation, $\sigma_{cd}/\mu_{cd}$ (see Table 4). The standard deviation is thus higher for high values of the default mean than for low values, which implies that we get higher correlation in the pool for high values of the mean than for low values, see Figure 3.
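The calibration of the free correlation parameter to a target standard deviation can be sketched by integrating the conditional default rate over the systemic factor of the Gaussian one-factor model and bisecting on $\rho$. This is our own numerical shortcut for illustration, not the paper's calibration code:

```python
from math import sqrt, exp, pi
from statistics import NormalDist

def vasicek_std(p, rho, n=4001, zmax=8.0):
    """Standard deviation of the Normal Inverse distribution with mean p
    and correlation rho, by integrating over the systemic factor Z."""
    Phi = NormalDist().cdf
    k = NormalDist().inv_cdf(p)
    s1, sr = sqrt(1.0 - rho), sqrt(rho)
    h = 2.0 * zmax / (n - 1)
    m2 = 0.0
    for j in range(n):
        z = -zmax + j * h
        y = Phi((k - sr * z) / s1)           # conditional default rate given Z = z
        m2 += y * y * exp(-0.5 * z * z) / sqrt(2.0 * pi) * h
    return sqrt(max(m2 - p * p, 0.0))        # Var = E[Y^2] - (E[Y])^2

def fit_rho(p, target_std, lo=1e-6, hi=0.999):
    """Bisection on rho so the distribution's std matches the target;
    relies on the std being increasing in rho."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if vasicek_std(p, mid) < target_std:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the standard deviation increases with $\rho$, fitting a higher $\sigma_{cd}$ (here, from a higher $\mu_{cd}$ at fixed coefficient of variation) indeed yields a higher pool correlation, consistent with Figure 3.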
Ranges Associated with $b$, $c$, and $t_0$ in the Logistic Function
The parameters can be estimated by fitting the Logistic curve to an empirical (historical) default curve (see (Raynes and Rutledge, 2003)).
Because we want to cover a wide range of different default scenarios, we have chosen wide parameter ranges (see Table 4). Inspecting the behaviour of the Logistic functions in Figure 2 provides some insight into the possible scenarios generated with these parameter ranges and gives an intuitive understanding of the different parameters' influence on the shape of the curve.

Ranges Associated with Recovery Rate and Recovery Lag
Recovery rates and recovery lags are very much dependent on the asset type in the underlying pool and the country where the assets are originated. For SME loans, for example, Standard and Poor's assumes that the recovery lag is between 12 and 36 months depending on the country (see (Standard and Poor's, 2004a)). Moody's uses different recovery rate ranges for SME loans issued in, for example, Germany (25%-65%) and Spain (30%-50%) (see (Moody's, 2009)).
The range associated with the recovery lag T_RL has been fixed to [6, 36] months and that of the recovery rate to [5%, 50%].
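Pulling the above together, the input space explored in the analysis can be sketched as independent uniform draws over the seven parameter ranges. The ranges for μ_cd, RR, and T_RL are those stated in the text; the ranges marked "illustrative" stand in for entries of Table 4 that are not reproduced here and are placeholders only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Input ranges; entries marked "illustrative" are placeholders, not values
# taken from Table 4 of the paper.
ranges = {
    "mu_cd": (0.05, 0.30),   # mean cumulative default rate at maturity T
    "cv_cd": (0.40, 0.80),   # coefficient of variation sigma_cd/mu_cd (illustrative)
    "b":     (0.5,  2.0),    # Logistic parameter (illustrative)
    "c":     (0.05, 0.50),   # Logistic parameter (illustrative)
    "t0":    (12,   48),     # Logistic inflection month (illustrative)
    "RR":    (0.05, 0.50),   # recovery rate
    "T_RL":  (6,    36),     # recovery lag, months
}

def sample_scenarios(n):
    """Draw n independent uniform settings of the 7 input parameters."""
    lo = np.array([r[0] for r in ranges.values()])
    hi = np.array([r[1] for r in ranges.values()])
    return lo + (hi - lo) * rng.random((n, len(ranges)))

scenarios = sample_scenarios(1000)
```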

Uncertainty Analysis
The empirical distributions of the ratings of the tranches in Figure 4 can be used to obtain information on the uncertainty in the model.
All three histograms show evidence of dispersion in the rating outcomes. The dispersion is most significant for the mezzanine tranche. The ratings of the senior and the junior tranches behave in a more stable way: we get ratings with a low degree of risk 78% of the time for the A notes, and the C notes are unrated 51% of the time. This is not surprising: because losses are allocated to the notes in reverse order of seniority, it is the junior tranche that absorbs any losses first.
The uncertainty analysis highlights an important point: the uncertainty in the rating of the mezzanine tranche is very high.
As a measure of the ratings dispersion we look at the interquartile range, defined as the difference between the 75th percentile and the 25th percentile. Ratings percentiles are provided in Table 5. It does not come as a surprise that the range is highest for the B notes, 9 notches, given the very dispersed empirical distribution shown in Figure 4. From Table 5, we can also conclude that the interquartile range is equal to five and three notches for the A notes and the C notes, respectively. This dispersion in the rating distribution is of course a result of the uncertainty in the expected losses and expected average lives which are used to derive the ratings of each note.
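Measuring dispersion in notches requires mapping the ordinal rating scale to integers first. A minimal sketch, assuming Moody's long-term scale with "Unrated" placed one notch below the lowest rating (a modelling convenience, not the paper's convention); the sample distribution is made up for illustration.

```python
import numpy as np

# Moody's long-term scale mapped to numeric notches (1 = Aaa); "Unrated"
# is placed one notch below C purely for illustration.
SCALE = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
         "Baa1", "Baa2", "Baa3", "Ba1", "Ba2", "Ba3",
         "B1", "B2", "B3", "Caa1", "Caa2", "Caa3", "Ca", "C", "Unrated"]
NOTCH = {r: i + 1 for i, r in enumerate(SCALE)}

def interquartile_notches(ratings):
    """Interquartile range of a sample of ratings, expressed in notches."""
    x = np.sort([NOTCH[r] for r in ratings])
    q25, q75 = np.percentile(x, [25, 75])
    return q75 - q25

# Hypothetical rating sample: 40% Aaa, 30% Aa2, 20% A2, 10% Baa2.
sample = ["Aaa"] * 40 + ["Aa2"] * 30 + ["A2"] * 20 + ["Baa2"] * 10
```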
In the next section, we apply sensitivity analysis methods to assess which sources of uncertainty among the input parameters are contributing the most to the uncertainty in the outputs.

Sensitivity Analysis
Sensitivity analysis assesses the contribution of each input parameter to the total uncertainty of the outcome and the importance of the interactions among parameters. We analyse six outputs: the expected loss and the expected weighted average life of each of the three classes of notes. Because the ABS model is computationally expensive, we start our sensitivity analysis by using the elementary effect method to identify non-influential input parameters. Each non-influential input is then fixed to a value within its range. After that, the variance based method is applied to quantify and to distribute the uncertainty of the model outputs among the input parameters identified as influential.
The starting point for both methods is the selection of a number of settings of the input parameters. The number of model evaluations needed depends on the technique used. In the elementary effect method, we select 80 settings of the input parameters: we apply the method with r = 10 trajectories of k + 1 points each. Having k = 7 input parameters, the total number of model evaluations is N = r(k + 1) = 80. In the variance based method, we select 2^8 settings of the input parameters. These choices have been demonstrated to produce valuable results in general applications of the elementary effects and variance based methods (see (Ratto and Pagano, 2010)). For each setting of the input parameters, the ABS model runs 2^14 times to provide the outputs and the ratings.
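The trajectory design behind the elementary effect (Morris) method can be sketched as follows: each trajectory visits k + 1 points in the unit hypercube, with consecutive points differing in exactly one coordinate by a fixed step delta. This is a generic Morris-style construction, not the paper's exact sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_trajectory(k, levels=4):
    """One elementary-effects trajectory: k+1 points in [0,1]^k where
    consecutive points differ in exactly one coordinate by delta."""
    delta = levels / (2.0 * (levels - 1))            # standard Morris step
    base = rng.integers(0, levels // 2, size=k) / (levels - 1)
    order = rng.permutation(k)                        # order in which inputs move
    points, x = [base.copy()], base.copy()
    for i in order:
        x = x.copy()
        x[i] += delta                                 # move input i by delta
        points.append(x)
    return np.array(points), order

r, k = 10, 7
trajs = [morris_trajectory(k) for _ in range(r)]
total_runs = r * (k + 1)                              # = 80 model evaluations
```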

Elementary Effects
For a specific output, the elementary effect method provides one sensitivity measure, μ*_i, for each input. These sensitivity measures are used to rank the input parameters in order of importance relative to one another. The input parameter with the highest μ*_i value is ranked as the most important one for the variation of the output under consideration. It is important to keep in mind that the ranking of the inputs is done for each output separately.
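The measure μ*_i is the mean of the absolute elementary effects of input i over all trajectories. A minimal sketch, assuming trajectories in which each step moves exactly one input by a known step delta (as in the Morris design):

```python
import numpy as np

def mu_star(points_list, outputs_list, delta):
    """mu*_i: mean absolute elementary effect of input i.
    points_list[t] is a (k+1, k) trajectory, outputs_list[t] holds the
    k+1 model outputs evaluated along it."""
    k = points_list[0].shape[1]
    effects = [[] for _ in range(k)]
    for pts, y in zip(points_list, outputs_list):
        for j in range(len(y) - 1):
            i = int(np.argmax(np.abs(pts[j + 1] - pts[j])))  # input moved at step j
            effects[i].append(abs(y[j + 1] - y[j]) / delta)
    return np.array([np.mean(e) for e in effects])

# Toy check on a linear model y = 3*x0 (x1 has no effect).
pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]])
y = 3 * pts[:, 0]
ms = mu_star([pts], [y], 0.5)
```

Ranking the inputs by μ* for each output reproduces the kind of bar plots shown in Figure 5.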
In Figure 5, bar plots depict the rank of the input parameters for each of the six outputs. The least influential parameters across all outputs are the recovery lag and the Logistic function's b parameter. Hence they could be fixed without affecting the variance of the outputs of interest, and therefore the uncertainty in the ratings, to any great extent.
Among the other input parameters, the mean of the default distribution (μ_cd) is clearly the most important input parameter overall for all three notes. It is characterized by high μ* values for both the expected loss and the expected average life of all the notes. This highlights the strong influence the mean default rate assumption has on the assessment of the ABSs. The only exception is the expected loss of the A notes, for which the coefficient of variation is ranked highest, with the recovery rate second and the mean default rate third.
Changing the Thickness of the Junior Tranche
In this section, we investigate whether the results of the elementary effect method are affected by changing the ABS structure. We increase the initial principal amount of the C notes according to Table 6, keeping the initial principal amount of the B notes unchanged and reducing only the initial principal amount of the A notes. All other characteristics of the structure are kept as before. In this way, we increase the credit enhancement, or loss cushion protection, of the mezzanine tranche.
The bar plots in Figure 6 depict the rank of the inputs according to the μ* values for the old and the new structure.
The rankings of the input parameters for the new structure are consistent with the results obtained for the original structure.

Variance Based Method
In the elementary effect analysis performed above, two out of seven input parameters were identified as non-influential. These two inputs can therefore be fixed to values within their ranges; we have chosen b = 1 and T_RL = 18 months. For the other input parameters, we apply the variance based method to quantify their contribution to the output variances.
We now select 2^8 settings of the input parameters, run the model for each of them, and obtain the first order sensitivity indices. Figure 7 shows a decomposition of the output variance, highlighting the main contributions due to the individual input parameters (first order effects) and due to interactions (second and higher order effects, indicated in white in Figure 7).
For the B and C notes the mean cumulative default rate, μ_cd, is clearly contributing the most to the variance, accounting for more than 60% and more than 70%, respectively. The uncertainty analysis performed earlier pointed out that the uncertainty in the rating of the mezzanine tranche is very high. The first order sensitivity indices indicate that improving the knowledge of μ_cd can help to reduce the variability of the outputs. In fact, if we could know the value of μ_cd with certainty, the variance in the expected loss and expected average life of the B notes could be reduced by more than 60%.
For the senior tranche, the first order indices indicate that μ_cd is the largest individual contributor to the variation in the expected loss of the A notes (17%) and that c is the largest individual contributor to the variation in the expected average life of the A notes (24%). However, large parts of the variation in the expected loss and expected average life of the A notes come from interactions among input parameters. This indicates that the first order indices alone cannot be used to identify the most important inputs, and more sophisticated sensitivity measures must be used.
When interactions are involved in the model, we cannot tell which input is most responsible for them from the first order effect contributions alone. Figure 8 depicts the decomposition of the variance, explicitly including the second order effect contributions due to pairwise interactions between input parameters. From the partition of the variance of the expected loss of the A notes we can clearly see, for example, that the interaction between μ_cd and the coefficient of variation and the interaction between μ_cd and RR contribute significantly to the total variance, with 15% and 10%, respectively. For the other outputs the first order indices are in most cases larger than the second order effects.
Less than 5% of the variance for the mezzanine and junior tranches, and less than 15% for the senior tranche, refers to interactions among more than two parameters (the white slice in Figure 8); thus the sum of the first and second order effects can be considered an acceptable approximation of the total index. Finally, Figure 9 presents the approximation of the total effect indices of the input parameters for the different outputs. For all outputs it is clear that the mean of the default distribution is the most influential input parameter.
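The total effect index ST_i captures an input's first order contribution plus all its interactions, so ST_i - S_i is a direct measure of how much input i acts through interactions. A common alternative to summing first and second order terms is the Jansen Monte Carlo estimator, sketched here on a deliberately interacting toy model (again a generic scheme, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def total_effect_indices(model, k, n=2**12):
    """Jansen estimator: ST_i ~ mean((yA - y_ABi)^2) / (2 * Var(y)),
    where ABi equals A except that column i is resampled from B."""
    A = rng.random((n, k))
    B = rng.random((n, k))
    yA = model(A)
    var = yA.var()
    ST = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only input i
        ST[i] = np.mean((yA - model(ABi)) ** 2) / (2 * var)
    return ST

# Interacting toy model y = x0 * x1: exact values are S_i = 3/7 but
# ST_i = 4/7, the gap being the pure interaction contribution.
ST = total_effect_indices(lambda X: X[:, 0] * X[:, 1], 2)
```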

Global Rating
In the previous section we saw that the uncertainty in the input parameters propagates through the model and generates uncertainty in the outputs. The rating of the A notes shown in Figure 4, for example, ranges from Aaa to Unrated. The question is how to pick the rating of the A notes given this variability.
By using sensitivity analysis we have been able to quantify this uncertainty and identify its sources. If we knew the true values of the most important inputs, we could eliminate most of the variability in the model. In practice, these values are unknown to us. This implies that we have an intrinsic problem in the rating of ABSs.
In this section, we propose to use a new rating approach that takes into account the uncertainty in the outputs when rating ABSs. This new approach should be more stable, reducing the risk of cliff effects when assigning ratings to tranches. The cliff effect refers to the risk that a small change in one or several of the input assumptions generates a dramatic change of the rating. The idea is to assign the rating according to the uncertainty/dispersion of the credit risk. We call this new approach a global rating, because it explores the whole input space when generating the global scenarios.
The global rating procedure is basically the same as the one used for the uncertainty analysis and global sensitivity analysis:
1) Identify the uncertain input parameters, their ranges, and their distributions.
2) Generate N global scenarios, i.e., N different settings of the inputs, from the input space.
3) For each global scenario, generate a rating of each note.
4) Derive a rating of each note by a percentile mapping.
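Steps 1-3 of the procedure can be sketched end to end. The rating model below is a hypothetical stand-in for the full ABS cashflow model (it simply maps two sampled inputs to a rating index); it exists only to show the shape of the pipeline, not to reproduce the paper's results.

```python
import numpy as np

rng = np.random.default_rng(3)

def rate_tranche(scenario):
    """Hypothetical stand-in for the ABS cashflow model: maps one input
    setting to a rating index (1 = Aaa, larger = worse, 22 = Unrated)."""
    mu_cd, rr = scenario
    risk = mu_cd * (1 - rr)          # crude loss-severity proxy
    return int(np.clip(1 + risk * 60, 1, 22))

def global_scenarios(n=1000):
    """Steps 1-3: sample the input space and rate each global scenario.
    Only two inputs are sampled here, for brevity."""
    mu_cd = rng.uniform(0.05, 0.30, n)
    rr = rng.uniform(0.05, 0.50, n)
    return np.array([rate_tranche(s) for s in zip(mu_cd, rr)])

ratings = global_scenarios()
```

The array `ratings` plays the role of the empirical rating distribution from which step 4, the percentile mapping, derives the global rating.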

Methodology
The global approach derives the rating of a note from the empirical distribution of ratings generated from the global scenarios. An important fact is that this procedure is independent of which rating methodology is used to derive the rating of each global scenario, that is, if it is based on expected loss or probability of default.
We propose a global rating scale that reflects the dispersion of the credit risk of a tranche. In other words the global scale should not reflect a single rating but a range of possible credit risks, thus taking into account the uncertainties that affect the rating process.
The global scale is based on a percentile mapping of an underlying rating scale: a global rating is assigned to a tranche if a predetermined fraction of the ratings generated using the uncertainty scenarios is better than or equal to a given underlying rating.
Hence, to set up a global rating scale we first have to decide on the underlying rating scale. Imagine we use Moody's. A proposal for the global rating scale A-E is provided in Table 7. The global rating B in Table 7, for example, indicates that a substantial fraction of the ratings generated under different scenarios fall in Moody's rating scale Aaa-Baa3. This informs the potential investor that the tranche shows low credit risk for certain scenarios but that there are scenarios where the credit risk is on a medium level.
Secondly, we have to choose the fraction of rating outcomes that should lie in the credit risk range. As a first attempt, we have defined the global scale with respect to the 80th percentile of the local scale (in this case Moody's ratings). The mapping is shown in Figure 10. From the graph one can see that to assign a global rating B, for example, at least 80% of the ratings must be better than or equal to Baa3.
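The 80th percentile mapping can be sketched as follows. Ratings are expressed as numeric notches (1 = Aaa); the Baa3 boundary for global grade B (notch 10) is taken from the text, while the other grade boundaries are illustrative placeholders, since Table 7 is not reproduced here.

```python
import numpy as np

# Global scale boundaries as (grade, worst acceptable notch); the Baa3
# boundary (notch 10) for grade B is from the text, the rest are
# placeholders for the entries of Table 7.
BOUNDS = [("A", 4), ("B", 10), ("C", 16), ("D", 21)]

def global_rating(rating_notches, pct=80):
    """Assign the best grade whose boundary the pct-th percentile rating
    meets, i.e. at least (100 - pct)% ... pct% of scenario ratings must be
    better than or equal to the grade's worst acceptable rating."""
    q = np.percentile(rating_notches, pct)
    for grade, worst in BOUNDS:
        if q <= worst:
            return grade
    return "E"
```

For example, a tranche whose 80th percentile rating is Baa2 (notch 9) receives global grade B: at least 80% of its scenario ratings are Baa3 or better.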
The idea of basing ratings on percentiles is related to Standard and Poor's practice of using a percentile approach for assigning ratings to CDOs.

Example
Using the percentiles of the ratings in Table 5 we can derive the global ratings of the three notes. The global ratings based on the rating scale provided in Table 7 for different rating percentiles are shown in Table 8.

Conclusions
In this paper, we have shown how global sensitivity analysis can be used to analyse the main sources of uncertainty in the ratings of asset-backed securities (ABSs). The global sensitivity analysis was applied to a test example consisting of a large homogeneous pool of assets backing three classes of notes (senior, mezzanine, and junior).
Because deriving ratings for ABSs is computationally expensive, the elementary effect method was chosen for an initial analysis aimed at identifying the non-influential input parameters. As a second step, the variance based method was applied to quantify and to distribute the uncertainty of the outputs among the input parameters identified as influential, and to analyse their interactions.
The global sensitivity analysis led to the conclusion that the least influential inputs across all outputs are the recovery lag and the Logistic function's b parameter. Hence they could be fixed without affecting the variance of the outputs of interest, and therefore the ratings, to any great extent. The mean of the default distribution (μ_cd) was found to be the most influential input parameter among all inputs for all three notes.
For the mezzanine and the junior tranche the mean cumulative default rate, μ_cd, is clearly contributing the most to the variance, accounting for more than 60% and more than 70%, respectively, of the total variance of the expected loss and the expected weighted average life of the tranches.
For the senior tranche, the first order indices indicated that μ_cd is the largest individual contributor to the variation in expected loss (17%) and that c is the largest individual contributor to the variation in expected weighted average life (24%). However, large parts of the variation in the outputs for the senior tranche came from interactions among input parameters. This indicates that the first order indices alone cannot be used to identify the most important inputs, and more sophisticated sensitivity measures must be used.
In the final section, we propose a new rating approach called global rating. The global rating approach takes into account that the uncertainty in the input parameters propagates through the model and generates uncertainty in the outputs.
The global approach derives the rating of a note from the empirical distribution of ratings generated from uncertainty scenarios. Each scenario is a unique combination of values of the input parameters. An important fact is that this procedure is independent of which rating methodology is used to derive the rating of each global scenario, that is, whether it is based on expected loss or probability of default.
The global rating scale is chosen to reflect the dispersion of the credit risk of a tranche. The idea is to let the global rating reflect a range of possible credit risks.
This scale is superimposed on a rating scale used by a rating agency or by a financial institution. The scale is based on a percentile mapping of the underlying rating scale, that is, a global rating is assigned to a tranche if a predetermined fraction of the ratings generated using the uncertainty scenarios is better than or equal to a given underlying rating.