Pricing Model for Financial Guaranty Products Using Actuarial Methodology and Most Prudent Principle

Considering feasibility in China’s market and drawing on existing pricing models for financial guaranty products and credit derivatives, a pricing model for financial guaranty products is built using actuarial methodology. In the model, a correlated geometric Brownian motion is chosen as the default process, the most prudent principle is applied to the calibration of the model, and the Beta distribution is proposed to approximate the loss given default (LGD).


Introduction
With the strong support of the country, credit guaranty companies funded by the government or other nonprofit organizations have appeared in China, and the guaranty industry has undergone rapid development. At present, China's credit guaranty industry has begun to take shape. However, traditional credit guaranty business is not enough to constitute a complete financial guaranty system: in the Chinese market, even credit guaranty products with the simplest structures and trading mechanisms have not developed fully enough to meet the needs of social development. In recent years, the country has paid more attention to the financing channels of small and medium-sized enterprises. Against this background, in-depth study of financial guaranty products has important significance for the future development of China's financial industry.

Credit Derivatives Models Suitable for Financial Guaranty Products
As an important theoretical branch, the structural model can be applied to credit derivatives and financial guaranty products. It regards the fluctuation of enterprise value over time as a geometric Brownian motion, treats corporate default as the enterprise value falling below a certain critical value, and uses the value of a put option to describe the value of credit risk transfer.
Many scholars recognize Merton (1974) as the earliest literature discussing the structural model, and a large number of subsequent studies on credit products are conducted on its basis. However, the Merton model assumes that default occurs only at the maturity date, and does not consider the case in which enterprise value drops below the default boundary before the assessment date. The internal logic of the model thus conflicts with the actual situation, which is the main drawback of the Merton model.
To overcome the deficiency of the Merton model, Black and Cox (1976) put forward the first-passage model, which still assumes that the enterprise value follows a geometric Brownian motion. During the life of the contract, once the value of the credit asset drops below a pre-set default threshold, the asset initiates the liquidation process of default. This improvement brings the model closer to the actual situation, but introduces the problem of estimating LGD. Another branch, the intensity (reduced-form) approach, includes the term structure model and the affine intensity model. The term structure model introduces a trend by which default intensity changes over time, while the affine intensity model connects default intensity to the fluctuation process of corporate asset value, allowing the fluctuation parameters of asset value to change over time.

The Application of Actuarial Loss Model in the Pricing of Financial Guaranty Products
One important source of loss models for credit products is credit derivatives pricing models. Once the reliance on risk-neutral pricing theory is removed, a large part of credit derivatives models can be used to estimate actual loss frequencies. In the spirit of the structural model, Gordy (2003) used the actual probability of default to adjust the value of a credit risk portfolio. He pointed out that the current mainstream risk management models can be deduced by this method, including the CreditMetrics software produced by the RiskMetrics company, the CreditRisk+ software produced by Credit Suisse, Credit Portfolio View produced by McKinsey, and the KMV model.
Scholars actively carry out empirical research on these commercial models and provide suggestions for model improvement to companies such as RiskMetrics, Credit Suisse, McKinsey, and KMV. For example, Passalacqua (2007) empirically studied the CreditRisk+ model of Credit Suisse and used a Beta distribution in the modeling of default loss extent.
After reviewing the literature on quantitative analysis of credit derivatives, it is not difficult to find that research on credit derivatives concentrates on the occurrence and frequency of default; little research pays attention to the actual losses after default occurs. So far, common models for LGD mainly involve estimation with a constant recovery rate, or an LGD arising naturally from the fluctuation process of enterprise value. In general, the various types of credit derivatives pricing models have been greatly enriched and expanded, and many of these models and ideas can be applied to the field of financial guaranty products. A major common feature of the quantitative literature on credit derivatives is that it considers the issue under the risk-neutral measure; in other words, structural models basically apply the no-arbitrage assumption to fix prices. Only on the basis of the measure change and the hedging and replication strategies implied by the models can users properly apply them to develop practical trading strategies.

The Most Prudent Principle for PD Estimation
Following Pluto and Tasche (2006), we denote the single-term PD of a credit asset as q. Suppose credit assets fall into rating grades A, B and C, with single-term PDs q_A, q_B and q_C respectively. The grade with the highest creditworthiness is denoted by A, and the grade with the lowest creditworthiness by C. Neither in A nor in B nor in C did any default occur during the last observation period, and the estimate of PD takes reference only of information in this cycle and is not affected by other information.
Apparently, q takes values in the interval [0, 1]. Intuitively, the PDs should satisfy the order relation q_A ≤ q_B ≤ q_C, consistent with the ordering of real creditworthiness. The historical data sample is the event that no defaults occurred in A, B or C during the last observation period. The obligors are distributed to rating grades A, B and C with frequencies n_A, n_B and n_C, and default events are independent. We first consider the estimate of q_C. Obviously,

P(no default occurred in C) = (1 − q_C)^{n_C}.   (1)

Thus, we take (1 − q_C)^{n_C} in equation (1) as the test statistic: at a given confidence level γ, the most prudent estimate is the largest q_C still satisfying (1 − q_C)^{n_C} ≥ 1 − γ, namely

q̂_C = 1 − (1 − γ)^{1/n_C}.   (5)

Then, we consider the estimate of q_B. Of course, q_B could be estimated in the same way as in equation (5). However, this would not reflect the fact that the credit rating of grade B is higher than that of grade C. Therefore, we need to consider the credit assets in grades B and C together:

P(no default occurred in B or C) = (1 − q_B)^{n_B} (1 − q_C)^{n_C}.   (6)

At this time, if we substitute the result of equation (5) into the right side of equation (6) as the test statistic, we inevitably get q̂_B = 0, which is unreasonable. Therefore, we appropriately adjust the test statistic to relax the restriction. Noticing that q_B ≤ q_C implies (1 − q_B)^{n_B} (1 − q_C)^{n_C} ≤ (1 − q_B)^{n_B + n_C}, we take this upper limit of the statistic, attained in the most prudent configuration q_B = q_C, as the test statistic; then the most prudent estimate of q_B is

q̂_B = 1 − (1 − γ)^{1/(n_B + n_C)}.   (7)

Similarly, the most prudent estimate of q_A is

q̂_A = 1 − (1 − γ)^{1/(n_A + n_B + n_C)}.   (8)

If there are more credit ratings, we get their most prudent estimates in the same way.
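As a numerical check on equations (5), (7) and (8), the following sketch computes the three most prudent estimates for hypothetical grade sizes n_A, n_B, n_C and confidence level γ (all numbers illustrative, not taken from the paper):

```python
def most_prudent_pd_no_defaults(n_pooled, gamma):
    """Most prudent PD at confidence level gamma when a pooled portfolio of
    n_pooled obligors shows zero defaults: the largest q still satisfying
    (1 - q)**n_pooled >= 1 - gamma, i.e. q = 1 - (1 - gamma)**(1/n_pooled)."""
    return 1.0 - (1.0 - gamma) ** (1.0 / n_pooled)

# Hypothetical grade sizes and a 90% confidence level (illustrative numbers).
n_A, n_B, n_C, gamma = 100, 400, 300, 0.9

q_C = most_prudent_pd_no_defaults(n_C, gamma)              # equation (5)
q_B = most_prudent_pd_no_defaults(n_B + n_C, gamma)        # equation (7)
q_A = most_prudent_pd_no_defaults(n_A + n_B + n_C, gamma)  # equation (8)

# Pooling the lower grades enforces the order relation q_A <= q_B <= q_C.
assert q_A <= q_B <= q_C
print(q_A, q_B, q_C)
```

Because a larger pool appears in the exponent for the better grades, the estimates automatically respect the creditworthiness ordering.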
If, instead of no defaults, the known information discloses a few defaults, we can still get the most prudent estimates in a similar way. This issue will be illuminated later on.
The intuitive meaning of equations (5), (7) and (8) is: an asset with a high credit rating is less likely to break its contract than one with a low credit rating. Assets in a high credit rating are likely first to be downgraded to a low credit rating, and only then to slump gradually towards default. Therefore, the fact that no defaults occurred in a low credit rating should be fully considered in the PD estimation of a high credit rating, as it plays a "supporting role" for the credit status of the high rating.

The Introduction of Correlation and the Modeling of the Asset Value Process
For more complex cases, it is easy to see from the above derivation that if default events of credit assets have good independence (namely, defaults of different credit assets are independent, and the default likelihood of the same credit asset in different periods is also independent), then we only need to consider two events for each individual credit asset, default and non-default, and the asset PD can be estimated under the most prudent principle. The specific expressions are similar to equations (5), (7) and (8).
However, the correlation of credit assets is an indispensable factor in credit product research, and it is difficult to describe this correlation with a single-term PD alone. To overcome this problem, we must introduce an appropriate stochastic process to describe how defaults occur. This paper follows the ideas of Gordy (2003) and similar literature, describing default as the corporate asset value reaching the default threshold for the first time.
Based on the original model, we put forward the following hypotheses: (Ⅰ) The asset fluctuation process of each enterprise may be unobservable. The model in this article does not need to actually specify the asset fluctuation process and default threshold for each enterprise; we just assume that there is an inherent stochastic process for each enterprise, and the value of this process determines whether the company breaks its contract.
(Ⅱ) The value fluctuation process of enterprise assets is assumed to follow a Brownian motion.
(Ⅲ) To simplify the calculation, we restrict defaults to occur only at the end of each cycle. Therefore, for the Brownian motion corresponding to asset value, we only need to consider the increment over each cycle, and this increment follows a normal distribution.
From the beginning of debt credit ratings until now, the total number of credit assets that ever existed or currently exist in the market is N. We denote the existence period of credit asset n as [t(n), T(n)], n = 1, 2, …, N.
Considering the asset fluctuation process on discrete time points, this paper supposes t(n), T(n) ∈ ℕ. Namely, asset values are observed at the discrete time points t(n), t(n)+1, …, T(n); the value of asset n at time t is denoted V_{t,n}. Similar to Pluto and Tasche (2006), V_{t,n} follows the standard normal distribution. The correlation of different assets at the same time point is

corr(V_{t,m}, V_{t,n}) = ρ, m ≠ n,   (9)

where corr represents the correlation coefficient of two random variables.
To construct this asset value process, we introduce a stochastic process that influences the value of every asset, called the system factor and denoted S_t, such that

V_{t,n} = √ρ · S_t + √(1 − ρ) · ξ_{t,n},   (10)

where the idiosyncratic terms ξ_{t,n} are independent standard normal random variables, independent of {S_t}; with this construction equation (9) is established.
According to Pluto and Tasche's (2006) ideas, system factors at different time points have a certain correlation, which diminishes exponentially as the time interval increases. This correlation is expressed as

corr(S_s, S_t) = ϑ^{|s−t|}, 0 < ϑ < 1.   (11)

A construction satisfying the system factor fluctuation process (11) is as follows. Let S_0 and the sequence of random variables {w_t} be mutually independent standard normal random variables, also independent of the {ξ_{t,n}} in equation (10), and define

S_t = ϑ S_{t−1} + √(1 − ϑ²) w_t, t = 1, 2, ….   (12)

Proposition 1: For the sequences of random variables defined by equations (10) and (12): (Ⅰ) S_t admits the representation

S_t = ϑ^t S_0 + √(1 − ϑ²) Σ_{i=1}^{t} ϑ^{t−i} w_i;   (13)

(Ⅱ) S_t and V_{t,n} follow the standard normal distribution and satisfy the requirements of equations (9) and (11).
Proof: We first prove equation (13) by mathematical induction. When t = 0 or t = 1, equation (13) holds directly from the definition in equation (12). For l ≥ 1, suppose equation (13) is true when t = l. From equation (12), we obtain

S_{l+1} = ϑ S_l + √(1 − ϑ²) w_{l+1} = ϑ^{l+1} S_0 + √(1 − ϑ²) Σ_{i=1}^{l+1} ϑ^{l+1−i} w_i,

namely equation (13) is true when t = l + 1. Therefore equation (13) is proved. Substituting equation (13) into equation (10) yields equation (14):

V_{t,n} = √ρ (ϑ^t S_0 + √(1 − ϑ²) Σ_{i=1}^{t} ϑ^{t−i} w_i) + √(1 − ρ) ξ_{t,n}.   (14)
It is easy to see that S_t and V_{t,n} follow normal distributions, by the basic property that linear combinations of independent normal random variables are normal.
From equation (13) and the independence of S_0 and {w_i}, the expectation and variance of S_t are respectively

E[S_t] = 0, Var[S_t] = ϑ^{2t} + (1 − ϑ²) Σ_{i=1}^{t} ϑ^{2(t−i)} = ϑ^{2t} + (1 − ϑ^{2t}) = 1,

so {S_t} is a sequence of standard normal random variables.
Further, from the independence between ξ_{t,n} and {w_t}, between ξ_{t,n} and S_0, and equation (13), we know that ξ_{t,n} and S_t are independent. Then, from equation (10) and the fact that ξ_{t,n} follows the standard normal distribution, we obtain that V_{t,n} is standard normal with, for m ≠ n,

corr(V_{t,m}, V_{t,n}) = ρ Var(S_t) = ρ,

so equation (9) holds. Finally, for s < t, iterating equation (12) gives S_t = ϑ^{t−s} S_s + √(1 − ϑ²) Σ_{i=s+1}^{t} ϑ^{t−i} w_i, where the sum is independent of S_s, hence corr(S_s, S_t) = ϑ^{t−s}. Then equation (11) is proved, namely all the conclusions of Proposition 1 are true.
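The construction in equations (10) and (12) and the conclusions of Proposition 1 can be verified numerically. The sketch below, with purely hypothetical parameter values ρ, ϑ and dimensions, simulates the system factor and asset values and checks the correlations of equations (9) and (11) empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, theta = 0.12, 0.3          # hypothetical asset / inter-period correlations
T, n_assets, n_paths = 5, 4, 200_000

# System factor, equation (12): S_0 ~ N(0,1) and
# S_t = theta*S_{t-1} + sqrt(1 - theta^2)*w_t,
# which keeps every S_t standard normal with corr(S_s, S_t) = theta**|s-t|.
S = np.empty((n_paths, T))
S[:, 0] = rng.standard_normal(n_paths)
for t in range(1, T):
    S[:, t] = theta * S[:, t - 1] + np.sqrt(1 - theta**2) * rng.standard_normal(n_paths)

# Asset values, equation (10): V_{t,n} = sqrt(rho)*S_t + sqrt(1-rho)*xi_{t,n},
# standard normal with corr(V_{t,m}, V_{t,n}) = rho for m != n (equation (9)).
xi = rng.standard_normal((n_paths, T, n_assets))
V = np.sqrt(rho) * S[:, :, None] + np.sqrt(1 - rho) * xi

# Empirical correlations should match the theoretical values.
print(np.corrcoef(S[:, 0], S[:, 1])[0, 1])        # close to theta
print(np.corrcoef(S[:, 0], S[:, 2])[0, 1])        # close to theta**2
print(np.corrcoef(V[:, 0, 0], V[:, 0, 1])[0, 1])  # close to rho
```

With 200,000 paths the sampling error of each correlation estimate is on the order of 0.002, so the agreement with ϑ, ϑ² and ρ is visible directly.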
According to the above hypotheses and the conclusions of Proposition 1, the model in this paper has depicted the credit asset value process and its correlation. We will consider the modeling of default events in the following parts of this paper.

Modeling of Default Event
In the previous section, we assumed that the total number of credit assets that ever existed or currently exist in the market is N. Considering that the total number of credit assets and the number in each credit rating change over time, we need to introduce the following notation to describe the contents of this section.
Credit ratings are denoted in descending order of quality as 0, 1, 2, …, K, where 0 indicates inexistence (including unissued or matured situations) and therefore has no probability of default; 1 indicates the highest credit rating (like AAA); and K indicates default.
In normal circumstances, the credit status of an asset changes at some point during the year, not necessarily at the beginning or end of the year. In this case, we make an approximation: for an asset that stays in a credit rating for less than one year, the proportion of the year it spends at that level is counted toward the number of assets at that credit level. The specific calculation method is given in the following Definition 2.
Definition 2: The time bucket of asset n in rating k is the set of periods during which asset n carries rating k. For a given year t, the number of assets in rating k contributed by asset n in year t, denoted N_{k,n}(t), is the fraction of year t that asset n spends in rating k, and the time-weighted number of assets in rating k in year t is

N_k(t) = Σ_{n=1}^{N} N_{k,n}(t).

Although N_k(t) in this definition represents a number of assets, it is not necessarily an integer. For example, suppose a credit asset n_0 was in rating 1 before July 1, 2000 and was reduced to rating 2 by rating agencies on July 1, 2000. In 2000, asset n_0 spends half a year in rating 1 and half a year in rating 2, and thus increases the number of credit assets in rating 1 and in rating 2 in 2000 by 0.5 each.

Based on the fluctuation process of asset values, for a given credit rating k with probability of default p_k, which is the PD of a single asset in the rating, we build the default threshold model

D_k = Φ^{−1}(p_k),

where Φ is the standard normal distribution function. For any asset n in rating k, when V_{t,n} ≤ D_k, that is, when the corporate assets cannot pay off the debts, the asset defaults and so does the company; from then on, the company enters a follow-up liquidation process instead of following the original asset value process. Consistent with the actual situation, the PDs satisfy p_1 < p_2 < … < p_{K−1}. Because the value processes of different assets are correlated, we can introduce the most prudent principle into the estimation of PD in this model.
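A minimal sketch of Definition 2 and the threshold model. The per-asset year fractions are a hypothetical input shape chosen for illustration; the threshold follows D_k = Φ^{−1}(p_k) as above:

```python
from statistics import NormalDist

def default_threshold(p_k):
    """Threshold D_k = Phi^{-1}(p_k): a standard-normal asset value defaults
    with probability p_k under the rule V <= D_k."""
    return NormalDist().inv_cdf(p_k)

def time_weighted_count(year_fractions):
    """Time-weighted number of assets in one rating for one year: each asset
    contributes the fraction of that year it spent in the rating.
    `year_fractions` (one entry per asset) is a hypothetical input shape."""
    return sum(year_fractions)

# The example from the text: an asset rated 1 until July 1, 2000, then
# downgraded to rating 2, contributes 0.5 to each rating's count for 2000.
print(time_weighted_count([0.5]))         # 0.5
print(round(default_threshold(0.01), 4))  # about -2.3263
```

Lower p_k pushes the threshold further into the left tail, so better-rated assets need a larger negative shock to default.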

The Deduction of the No-Default Probability and the Most Prudent Estimation
In order to use the most prudent principle for estimation, we have to deduce the probability of no default (or of only a small number of default records, such as the cases with 1 or 2 default records) among the credit assets.
Proposition 2: If the credit rating of asset n in year t is k, with 0 < k < K, then under the condition of given system factors the probability that credit asset n has no default in year t is

1 − p_k(S_t), where p_k(s) = Φ((Φ^{−1}(p_k) − √ρ s) / √(1 − ρ)).   (23)

Because defaults are conditionally independent given the system factors, the probability that no asset in any credit level of quality no higher than grade k defaults is

P_0 = E[ Π_t Π_{j ≥ k} (1 − p_j(S_t))^{N_j(t)} ].   (24)

Next, we use equation (25), which presents the probability measure of the system factors, to integrate equation (23), and we get the right-hand expression of equation (24), the unconditional probability of no default. The density of the factor path S = (S_1, …, S_T) is the multivariate normal density

φ_Σ(s) = (2π)^{−T/2} |Σ|^{−1/2} exp(−½ sᵀ Σ^{−1} s), with Σ_{s,t} = ϑ^{|s−t|},   (25)

where the subscript 0 of P_0 denotes that the historical observation of default events is zero.
Setting P_0 ≥ 1 − γ at a given confidence level γ, we can then get the most prudent estimate of the probability of default p_k.
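The unconditional no-default probability of Proposition 2 rarely has a closed form, so the most prudent estimate must be obtained numerically. The sketch below restricts to a single period and a single grade for clarity (the multi-period, multi-grade case would integrate over the whole factor path instead); it estimates the no-default probability by Monte Carlo over the system factor and inverts it by bisection. All parameter values are illustrative:

```python
import math
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(1)

def prob_no_default(p, n_obligors, rho, n_sims=20_000):
    """Monte Carlo estimate of the unconditional no-default probability for
    n_obligors sharing one system factor draw S: defaults are conditionally
    independent given S, each with conditional PD
    p(S) = Phi((Phi^{-1}(p) - sqrt(rho)*S) / sqrt(1 - rho))."""
    c = nd.inv_cdf(p)
    s = rng.standard_normal(n_sims)
    cond_pd = np.array([nd.cdf((c - math.sqrt(rho) * si) / math.sqrt(1.0 - rho))
                        for si in s])
    return float(np.mean((1.0 - cond_pd) ** n_obligors))

def most_prudent_pd(n_obligors, gamma, rho):
    """Largest PD still consistent with zero observed defaults at confidence
    level gamma: solve prob_no_default(p) = 1 - gamma for p by bisection."""
    lo, hi = 1e-6, 0.5
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if prob_no_default(mid, n_obligors, rho) > 1.0 - gamma:
            lo = mid   # no-default probability still above 1 - gamma: raise p
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With ρ = 0 this reproduces the independence formula of equation (5); positive correlation makes joint survival more likely at any given PD, so the prudent bound widens.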

The Most Prudent Estimation with Few Defaults
We set the number of historical defaults to d (0 ≤ d ≤ N, usually d ≪ N) and consider the estimation under the most prudent principle. In this case, the probability of observing at most d defaults replaces P_0 as the test statistic, and it is a sum over all possible configurations of the d default events. Because this expression is complicated and difficult to write as a unified formula, and the practical problems we study do not require a large d, formulas for higher d are not derived further in this paper. Similar to equation (27), we can get the most prudent estimate.
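In the independence case, the few-defaults analogue is explicit: the probability of at most d defaults among N assets is a binomial tail, and the most prudent estimate is the largest PD keeping that tail above 1 − γ. A sketch (in the correlated case, the binomial tail would additionally be integrated over the system factor):

```python
from math import comb

def prob_at_most_d(d, n, q):
    """P(at most d defaults among n independent obligors, each with PD q)."""
    return sum(comb(n, i) * q**i * (1.0 - q)**(n - i) for i in range(d + 1))

def most_prudent_pd_few_defaults(d, n, gamma):
    """Most prudent PD with d observed defaults (independence case): the
    largest q with prob_at_most_d(d, n, q) >= 1 - gamma, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if prob_at_most_d(d, n, mid) > 1.0 - gamma:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# d = 0 reproduces the no-default formula 1 - (1 - gamma)**(1/n); each extra
# observed default pushes the prudent bound upward.
print(most_prudent_pd_few_defaults(0, 300, 0.9))
print(most_prudent_pd_few_defaults(1, 300, 0.9))
```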

Loss Given Default Model and Risk Cost Pricing
The extent of loss given default is generally replaced by its expectation. We use the idea of Passalacqua (2007) to make the model more sophisticated, applying the Beta distribution as the model of default loss extent. We assume that, under the condition that a default occurs, the ratio of the loss to the guarantee amount follows a Beta(a, b) distribution. The density function of the Beta distribution takes various shapes, so we can choose an appropriate parameter combination according to the actual situation. A problem the Beta distribution may encounter in the pricing process is that, with little default data, it is difficult to find a strong basis for determining the parameter pair (a, b). However, compared with a single-point assumption, a Beta distribution of loss given default is more scientific. Based on the existing model and the parameter estimates of PD and loss given default, we apply Monte Carlo simulation to generate the asset value fluctuation process defined by formulas (10) and (12). We determine the default events along each random path according to the default threshold. Then, based on the values of loss given default, we comprehensively consider the results across paths, and use the average loss or a value-at-risk measure to determine the risk cost of the credit portfolio and draw the corresponding price of the guaranty product.
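The pricing recipe of this section can be sketched end to end: simulate correlated asset values, flag defaults via the thresholds D_k = Φ^{−1}(p_k), draw loss fractions from Beta(a, b), and read off the average loss and a value-at-risk figure. The sketch uses a single period and a purely illustrative portfolio and parameter set:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
nd = NormalDist()

# Hypothetical portfolio (illustrative only): per-obligor PDs and guarantee
# amounts; Beta(a, b) models the loss fraction given default; rho is the
# one-period asset correlation.
pds = np.array([0.010, 0.020, 0.015])
amounts = np.array([100.0, 80.0, 120.0])
a, b, rho = 2.0, 5.0, 0.12
n_paths = 100_000

# Default thresholds D_k = Phi^{-1}(p_k).
thresholds = np.array([nd.inv_cdf(p) for p in pds])

# One-factor asset values V = sqrt(rho)*S + sqrt(1-rho)*xi; default iff V <= D.
S = rng.standard_normal((n_paths, 1))
xi = rng.standard_normal((n_paths, len(pds)))
V = np.sqrt(rho) * S + np.sqrt(1 - rho) * xi
defaulted = V <= thresholds

# Loss fraction drawn from Beta(a, b) wherever a default occurred; the risk
# cost is the average portfolio loss, while a tail quantile gives a
# value-at-risk style figure.
lgd = rng.beta(a, b, size=defaulted.shape)
losses = (defaulted * lgd * amounts).sum(axis=1)
risk_cost = losses.mean()
var_99 = np.quantile(losses, 0.99)
print(risk_cost, var_99)
```

The average loss should be close to Σ p_i · A_i · a/(a+b), since the threshold construction preserves each marginal PD; the correlation mainly fattens the tail that the value-at-risk figure captures.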

Conclusion
According to the characteristics of financial guaranty products, we use the ideas of the actuarial method to determine their risk cost. The most prudent principle is applied to the most critical part, the estimation of PD. At present, existing research on the most prudent estimation stops at the stage of measuring PD; in this paper, the idea is applied to the pricing of guaranty products, combined with the distribution of loss extent to obtain guaranty product prices.