Modelling the rating process and rating validation

Project Details

Description

The main objective of this project is to formulate a rigorous statistical
framework for the rating process in a risk-based lending environment and to
show its consequences for validation techniques. One of the most important features
of our framework is modeling the intertemporal information structure, or, in
probabilistic terms, the filtration of the probability space. Traditional models
assume that at the observation date the information about the realization of
the default indicator (i.e., whether a borrower is creditworthy or not, or, if a
borrower will default or not during a given period) is available in general, though
not always accessible to all agents. In the canonical case information about the
default indicator is costly, but (at least partially) available (see e.g. Ruckes
(2004)). This assumption is reflected in popular validation measures, like the
Accuracy Ratio or Gini Coefficient, where the discriminatory power of rating
systems is benchmarked against a “perfect” rating system with perfect ex-ante
knowledge of the realizations of the default indicator. In our framework
we assume that the information about the default indicator is revealed after
the observation date, either continuously over time or at the end of a certain
time period. In such a framework, even the optimal processing of the available
information cannot produce perfect forecasts of the default indicator. Hence,
a perfect rating system is one that produces optimal (in a sense to be made
precise later) estimates of the expected value of the default
indicator, i.e., the probability of default (PD). Therefore, any validation method has to focus on a
rating model’s ability to provide optimal PD estimates (calibration quality, see
Bank for International Settlements (2005)).
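To make this concrete, a minimal formal sketch of the intended information structure (the symbols D_T, F_t and the horizon T below are illustrative notation, not taken from the project text) could run as follows. Let D_T in {0,1} be the default indicator for the period ending at the horizon T, and let (F_s) be the filtration describing the flow of information. At the observation date t < T only F_t is available, and since D_T is revealed after t, it is not F_t-measurable; the best attainable forecast is therefore the conditional probability of default

\[
\mathrm{PD}_t \;=\; \mathbb{E}\left[ D_T \mid \mathcal{F}_t \right] \;=\; \mathbb{P}\left( D_T = 1 \mid \mathcal{F}_t \right),
\]

which in general lies strictly between 0 and 1. A benchmark that announces the realization of D_T already at time t is thus infeasible in this setting, and the natural notion of a perfect rating system is one that reports PD_t itself; validation then amounts to checking the calibration of the reported PDs.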
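For the discriminatory-power benchmark mentioned above, the Accuracy Ratio can be illustrated with a short sketch. The function and variable names below are hypothetical and not part of the project; the sketch relies on the standard identity AR = 2·AUC − 1, with AUC computed from pairwise comparisons of defaulters and non-defaulters.

import numpy as np

def accuracy_ratio(pd_estimates, defaults):
    """Accuracy Ratio (Gini coefficient) of a rating system.

    pd_estimates : PD estimates or scores (higher = riskier)
    defaults     : realized default indicators (0/1)

    Uses AR = 2 * AUC - 1, where AUC is obtained from the
    Mann-Whitney statistic (ties counted with weight 1/2).
    """
    pd_estimates = np.asarray(pd_estimates, dtype=float)
    defaults = np.asarray(defaults, dtype=int)
    d = pd_estimates[defaults == 1]   # scores of defaulters
    n = pd_estimates[defaults == 0]   # scores of non-defaulters
    # Fraction of (defaulter, non-defaulter) pairs in which the
    # defaulter received the higher PD estimate.
    greater = (d[:, None] > n[None, :]).sum()
    ties = (d[:, None] == n[None, :]).sum()
    auc = (greater + 0.5 * ties) / (d.size * n.size)
    return 2.0 * auc - 1.0

# Toy example: AR = 1 would require every defaulter to be ranked above
# every non-defaulter, i.e., ex-ante knowledge of the realizations.
defaults = np.array([0, 0, 1, 0, 1, 0])
pds      = np.array([0.01, 0.02, 0.15, 0.03, 0.08, 0.20])
print(accuracy_ratio(pds, defaults))   # 0.5 for this toy sample

In the framework described above, such a benchmark with AR = 1 does not exist, which is one motivation for shifting the focus of validation towards calibration quality.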
Status: Finished
Effective start/end date: 1/11/07 – 28/02/09