Week 3: The Value-at-Risk > Lesson 1: Introducing Value-at-Risk > Video Lesson
- The VaR is a measure that tries to answer a simple but significant question: How bad can things get, in terms of losses, when we invest, lend money, and so on? In more probabilistic terms, we look for a measure that tells us: With probability alpha we will not lose more than V euros in time T. The quantity V is the VaR.
- The VaR can be computed using two different distributions: the distribution of losses, or the distribution of gains, where a loss is a negative gain.
- More formally, the VaR is nothing but a quantile of the loss distribution, and in particular the alpha-quantile for which the probability of observing a larger loss, given the available information, is equal to 1-alpha (a formal statement is sketched at the end of this list).
- Given a loss distribution, the 90% VaR is the threshold loss for which 90% of losses are smaller and only 10% are larger.
- Naturally, always with respect to the loss distribution under scrutiny.
- As you can imagine this makes no difference for continuous loss distributions, as typically assumed in credit risk models.
- The quantity capital F is the cumulative distribution function of losses.
- If you are not familiar with this terminology and these concepts, such as the cumulative distribution function and the quantile, please refer to the prerequisites of this course.
- For the moment, VaR and C-VaR are just synonyms for us.
- The VaR plays a major role in market and operational risks too.
- The Value-at-Risk essentially depends on 2 elements: the loss distribution, and the alpha value.
- A loss distribution is always expressed over a time horizon T and it can be empirical or theoretical.
- In the first case, it is the so-called historical distribution, that is the distribution that emerges from the observation of reality, when we collect data about historical losses.
- In the second case, it can be any distribution, and it is essentially used for modeling purposes.
- Since we have no additional information, we may assume that this distribution is continuous.
- What is the 90% VaR? In other words, what is the threshold loss such that the probability of observing a larger loss is only 10%? Clearly, given the uniform distribution of losses between -80 and 20 million euros, this quantity is 10 million euros.
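The points above can be condensed into a short formal statement. Below, F is the cumulative distribution function of losses mentioned earlier; the explicit form of F for a uniform distribution on [-80, 20] million euros is the only step added to the lesson's reasoning.

```latex
% VaR as the alpha-quantile of the loss distribution with CDF F
\mathrm{VaR}_{\alpha}(L) = \inf\{\ell \in \mathbb{R} : F_{L}(\ell) \ge \alpha\}

% Uniform losses on [-80, 20] million euros:
% F_L(\ell) = (\ell + 80)/100, so F_L(\ell) = 0.90 gives \ell = 0.90 \cdot 100 - 80 = 10
\mathrm{VaR}_{0.90}(L) = 10 \ \text{million euros}
```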
Week 3: The Value-at-Risk > Lesson 2: Special VaRs and the Expected Shortfall > Video Lesson
- Today we will start with an exercise, because it is very important that you are familiar with the computation of the VaR.
- The question is: what is the VaR at alpha level 0.98, that is the 98% VaR? And what happens if alpha is 0.99? To solve this type of exercise, it may be convenient to write all the data we have in table format.
- Our table starts with three columns: losses, probability as a percentage and probability as a decimal.
- To every loss, we can associate the corresponding probability, as a percentage and as a decimal.
- We then add a fourth column, containing the cumulative probabilities as decimals: 0.94, 0.97, 0.99 and 1.
- Now, the first question asks us to find the 1-year 0.98 VaR.
- For the 0.99 VaR we do the same, and 0.99 is already in our plot.
- What’s the VaR? Here we can use a convention.
- In situations like this, the VaR corresponds to the average loss in the segment.
- Another type of VaR, which is often used, is the so-called mean-VaR, that is to say, the VaR centered around the mean of the loss distribution.
- It is simply given by the difference between the VaR and the mean mu.
- If we assume a specific distribution for the loss distribution, we can obtain special formulas for the VaR.
- For a Gaussian distribution, the VaR is simply computable using the quantile function of a standard Gaussian, that is to say a Normal(0,1).
- The VaR alpha for a normal distribution, with mean mu and standard deviation sigma, is equal to mu plus sigma times the alpha quantile of a standard Gaussian.
- Please notice the notation: “var” in small letters is the variance and not the Value-at-Risk! And now, another exercise.
- Now, the question is: what are the 95% and 98% VaRs? To solve this exercise we can use the standard normal tables or R (see the first sketch after this list). The 0.95 quantile of a standard Gaussian is about 1.6449.
- Now assume that things go bad, and that we can observe a loss which is greater than our VaR alpha.
- From a statistical point of view, the expected shortfall at level alpha is a sort of mean excess function, i.e. the average of all the losses exceeding a special threshold, the VaR alpha! Why is the expected shortfall important? Simple: two loss distributions may have the same value of VaR alpha, but two different expected shortfalls.
- What is the expected shortfall for alpha equal to 0.95? And 0.99? By the way, what is the 95% VaR in this case? Can you see it immediately? To compute the expected shortfall, the trick is to sort the losses as usual.
- Then we start from the bottom, i.e. from the largest losses and we go backward, summing the corresponding probabilities, until we reach the 1-alpha level.
- If alpha is 95% then 1-alpha is 5%. Look at the table: the 25 million loss has a chance of 0.5%. Is this equal to 5%? No. Hence we sum the probability of the 20 million loss, getting 3%. Is this equal to 5%? Not yet, therefore we also sum the probability of the 12 million loss.
- Summing the three largest losses, the total probability is 5%. The 95% expected shortfall is nothing more than the weighted average of these three losses, where the weights are their occurrence probabilities and the denominator is 5% (see the second sketch after this list). Our expected shortfall is therefore 17.3 million.
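A minimal R sketch of the Gaussian formula discussed above. The mean and standard deviation used here are illustrative placeholders, since the actual parameters of the exercise are not reported in these notes.

```r
# Gaussian VaR: VaR_alpha = mu + sigma * q_alpha, with q_alpha the standard normal quantile
mu    <- 2     # mean of the loss distribution (hypothetical value)
sigma <- 10    # standard deviation of the loss distribution (hypothetical value)

qnorm(0.95)               # 0.95 quantile of a standard Gaussian, approximately 1.6449
mu + sigma * qnorm(0.95)  # 95% VaR
mu + sigma * qnorm(0.98)  # 98% VaR
```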
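A minimal R sketch of the expected shortfall computation described above. The 25, 20 and 12 million losses and the 0.5% probability are quoted in the lesson; the 2.5% and 2% probabilities are inferred from the cumulative 3% and 5% figures, so they are a reconstruction.

```r
# 95% expected shortfall for the discrete example above (losses in millions)
loss <- c(25, 20, 12)          # the three largest losses
prob <- c(0.005, 0.025, 0.02)  # their probabilities; they sum to 1 - alpha = 0.05

alpha <- 0.95
sum(prob)                       # 0.05, i.e. exactly the 1 - alpha tail
sum(loss * prob) / (1 - alpha)  # probability-weighted average: 17.3 million, as in the lesson
```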
Week 3: The Value-at-Risk > Lesson 2: Special VaRs and the Expected Shortfall > Quickly computing the VaR with R
- Assume that our loss distribution follows a Normal or Gaussian with mean 0, and standard deviation 1.
- We type qnorm(0.95, 0, 1), where 0.95 is the alpha level of the VaR, and 0 and 1 are respectively the mean and the standard deviation of our normal distribution.
- In the case of a standard Gaussian, with mean 0 and standard deviation 1, the information about the mean and the standard deviation can be omitted.
- For all the other normal distributions, the information about the mean and the standard deviation is essential and cannot be ignored! So, here we can see some examples.
- Just click on “Import Dataset” and choose the location of your file, which can be local or online.
- When we import a dataset, we can select different options, such as the name we want to assign to the data in R Studio.
- We can also tell R Studio that there is a header, which kind of separator is used in the data, and so on.
- Once we have imported the data, we can have a look at them in the inspector and we can plot them.
- What is the 95% VaR for this loss distribution? Here we can use the quantile function, which computes empirical quantiles.
- We type quantile and then we provide the name of the data we are using and the alpha level in decimals.
- Just consider our data: we have a total of 1504 data points.
- If we want to select the nearest existing data point, in statistical terms the closest order statistic, which satisfies the alpha level we have chosen, we can use the option type and select the number 3 (a sketch with simulated data follows at the end of this list).
- Is our VaR correct? When we work with actual data, it is almost impossible to obtain exact values; we usually obtain approximations.
- 76 observations correspond to 5.05% of all the observations we have in our data set, that is 1504.
- In this case you can import your data by using many different commands.
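A minimal sketch of the commands discussed above. The course dataset is not reproduced here, so simulated losses stand in for the 1504 imported observations.

```r
# Empirical 95% VaR with the quantile() function
set.seed(1)
losses <- rnorm(1504)   # placeholder for the imported loss data (1504 points in the video)

quantile(losses, probs = 0.95)             # default empirical quantile (interpolated)
quantile(losses, probs = 0.95, type = 3)   # type = 3 picks the nearest order statistic

# Sanity check in the spirit of the lesson: roughly 5% of the observations
# should exceed the estimated VaR
v <- quantile(losses, probs = 0.95, type = 3)
mean(losses > v)
```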
Week 3: The Value-at-Risk > Lesson 3: Coherent Measures of Risk and Back-testing > Video Lesson
- In this class we will discuss one of the main drawbacks of Value-at-Risk, that is to say the fact that the VaR, as a measure of risk, is often not coherent.
- What does that mean? In general terms, a risk measure is a function that is used to quantify risk.
- Do you remember what we have said during week 1? From a mathematical point of view, a measure of risk is a function phi that maps from the linear space of losses to the extended real line.
- A measure of risk is said to be coherent if it is monotone, sub-additive, positively homogeneous and translation invariant (these properties are restated formally after this list).
- Monotonicity means that, if Z1 and Z2 are two losses and Z1 is smaller than Z2, then the value of the risk measure at Z2 is greater than or equal to the value of the risk measure at Z1. Sub-additivity means that the risk measure assigned to a combination of losses is smaller than or equal to the sum of the risk measures assigned to the single losses.
- Translation invariance can be read in two ways: a mathematical one, in which adding a constant to a loss does not change its risk profile; and a second, more economic one, in which adding a scalar amount of cash to a position reduces its risk by the same quantity.
- If one of these properties is not respected, a measure of risk is not coherent.
- We can also assume the risk measure to be normalized, even if this is not strictly necessary for a risk measure to be coherent.
- What are the economic interpretations of these properties? Monotonicity tells us that it is always possible to have an ordering of losses and of their risk profiles.
- Sub-additivity guarantees that our coherent risk measure is in favor of diversification of risk.
- The risk of a portfolio of loans or securities should be smaller than the simple sum of single risks.
- Positive homogeneity simply tells us that risk scales proportionally: if your portfolio may lose 10 dollars and you double the amount of money in that portfolio, the possible loss doubles to 20 dollars.
- In the short run, we can assume money to be risk free, that is we can ignore inflation.
- If we add some of this risk-free money to the portfolio, its overall risk decreases, since it now contains some liquidity that can be used to hedge risk.
- Let us now consider an exercise, to show that the VaR is not always coherent.
- In particular we show that the VaR is not generally sub-additive, hence it appears to be against diversification.
- The 97.5% VaR is 1 million for each investment.
- What is the joint VaR of a portfolio made of the two investments? Since we are assuming independence, computations are rather simple.
- The 97.5% VaR of the portfolio is obtained by drawing the usual horizontal line passing through 0.975.
- Now notice the following: the joint VaR is 11 million pounds, which is larger than 2 million, the sum of the two individual VaRs, so sub-additivity is violated (a numerical sketch follows after this list).
- In general, we can say that the ES is always coherent, while the VaR is not.
- There are special cases in which the VaR is coherent, since it turns out to be sub-additive.
- If the loss distribution is a Gaussian distribution, it is not difficult to show that the VaR is coherent.
- Back-testing is a validation procedure often used in risk management.
- The idea is simply to verify the performances of the chosen risk measure on historical data.
- The idea is to count the number of days in the past in which the actual loss was larger than our 99% VaR.
- If exceptions are, say, 5%, then we are probably underestimating the actual VaR.
- Naturally it may also happen that we overestimate VaR! Standard back-testing is based on Bernoulli trials generating binomial random variables.
- If our VaR alpha is accurate, the probability p of observing an exception is 1-alpha.
- If this probability is smaller than the chosen significance level, we reject the null hypothesis that our VaR is ok, thus rejecting the VaR.
- If the probability is greater than the significance level, the VaR appears to be ok.
- We want to back-test our VaR using 900 days of data.
- Should we reject our VaR? What happens with 20 exceptions? We will solve this exercise using R. So…let’s go.
- Since we are interested in a right-tail probability, given that we want to see whether our VaR is underestimated, we can simply type 1-pbinom(11, 900, 0.01) (see the back-testing sketch at the end of this list).
- Since 0.1960 is greater than 0.05, we cannot reject the null hypothesis, and this means that our VaR is correct.
- This quantity is smaller than 0.05, hence we reject the null hypothesis: our VaR is underestimating risk.
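For reference, one common way of writing the four coherence properties for a risk measure phi defined on losses; sign conventions, especially for translation invariance, differ across textbooks, so this is a sketch rather than the lecture's exact notation.

```latex
\text{Monotonicity: }           L_1 \le L_2 \;\Rightarrow\; \varphi(L_1) \le \varphi(L_2) \\
\text{Sub-additivity: }         \varphi(L_1 + L_2) \le \varphi(L_1) + \varphi(L_2) \\
\text{Positive homogeneity: }   \varphi(\lambda L) = \lambda\,\varphi(L), \quad \lambda \ge 0 \\
\text{Translation invariance: } \varphi(L + c) = \varphi(L) + c, \quad c \in \mathbb{R}
```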
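A numerical sketch of the sub-additivity exercise. The lecture only quotes the resulting VaRs (1 million per investment, 11 million for the portfolio); the two-point loss distribution assumed below, a 10 million loss with probability 2% and a 1 million loss otherwise, is not stated in the lesson but reproduces those figures.

```r
# VaR is not sub-additive: a reconstruction consistent with the figures in the lesson
p_big <- 0.02   # assumed probability of the large 10-million loss for each investment

# Individual 97.5% VaR: P(loss > 1) = 2% < 2.5%, so VaR_0.975 = 1 for each investment
var_single <- 1

# Portfolio of the two independent investments: possible losses 2, 11, 20 (millions)
port_loss <- c(2, 11, 20)
port_prob <- c((1 - p_big)^2, 2 * p_big * (1 - p_big), p_big^2)

# 97.5% VaR of the portfolio: smallest loss whose cumulative probability reaches 0.975
cum <- cumsum(port_prob)
var_port <- port_loss[min(which(cum >= 0.975))]

var_port         # 11 million, as in the lesson
2 * var_single   # 2 million: the sum of the individual VaRs, so sub-additivity fails
```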
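Finally, the back-testing computation in R, using the command quoted above. The argument 11 corresponds to computing P(X >= 12), that is to having observed 12 exceptions, which is what 1-pbinom(11, 900, 0.01) implies.

```r
# Back-testing the 99% VaR over 900 days: under the null hypothesis,
# the daily probability of an exception is 1 - alpha = 0.01

1 - pbinom(11, 900, 0.01)   # P(X >= 12): approx. 0.196 > 0.05, do not reject the VaR
1 - pbinom(19, 900, 0.01)   # P(X >= 20): well below 0.05, reject: the VaR underestimates risk
```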
Week 3: The Value-at-Risk > Summary > Video
- The VaR is an omnipresent measure of risk in risk management.
- In the field of credit risk, it is often called C-VaR, or credit risk VaR.
- From a purely statistical point of view, we have said that the VaR is nothing more than a quantile.
- A VaR at confidence level alpha, is therefore the alpha quantile of the loss distribution.
- Here we are, to define the VaR we need a loss distribution.
- The VaR alpha is simply the threshold loss such that the probability of observing a larger loss is 1-alpha.
- It is very important that you understand how to compute the VaR.
- We have then considered derivations of the VaR, such as the mean-VaR, that is to say, the VaR centered around the mean.
- Or specific VaRs, for distributions like the Gaussian and the Student-t. We have also introduced the expected shortfall, a measure of risk that quantifies the average loss above the VaR.
- It can be shown that while the expected shortfall is always coherent, the VaR is not.
- Back-testing is the set of statistical tests we can use to verify whether our Value-at-Risk, our VaR, is reliable or not, on the basis of historical data.
Week 3: The Value-at-Risk > The Sofa > Discussing the VaR
- During the last week, we have considered the VaR, some of its derivations, and another measure which is strictly related to the VaR, that is to say the Expected Shortfall.
- Now, under Basel II and Basel III, the Value-at-Risk is the queen of measures for market risk, and surely a fundamental measure of risk as far as credit risk and operational risk are concerned.
- In the field of credit risk, the VaR is known as “the C-VaR”, the credit risk VaR.
- Now, even though the VaR is such an important measure in risk management, it has been seriously criticized by experts; it has even been considered one of the causes of the 2008 crisis.
- Why? First of all, we have seen that the VaR is not – generally speaking – a coherent measure of risk.
- Do you remember why? The point is that we can show that the VaR fulfills most of the properties that a coherent measure of risk should have, but not sub-additivity, not in general at least.
- As we will see later in this course, when we speak of the credit risk VaR, after introducing the different models we can use to produce a loss distribution in the case of credit risk, the VaR is a procyclical measure of risk.
- Why do we use the VaR?! If the VaR has so many weaknesses, what is the point in using such a measure?
- If we have a 95% VaR, that means that we essentially look for the threshold loss, according to which 95% of the losses we can expect are smaller, and only 5% of the losses we can expect are greater than that threshold loss.
- When we compute the VaR, that value, the threshold value, is perfectly ok under the loss distribution we are using, but if that loss distribution is not the real loss distribution, here comes the problem.
- Ok? So what is the end of the story about the VaR? The VaR is a fundamental measure of risk that every risk manager must know and must be able to use. For the VaR, as for any measure of risk we may consider, we have to know how to compute it and how it works, but most of all we have to know its weaknesses.
Week 3: The Value-at-Risk > The Sofa > Sofa Session
- The loans issued by a bank are usually on the bank’s book.
- If these loans are securitized and sold off as an investment, then these loans disappear from the bank’s balance sheet.
- This entity is still related to the original bank.
- The third example is the following: imagine I have a bank account with 1000 euros.
- That money, my 1000 euros are liabilities for my bank, and they will appear on the balance sheet of the bank.
- Now, if I transfer that money to a mutual fund, which is sponsored by the bank, that money will still be in the hands of my bank, but will disappear from the balance sheet as a liability.
- If that money is used to buy a stock, that stock will be in the hands of my bank, but it will not be a property of my bank.
- My money will appear again on the balance sheet of the bank, only if I decide to sell the stock and put the money back on the bank account.
- It’s the following: do you see any future threats or risks due to the A-IRB approach, which can cause a higher rate of unexpected losses? My answer is yes, as I already told you last week, in the last Sofa session, when speaking about Basel 4.
- A-IRB approaches are very nice from a mathematical point of view, and banks are very happy to invest money there, because typically these approaches lead to lower capital requirements.
- The problem with these models is that one should be sure enough about their reliability, in order to minimize model risk, that is to say the risk of using a wrong model.
- The uncertainty related to model risk is surely a problem, if you think of the possible unexpected losses coming from a wrong model being used to assess credit risk, market risk, or any other type of risk.
- Rew asks if Basel and other rules have made credit too inaccessible for corporations and consumers.
- For a sector that is so important for the economy, like the credit sector, like the banking sector, I believe that regulations are fundamental, they are essential.
- The main drivers of the credit crunch were the usual ones that you can observe during crises, that is to say irrational fears, the excessive uncertainty and volatility on the markets, and some evident liquidity problems also for banks.
- The third point is the implementation, the full implementation, of the so-called Liquidity Coverage Ratio that will change the way in which banks deal with liquidity, and in particular with what we can call – and what is called – High-Quality Liquid Assets.
- Liquidity risk will be quite a problematic part of Basel III, I believe.
- For the last point, I think that a very important aspect, a very critical aspect of Basel III, when implementing all the regulations, will be operational risk.
- So operational risk will be a very nice field of research for those of you wanting and willing to enter the mathematical modeling of risk.