Today at work we were discussing the concept of Expected Shortfall as a risk measure and how it is computed. Expected Shortfall (ES), in the context of financial risk, is similar to Value at Risk (VaR) except that one takes the average over a range of tail values rather than a single percentile, as is done for VaR. This makes the risk measure more robust, since it takes into account every value in the tail of the distribution, whereas VaR essentially ignores those values.

In most cases a computer calculates these values (usually around ten thousand scenarios for a daily VaR), so we have a finite set of values from which to take either a percentile or an average.

But what if we have an infinite number of values?

In the case of infinitely many numbers, the average becomes a (Riemann) integral. See my previous article on this concept here. Hence, the expected shortfall of a range of values becomes an integral. Since the tail can be viewed as a continuum of VaR numbers, one at each percentile \gamma (each separated by an infinitesimal change in percentile), we can say that the expected shortfall is the integral of \textup{VaR}_{\gamma} over the range 0 < \gamma < \alpha with respect to \gamma, which is the standard definition of an integral:

\displaystyle \textup{ES}_{\alpha} := \frac{1}{\alpha} \int_0^{\alpha}\textup{VaR}_{\gamma}(X)\textup{d}\gamma \approx \frac{1}{n}\sum_{i=1}^n \textup{VaR}_{\gamma_i}(X)
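As a concrete sketch of the discrete approximation on the right, here is how the two equivalent computations look in Python on a simulated P&L sample; the normal distribution, the sample size and the percentile grid are all illustrative choices, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical P&L sample: 10,000 simulated daily returns (losses negative).
pnl = rng.normal(loc=0.0, scale=1.0, size=10_000)

alpha = 0.05  # tail probability level

# VaR at level alpha: the negated alpha-quantile of the P&L sample.
var = -np.quantile(pnl, alpha)

# ES as the discrete analogue of (1/alpha) * integral of VaR_gamma d(gamma):
# average VaR over a fine grid of percentiles gamma in (0, alpha].
gammas = np.linspace(alpha / 1000, alpha, 1000)
es_from_vars = np.mean(-np.quantile(pnl, gammas))

# Equivalent "tail average" form: mean loss beyond the VaR threshold.
es_tail_mean = -pnl[pnl <= -var].mean()
```

The two estimates agree up to discretisation and sampling error, and both exceed the VaR itself, as they must.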

Here the Value-at-Risk is defined as

\displaystyle \textup{VaR}_{\alpha}(X) := -\textup{inf}\{ x \in \mathbb{R}\,:\, F_X(x) > \alpha \} = F_{-X}^{-1}(1-\alpha)

which in words says:

The \alpha -level VaR is defined as the negative of the smallest real value x such that the probability of exceeding x is less than (1-\alpha).

Alternatively, we could define the distribution of losses as Y := -X, and then VaR is just the (1-\alpha)-quantile of the loss distribution Y.
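The equivalence of the two formulations is easy to verify empirically; in the sketch below the Student-t sample and the 1% level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_t(df=5, size=100_000)  # hypothetical fat-tailed P&L sample

alpha = 0.01
var_from_x = -np.quantile(x, alpha)          # -inf{x : F_X(x) > alpha}, empirically
var_from_loss = np.quantile(-x, 1 - alpha)   # (1 - alpha)-quantile of Y = -X

# The two empirical quantities coincide (up to floating-point rounding).
assert abs(var_from_x - var_from_loss) < 1e-9
```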

Where does the Laplace transform fit into this?

Well, we could just define the Laplace transform to be precisely the expectation of X like this:

\displaystyle \mathcal{L}[f](s) := \mathbb{E}\left[e^{-sX}\right]

but this doesn’t explain much.

So let’s go back one step and take a look at the Expected Value a little more closely.

In our case, which is probability theory, the expected value of a discrete random variable X, taking values x_i with probabilities p_i, is defined from first principles as

\displaystyle \mathbb{E}[X] := \sum_{i=1}^k x_i p_i = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k
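In code this is just a weighted sum; the loaded die below is a hypothetical example:

```python
# Hypothetical discrete random variable: a loaded die.
values = [1, 2, 3, 4, 5, 6]
probs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

assert abs(sum(probs) - 1.0) < 1e-12  # the p_i must sum to one

# E[X] = x_1 p_1 + x_2 p_2 + ... + x_k p_k
expectation = sum(x * p for x, p in zip(values, probs))
```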

We’ve already discussed that when we have infinitely many of these terms, i.e. when k\longrightarrow\infty, this summation becomes an integral. However, now that we are discussing random variables and not plain numbers, this integral must be a Lebesgue integral. The reason for this is that the random variable X is a measurable function on some sample space \Omega equipped with a sigma-algebra \mathfrak{F} of events, and those events do not necessarily come to us as real numbers. Therefore we need a probability measure \mathbb{P} to assign a weight to each measurable event in order to evaluate an integral.

Hence, our requirements to construct and evaluate such an integral (expectation) of random variables are a

  • Sample space \Omega ,
  • Sigma-algebra of events \mathfrak{F} , and a
  • Probability measure \mathbb{P}

collectively known as a probability space (\Omega, \mathfrak{F}, \mathbb{P}) .

Once we have this (and provided X admits a probability density f) we can easily define the expectation of a random variable as

\displaystyle \mathbb{E}^{\mathbb{P}}[X] := \int_{\mathbb{R}} xf(x)\textup{d}x

This is very similar to the definition above involving the summation.
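A quick numerical sanity check of the integral form; the Gaussian density, the truncation to a finite interval and the grid size below are all illustrative choices:

```python
import math

mu = 1.0  # hypothetical mean: take X ~ N(1, 1)

def f(x):
    """Density of the normal distribution with mean mu and unit variance."""
    return math.exp(-((x - mu) ** 2) / 2) / math.sqrt(2 * math.pi)

# Riemann-sum approximation of E[X] = ∫ x f(x) dx, using midpoints on a
# wide truncated grid [-9, 11] (the tails beyond this are negligible).
dx = 1e-3
grid = [-9 + (i + 0.5) * dx for i in range(int(20 / dx))]
mean = sum(x * f(x) * dx for x in grid)

assert abs(mean - mu) < 1e-6  # the sum recovers the expectation
```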

Now all we need to do is consider taking the expectation of the exponential of the random variable, i.e. what is \mathbb{E}\left[ e^{-X}\right]?

Well, this is very similar to the moment generating function (MGF) of X, which is defined as

M_X (s) := \mathbb{E}\left[e^{sX} \right],\quad\quad\textup{for all } s \in \mathbb{R} \textup{ for which the expectation exists}

Note the positive exponential.

The key benefit of the MGF is that you can Taylor expand it as

\displaystyle M_X(s) := \mathbb{E}\left[ e^{sX}\right] = 1 + s\mathbb{E}[X] + \frac{s^2}{2!}\mathbb{E}[X^2] + \frac{s^3}{3!}\mathbb{E}[X^3] + \cdots + \frac{s^n}{n!}\mathbb{E}[X^n] + \cdots

from which you can read off the random variable’s moments. However, the drawback is that this expectation may not exist, since the integral of a positive exponential against the density can indeed blow up.
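The moments-from-the-MGF idea can be checked numerically using the known closed form M(s) = e^{s^2/2} for the standard normal, approximating the derivatives at zero by finite differences (the step size h is an arbitrary choice):

```python
import math

# Known closed form for the MGF of a standard normal: M(s) = exp(s^2 / 2).
def M(s):
    return math.exp(s * s / 2)

# The n-th moment is the n-th derivative of M at s = 0; approximate the
# 2nd and 4th derivatives with central finite differences.
h = 1e-2
second = (M(h) - 2 * M(0) + M(-h)) / h**2
fourth = (M(2 * h) - 4 * M(h) + 6 * M(0) - 4 * M(-h) + M(-2 * h)) / h**4

assert abs(second - 1.0) < 1e-3  # E[X^2] = 1
assert abs(fourth - 3.0) < 1e-2  # E[X^4] = 3
```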

Therefore, we change the variable s \longrightarrow -s and now we have a negative (damped) exponential which, for a non-negative random variable X and s \geq 0, is bounded 0 \leq e^{-sx} \leq 1, and so the integral

\displaystyle \int_0^{\infty} e^{-sx} f(x)\textup{d}x

always converges. We define the above integral as the Laplace transform of (the density of) X.

Starting with the basic definition of an expectation, we extended the definition to random variables via the introduction of a Lebesgue integral (which carries with it the requirements of a probability space). The expectation is therefore an integral. We applied an exponential transformation X \longrightarrow e^{sX} so that the expectation can be written as a Taylor expansion, giving us the moments. We then performed another change of variable s \longrightarrow -s to ensure that the integral is bounded and converges. These two transformations, X \longrightarrow e^{-sX}, provide the definition of the Laplace transform:

\displaystyle \mathcal{L}[f](s) := \mathbb{E}^{\mathbb{P}}\left[e^{-sX}\right] = \int_{\mathbb{R}} e^{-sx} f_X(x)\textup{d}x

This definition requires

  • A probability space (\Omega,\mathfrak{F},\mathbb{P}) , particularly because the probability measure \mathbb{P} allows us to form the Lebesgue integral, and
  • A probability density f_X of the random variable to exist.

Therefore, given a random variable X and its associated probability density (PDF) f_X(x) , we always have the Laplace transform of that density, \mathcal{L}[f_X](s) , defined at s . Note that the transform trades the original variable x for the transformed variable s .

Why would we want this?

The power of the Laplace transform in this setting is its ability to produce the cumulative distribution function (CDF) F_X(x) of X . The CDF of a random variable is often much more useful in practical applications but can be difficult to find directly.

The Laplace transform finds the CDF easily in the transformed variable s (for a non-negative random variable, division by s corresponds to integration):

\displaystyle F_X(x) = \mathcal{L}^{-1}\left[ \frac{1}{s} \mathbb{E}^{\mathbb{P}}\left[ e^{-sX} \right] \right](x)
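For a distribution with a known transform, the identity behind this formula (dividing the density's transform by s gives the CDF's transform) can be checked numerically. Below is a sketch for an exponential variable, where the rate, the integration grid and the truncation point are all arbitrary choices:

```python
import math

lam = 2.0  # hypothetical rate: a non-negative variable L ~ Exp(2)

def f(x):
    return lam * math.exp(-lam * x)   # density

def F(x):
    return 1 - math.exp(-lam * x)     # CDF

def laplace(g, s, upper=30.0, dx=1e-3):
    """Crude midpoint-rule Laplace transform: ∫_0^upper e^{-sx} g(x) dx."""
    n = int(upper / dx)
    return sum(math.exp(-s * x) * g(x) * dx
               for x in ((i + 0.5) * dx for i in range(n)))

s = 3.0
Lf = laplace(f, s)   # closed form: lam / (s + lam) = 0.4
LF = laplace(F, s)   # should equal Lf / s, the transform of the CDF

assert abs(Lf - lam / (s + lam)) < 1e-4
assert abs(LF - Lf / s) < 1e-4
```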

Now going back to risk measures (and the topic of this discussion) the expected shortfall is defined as

\displaystyle \textup{ES}_{\alpha}(X) := \mathbb{E}^{\mathbb{P}}[ -X\,:\, X \leq -\textup{VaR}_{\alpha}(X)] = \frac{1}{\alpha}\int_0^{\alpha} \textup{VaR}_{\gamma}(X)\textup{d}\gamma

The right-hand side of this expression can be re-written in terms of the probability density:

\displaystyle \textup{ES}_{\alpha}(X) := -\frac{1}{\alpha} \int_{-\infty}^{-\textup{VaR}_{\alpha}(X)} xf_X(x)\textup{d}x
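As a sketch, for a standard normal P&L this tail integral can be evaluated numerically and compared against the known closed form \textup{ES}_{\alpha} = \varphi(\Phi^{-1}(\alpha))/\alpha (the truncation point and step size below are arbitrary):

```python
from statistics import NormalDist

alpha = 0.05
N = NormalDist()        # standard normal P&L: X ~ N(0, 1)
q = N.inv_cdf(alpha)    # alpha-quantile of X, i.e. -VaR_alpha(X)

# ES via the tail integral -(1/alpha) ∫_{-inf}^{q} x f(x) dx, using a
# midpoint rule with the lower tail truncated at -10 (arbitrary cutoff).
dx = 1e-4
lo = -10.0
n = int((q - lo) / dx)
tail = sum(x * N.pdf(x) * dx
           for x in (lo + (i + 0.5) * dx for i in range(n)))
es_numeric = -tail / alpha

# Known closed form for the standard normal: ES_alpha = pdf(q) / alpha.
es_exact = N.pdf(q) / alpha

assert abs(es_numeric - es_exact) < 1e-3
assert es_numeric > -q  # ES exceeds VaR, as a tail average must
```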

Practical Application

Often one has a portfolio loss function L , usually defined as the product of some exposure and the probability of a loss occurring, but this detail is not important. The loss function also has a probability distribution (which can be approximated numerically using Monte Carlo methods, e.g. by simulating losses over and over again until a distribution is built up). Given this loss distribution one immediately defines the Value at Risk as

\displaystyle \textup{VaR}_{\alpha} := \textup{inf}\left\{ \textup{loss} \in \mathbb{R} \,:\, F(\textup{loss}) \geq \alpha \right\}
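A minimal sketch of this definition applied to a Monte Carlo sample; the lognormal losses and the 99% level are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical Monte Carlo loss sample (positive numbers = losses).
losses = rng.lognormal(mean=0.0, sigma=0.5, size=50_000)

alpha = 0.99
sorted_losses = np.sort(losses)
n = len(sorted_losses)

# Empirical CDF: F(x) = (# samples <= x) / n.  VaR_alpha is then the
# smallest simulated loss whose empirical CDF is at least alpha.
ecdf = np.arange(1, n + 1) / n
var_alpha = sorted_losses[np.searchsorted(ecdf, alpha)]

# Sanity check against the definition inf{loss : F(loss) >= alpha}.
assert (losses <= var_alpha).mean() >= alpha
```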

Further, the loss variable L also has a cumulative distribution function (CDF) F which we would very much like to know, mostly because it is required in the definition of Value-at-Risk (see the right-hand side of the equation above).

Oftentimes the loss distribution is discrete and only takes on a finite number of values (usually as a result of a Monte Carlo simulation) and so the CDF (which we want) is often an ugly discontinuous step function.

The CDF can be approximated via the Laplace transform of the loss density:

\displaystyle \mathbb{E}^{\mathbb{P}}\left[ e^{-sL} \right] = \int_0^{\infty} e^{-sx} f_L(x) \textup{d}x = \mathcal{L}[f_L](s)
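In practice \mathbb{E}[e^{-sL}] can be estimated directly from the Monte Carlo sample as a simple average. A sketch, using exponential losses so that the answer is known in closed form (the rate, the point s and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 2.0  # hypothetical rate: losses L ~ Exp(2), chosen because the
           # Laplace transform has the closed form E[e^{-sL}] = lam / (s + lam)
losses = rng.exponential(scale=1 / lam, size=200_000)

s = 3.0
empirical_lt = np.mean(np.exp(-s * losses))  # Monte Carlo estimate of E[e^{-sL}]
exact_lt = lam / (s + lam)                   # = 0.4

assert abs(empirical_lt - exact_lt) < 1e-2
```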

Summary

Thus, knowledge of the loss density can yield knowledge of the CDF via a Laplace transform. The CDF gives us all the Value-at-Risk numbers that we need which, in turn, provide us with the Expected Shortfall number.

Expected shortfall requires calculation of VaR at all loss thresholds. VaR requires knowledge of the cumulative distribution function of the loss density. One obtains the CDF by taking the Laplace transform of the loss density.
