## [PDS] Probability in Finance – Key Ideas: IV

Having defined random variables in measure-theoretic language, we can now complete the basic set-up by defining familiar things like the ‘expectation’ (or ‘expected value’) and ‘variance’ of a random variable. Expected values are understood as weighted averages – that is, sums or integrals – and in the world of probability it turns out we need a specific kind of integral, called the Lebesgue integral.

…

*Measurable Functions can be Integrated*

Like Riemann integrals, the intuitive way to understand Lebesgue integrals is to think of them as ‘area under a curve’. The way the Lebesgue integral differs from its Riemann counterpart is that it calculates area by dividing along the range of the function. Recall that the Riemann integral works by taking limits of the ‘lower sum’ and the ‘upper sum’, where the lower and upper sums are calculated as the sum of the areas of rectangles formed by considering intervals along the domain (the x-axis). The following pictures, borrowed from Shreve, show the difference:

*[Source: Steven Shreve, Stochastic Calculus for Finance, Vol II, Chapter 1]*

Extending the intuition from the Riemann integral then allows us to write the area under the curve, taking intervals along the range (y-axis), as:

$$\text{Lebesgue sum} = \sum_{k=0}^{n-1} y_k \, \mu(A_k)$$

where $A_k = f^{-1}\big([y_k, y_{k+1})\big) = \{x : y_k \le f(x) < y_{k+1}\}$ for some partition $y_0 < y_1 < \dots < y_n$ of the range. Note that the above Lebesgue sum is defined iff one can talk about $\mu(A_k)$ meaningfully – that is, iff one can ‘measure’ the inverse image of the function $f$.

More formally, then, one can write the area under the curve in the Lebesgue sense iff the inverse image of every such interval under $f$ is measurable – that is, iff $f$ is a measurable function. It is in this sense that Lebesgue integrals are defined.
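To make the range-partition idea concrete, here is a minimal numerical sketch (not from the original text, and only an approximation: the measure of each preimage is estimated by counting points on a fine grid). It forms the Lebesgue sum for $f(x) = x^2$ on $[0, 1]$, whose exact integral is $1/3$.

```python
import numpy as np

# Approximate the Lebesgue integral of f(x) = x^2 on [0, 1] by slicing
# along the *range* of f. The 'measure' of each preimage
# {x : y_lo <= f(x) < y_hi} is estimated by counting points on a fine grid.
f = lambda x: x ** 2

x = np.linspace(0.0, 1.0, 100_001)   # fine grid on the domain
dx = x[1] - x[0]
fx = f(x)

levels = np.linspace(0.0, 1.0, 201)  # partition of the range (y-axis)
lebesgue_sum = 0.0
for y_lo, y_hi in zip(levels[:-1], levels[1:]):
    mu = np.count_nonzero((fx >= y_lo) & (fx < y_hi)) * dx  # measure of the preimage
    lebesgue_sum += y_lo * mu

print(round(lebesgue_sum, 2))  # close to the exact value 1/3 ≈ 0.33
```

Note that the sum never touches the x-axis directly: all it needs is the measure of each preimage, which is exactly the measurability requirement above.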

The need for the Lebesgue integral arises when finding things like ‘expectation’ and ‘variance’. Finding the expectation or expected value involves summing over the values a random variable takes, weighted by their probabilities. Now recall that probability is defined for events in the sample space, but random variables are functions defined on the sample space. So finding this sum is like integrating a function, i.e. the values taken by the random variable (y-axis), over the probabilities (measure) defined on events in the sample space, i.e. the $\sigma$-field (x-axis).

So measurability is a natural requirement when talking about random variables. We can find probabilities (measure) of only those values of the random variable which *can* happen, i.e. those whose inverse images belong to the $\sigma$-field generated by the sample space. Also, note that argued this way it is clear (why?) that there is no obvious way to partition the x-axis à la Riemann (probabilities are defined on events in the sample space, not on an ordered continuum), and the only way one can integrate random variables is by starting on the y-axis (the values taken by the random variable).
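On a finite sample space the Lebesgue integral reduces to exactly this probability-weighted sum over outcomes. A minimal sketch (the die and the payoff are hypothetical examples, not from the text):

```python
from fractions import Fraction

# Sample space: a fair six-sided die. P is the (uniform) probability
# measure on outcomes, and X is a random variable (a payoff that
# depends on the outcome).
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}
X = {w: 10 if w == 6 else 0 for w in omega}  # pays 10 on a six, else 0

# E[X] = integral of X over the sample space with respect to P:
# the values X takes, weighted by the measure of the events producing them
EX = sum(X[w] * P[w] for w in omega)
print(EX)  # 5/3
```

The integral here is just a weighted sum because the $\sigma$-field is finite; the Lebesgue machinery is what lets the same definition carry over to uncountable sample spaces.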

A formal definition of Lebesgue integral is more than what we need at this stage, so with the intuition in place we can now move to defining expected value.

…

*Expected Value as Lebesgue Integrals*

*Expected Value:* Given a random variable $X$ on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, the expected value is defined as:

$$E[X] = \int_{\Omega} X(\omega) \, d\mathbb{P}(\omega)$$

and it can be shown that it is equivalent to our familiar notion:

$$E[X] = \sum_{i} x_i \, \mathbb{P}(X = x_i)$$

and if $X$ is continuous this changes to the familiar formula:

$$E[X] = \int_{-\infty}^{\infty} x \, dF(x) = \int_{-\infty}^{\infty} x \, f(x) \, dx$$

where $F$ is the probability distribution and $f$ is the probability density function associated with the random variable $X$.
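As a quick numerical check (an assumed example, not from the text), the continuous formula can be evaluated for an Exponential(1) random variable, whose density is $f(x) = e^{-x}$ and whose expected value is known to be $1$:

```python
import numpy as np

# E[X] = \int x f(x) dx for X ~ Exponential(1), approximated by a
# simple Riemann sum on [0, 50] (the density is negligible beyond that).
x = np.linspace(0.0, 50.0, 200_001)
dx = x[1] - x[0]
pdf = np.exp(-x)                     # density f(x) = e^{-x} for x >= 0

EX = np.sum(x[:-1] * pdf[:-1]) * dx  # approximates \int_0^inf x e^{-x} dx
print(round(EX, 3))  # 1.0
```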

At this stage a natural question is how we compute Lebesgue integrals in practice. Well, as it turns out, for most ‘nice’ and ‘well-behaved’ functions the value of a Lebesgue integral is the same as that obtained by finding the integral the Riemann way (relieved?). So for most practical purposes nothing needs to change as far as our intuitive notion of expected value is concerned.
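This agreement can be checked directly (an assumed example): computing the same integral both ways, once by partitioning the domain (Riemann) and once by partitioning the range (Lebesgue-style), for $g(x) = \sqrt{x}$ on $[0, 1]$, whose exact integral is $2/3$.

```python
import numpy as np

# Integrate g(x) = sqrt(x) on [0, 1] two ways; the exact value is 2/3.
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]
gx = np.sqrt(x)

# Riemann: rectangles over intervals of the domain (x-axis)
riemann = np.sum(gx[:-1]) * dx

# Lebesgue-style: slices along the range (y-axis), measuring each preimage
levels = np.linspace(0.0, 1.0, 201)
lebesgue = sum(
    y_lo * np.count_nonzero((gx >= y_lo) & (gx < y_hi)) * dx
    for y_lo, y_hi in zip(levels[:-1], levels[1:])
)

print(abs(riemann - lebesgue) < 0.01)  # True: the two answers agree
```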
