Back of the Envelope

Observations on the Theory and Empirics of Mathematical Finance

[PDS] Probability in Finance – Key Ideas: III


When we do elementary probability, one of the most common set-ups is the coin-toss game with the outcomes being \{H\} or \{T\}. While it remains one of the most useful thought experiments for thinking systematically about chance, with the outcomes left as the abstract symbols “\{H\}” and “\{T\}“, there is not much one can do with them.

For example, if one were to toss the coin many times, it would be good to get a sense of the “expected outcome” and the “variation” in outcomes from the coin-toss game. But with an abstract sample space such as \{H\} and \{T\}, it is not possible to do so.

From elementary probability, however, we also know how to get around that. The way is to assign the abstract outcomes \{H\} and \{T\} some numbers. Say, whenever \{H\} comes up, assign the number +1 to it, and whenever \{T\} comes up, assign the number -1 to it. This way, because the abstract outcomes have been converted to numbers, one can now do math with them and find things like the “expectation” and “variance” of outcomes. And these are useful things to have, as they help summarize more complex experiments/models.
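To make this concrete, here is a minimal sketch in Python (the fair-coin probabilities of 1/2 are an assumption made purely for illustration): once the abstract outcomes are mapped to numbers, the “expectation” and “variance” are just probability-weighted sums.

# A minimal sketch: the coin-toss game with H -> +1 and T -> -1
outcomes = {"H": 1, "T": -1}   # the assignment of numbers to abstract outcomes
prob = {"H": 0.5, "T": 0.5}    # assumed fair coin, for illustration only

expectation = sum(prob[w] * outcomes[w] for w in outcomes)
variance = sum(prob[w] * (outcomes[w] - expectation) ** 2 for w in outcomes)

print(expectation)   # 0.0
print(variance)      # 1.0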

Mathematically, one can think of assigning numbers to abstract outcomes as “carrying out a function” – that of mapping abstract outcomes to “real” numbers. It turns out there is a name for this kind of “function”: mathematicians call it a random variable. (Yes, “variable” is perhaps not the best word for “carrying out a function”, but that’s how it is for historical reasons, and we have to live with it.)

In the world of Lebesgue measure that we have been considering, it turns out that random variables in probability are just an example of what are called measurable functions.

Random Variables

If the sample space is known, knowing all possible random variables defined on it (i.e. all the numbers one could associate with the outcomes of the experiment) is equivalent to knowing the \sigma-field. Knowing just one random variable, however, is not: while assigning numbers to outcomes of experiments is useful, knowing a single random variable associated with an experiment is not the same as knowing the \sigma-field. Consider the following examples.

Example 1 (Coin-toss): The associated sample space and \sigma-field are respectively \Omega_1 = \{H, T\} and \mathbb{F}_1 = \{\Omega_1, \varnothing, \{H\}, \{T\}\}. Let the random variable X_1 assign numbers to outcomes of a single coin-toss game such that X_1(H) = 1 and X_1(T) = -1. Knowing the value of the random variable X_1 in this case is enough to tell us everything about the underlying game.

Example 2 (Die-toss): The associated sample space is \Omega_2 = \{1, 2, 3, 4, 5, 6\} and the associated \sigma-field is \mathbb{F}_2 = \{\Omega_2, \varnothing, \{1\}, \{2\}, \{1, 2\}, \cdots, \{1, 3, 5\}, \{2, 4, 6\}, \cdots \}. Let the random variable X_2 assign numbers to outcomes of the die toss such that if the outcome is odd-numbered the random variable assigns the value 1 to it, and -1 otherwise. That is, the random variable X_2 is such that X_2(\{1, 3, 5\}) = 1 and X_2(\{2, 4, 6\}) = -1. Clearly, knowing the value of the random variable X_2 in this case is simply not enough to tell us about the underlying experiment, because there is no way to distinguish between, for example, the outcomes \{1\} and \{3\}. The random variable is just too “coarse”.

Not only that, the random variables X_1 and X_2 are indistinguishable from each other: if only the values of the random variables are reported, there is no way to know whether the underlying experiment is a coin-toss game or a die-toss game.
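A small sketch of the two examples makes the point explicit: if all that is reported is the values of the random variables, X_1 and X_2 look identical, and X_2 cannot tell apart outcomes it maps to the same number.

# Illustrative sketch of the two examples above
X1 = {"H": 1, "T": -1}                                             # coin-toss variable
X2 = {w: (1 if w % 2 == 1 else -1) for w in (1, 2, 3, 4, 5, 6)}    # die-toss variable

# both random variables report exactly the same set of values...
print(set(X1.values()))          # {1, -1} (display order may vary)
print(set(X2.values()))          # {1, -1}

# ...and X2 cannot distinguish outcomes mapped to the same number
print(X2[1] == X2[3] == X2[5])   # True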

That said, in both examples one thing is clear – values of the random variables must correspond to some elements of the \sigma-field. This is the idea behind “measurability”: that random variable values must correspond to “something that can happen” (English-speak for members of the \sigma-field).

Now we are ready to introduce the idea of random variables and measurability more formally.

Random Variables as Lebesgue-Measurable Functions

Measurable Functions: Definition

Given a measurable set A \subseteq \mathbb{R}, a function f:A \rightarrow \mathbb{R} is said to be measurable if for any interval I \subseteq \mathbb{R}:

f^{-1}(I) = \{x \in A: f(x) \in I\} \in \mathbb{M}

That is, a function is measurable if its inverse images belong to the collection \mathbb{M} of Lebesgue-measurable subsets of \mathbb{R}. Put simply, a measurable (“nice”) function is one which is obtained from measurable (“nice”) sets.

In probability, instead of \mathbb{M}, as mentioned earlier, we typically encounter Borel-measurable sets. So if f^{-1}(I) \in \mathbb{B} for every interval I, we call f Borel-measurable, or simply a Borel function.
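As a quick illustration (a standard example, not from the original post), take the indicator function f = \mathbf{1}_A of a Lebesgue-measurable set A \subseteq \mathbb{R}, i.e. f(x) = 1 if x \in A and f(x) = 0 otherwise. For any interval I, the inverse image f^{-1}(I) is \mathbb{R} if I contains both 0 and 1, A if it contains only 1, \mathbb{R} \setminus A if it contains only 0, and \varnothing if it contains neither – each of which belongs to \mathbb{M}. So f is measurable; and if A happens to be a Borel set, the same argument makes f a Borel function.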

Random Variables: Definition

Given a probability space \big(\Omega, \mathbb{F}, \mathbb{P} \big), a random variable is a function X:\Omega \rightarrow \mathbb{R} such that for any interval I \subseteq \mathbb{R}:

X^{-1}(I) = \{\omega \in \Omega: X(\omega) \in I\} \in \mathbb{F}

What this says is that values of a random variable are obtained from sets that belong to the \sigma-field \mathbb{F} – or alternatively, values of a random variable are obtained by assigning numbers to “all possible things that may happen in a game” (English-speak for subsets/elements of the \sigma-field \mathbb{F}). Of course, this is simply the act of assigning numbers to abstract outcomes as in the examples above; the definition formalizes that notion.

When there is no confusion about the underlying \sigma-field \mathbb{F}, \mathbb{F}-measurable functions are often simply referred to as measurable.
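On a finite sample space the definition can be checked by brute force. The sketch below is only a toy illustration (not the full Lebesgue machinery): it takes the coarse \sigma-field \{\Omega_2, \varnothing, \{1, 3, 5\}, \{2, 4, 6\}\} as the field to check against, and shows that X_2 is measurable with respect to it while the “face value” map is not (since, for example, \{1\} is not in that field). For a random variable with finitely many values, checking preimages of all subsets of its range stands in for checking preimages of all intervals.

from itertools import combinations

omega = frozenset({1, 2, 3, 4, 5, 6})
coarse_field = {omega, frozenset(), frozenset({1, 3, 5}), frozenset({2, 4, 6})}

def preimage(X, values):
    # all outcomes that X maps into the given set of values
    return frozenset(w for w in omega if X(w) in values)

def is_measurable(X, field):
    # for a finite range, checking preimages of all subsets of the range
    # is enough (it stands in for checking preimages of all intervals I)
    rng = sorted({X(w) for w in omega})
    subsets = [set(c) for r in range(len(rng) + 1) for c in combinations(rng, r)]
    return all(preimage(X, B) in field for B in subsets)

def X2(w):
    return 1 if w % 2 == 1 else -1   # the coarse odd/even variable

def face(w):
    return w                         # the face value of the die

print(is_measurable(X2, coarse_field))     # True
print(is_measurable(face, coarse_field))   # False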

\sigma-field Generated by Random Variables

The fact that random variables can often be “coarse” (like assigning numbers to outcomes of the die toss only according to whether they are odd or even) gives rise to the notion of the \sigma-field associated with a random variable.

The \sigma-field associated with a random variable is the collection of subsets that can be identified by the random variable. So for the random variable X_2 described above, the \sigma-field generated by X_2 would be \sigma(X_2) = \{\Omega, \varnothing, \{1, 3, 5\}, \{2, 4, 6\}\} – that is, the random variable can only identify outcomes up to whether they are odd or even.

We can make this idea more formal by technically defining the notion of \sigma-field generated by a random variable.

\sigma-field Generated by a Random Variable: Definition

Given a probability space (\Omega, \mathbb{F}, \mathbb{P}) and a random variable X:\Omega \rightarrow \mathbb{R}, the family of sets

\sigma(X) = X^{-1}(\mathbb{B}) = \{X^{-1}(B): B \in \mathbb{B}\} \subseteq \mathbb{F}

is a \sigma-field, called the \sigma-field generated by X; here \mathbb{B} is the Borel field on \mathbb{R}.

In English-speak, the \sigma-field generated by a random variable X is the smallest \sigma-field contained in \mathbb{F} that describes the random variable – i.e. the smallest collection of events with respect to which X is measurable.
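For a random variable taking finitely many values, the generated \sigma-field can be listed by brute force – simply collect the preimages of all subsets of its range. A toy sketch (illustrative only), reproducing \sigma(X_2) from the die-toss example:

from itertools import combinations

omega = (1, 2, 3, 4, 5, 6)

def X2(w):
    return 1 if w % 2 == 1 else -1

def generated_sigma_field(X, omega):
    # collect X^{-1}(B) over all subsets B of the (finite) range of X;
    # for a finite range these preimages already form a sigma-field
    rng = sorted({X(w) for w in omega})
    subsets = (set(c) for r in range(len(rng) + 1) for c in combinations(rng, r))
    return {frozenset(w for w in omega if X(w) in B) for B in subsets}

for event in sorted(generated_sigma_field(X2, omega), key=len):
    print(sorted(event))
# prints [], then [1, 3, 5] and [2, 4, 6] (in some order), then [1, 2, 3, 4, 5, 6]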

The last piece of formalization we need now is to describe systematically the probabilities associated with different values of the random variable.

Probability Distribution

The probability distribution of a random variable is the probability assigned to elements of the \sigma-field generated by the random variable. (Remember that probabilities are assigned to events and not directly to random variables. So the probability of a random variable taking some value, or lying in a certain interval, must correspond to some event in the \sigma-field.)

Consider the random variable X_2 in our example above. The \sigma-field generated by it is \sigma(X_2) = \{\Omega, \varnothing, \{1, 3, 5\}, \{2, 4, 6\}\}, and the probabilities associated with its elements are \mathbb{P}(\{1, 3, 5\}) = \mathbb{P}(X_2^{-1}(1)), i.e. the probability that the random variable takes the value 1, \mathbb{P}(\{2, 4, 6\}) = \mathbb{P}(X_2^{-1}(-1)), \mathbb{P}(\Omega) = \mathbb{P}(X_2^{-1}(1) \cup X_2^{-1}(-1)) and \mathbb{P}(\varnothing) = \mathbb{P}(X_2^{-1}(1) \cap X_2^{-1}(-1)).

It turns out one can summarize the distribution of these probabilities associated with different values taken by the random variable simply as:

\boxed{\mathbb{P}_X(B) = \mathbb{P}(X^{-1}(B))}

where B is any member of the Borel field \mathbb{B}.

For the random variable X_2, we can then use this concise definition to again write the distribution of probabilities associated with the values taken by the random variable as:

  • If the Borel set B contains both 1 and -1: \mathbb{P}(X_2^{-1}(B)) = \mathbb{P}(\Omega)
  • If the Borel set B contains neither 1 nor -1: \mathbb{P}(X_2^{-1}(B)) = \mathbb{P}(\varnothing)
  • If the Borel set B contains only 1 but not -1: \mathbb{P}(X_2^{-1}(B)) = \mathbb{P}(\{1, 3, 5\})
  • If the Borel set B contains only -1 but not 1: \mathbb{P}(X_2^{-1}(B)) = \mathbb{P}(\{2, 4, 6\})

which is what we argued intuitively.
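The four cases above can also be reproduced mechanically. In the sketch below (the fair-die probabilities of 1/6 are an assumption made for illustration), a Borel set B is represented simply by which of the values 1 and -1 it contains, since nothing else matters for X_2:

from fractions import Fraction

omega = (1, 2, 3, 4, 5, 6)
prob = {w: Fraction(1, 6) for w in omega}   # assumed fair die, for illustration

def X2(w):
    return 1 if w % 2 == 1 else -1

def law(values_in_B):
    # P_X(B) = P(X^{-1}(B)); B is represented by the values of X2 it contains
    return sum(prob[w] for w in omega if X2(w) in values_in_B)

print(law({1, -1}))   # 1    -> P(Omega)
print(law(set()))     # 0    -> P(empty set)
print(law({1}))       # 1/2  -> P({1, 3, 5})
print(law({-1}))      # 1/2  -> P({2, 4, 6})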

[PS: Definitions above taken from Capinski and Kopp]


Written by Vineet

February 17, 2013 at 1:51 am
