Back of the Envelope

Observations on the Theory and Empirics of Mathematical Finance

[PDS] Variance of Increments of Brownian Motion


Proof that Variance of Increments of Brownian Motion is \Delta t

Consider the Brownian Motion X(t) at times t_{j - 1} and t_j. We can interpret the values X(t_{j - 1}) and X(t_j) as limits of a discrete-time Random Walk of n steps, with payoffs scaling as \sqrt{\frac{t_{j - 1}}{n}} and \sqrt{\frac{t_j}{n}} respectively, i.e.:

\begin{aligned} X(t_{j - 1}) &= \lim_{n \to \infty} \big[S_n \big] \Big\rvert_{\mbox{\small Scaling = }\sqrt{\frac{t_{j - 1}}{n}}} \\ X(t_j) &= \lim_{n \to \infty} \big[S_n \big] \Big\rvert_{\mbox{\small Scaling = }\sqrt{\frac{t_j}{n}}} \end{aligned}

where the Variance of the Brownian Motion up to times t_{j - 1} and t_j is respectively given as:

\begin{aligned} Var[X(t_{j -1})] &= \lim_{n \to \infty} Var\big[S_n \big] \Big\rvert_{\mbox{\small Scaling = }\sqrt{\frac{t_{j - 1}}{n}}} \\ \\ &=\lim_{n \to \infty} \Big[n \times \Big(\sqrt{\frac{t_{j - 1}}{n}} \Big)^2 \Big] \\ \\ &= \lim_{n \to \infty} [n \times \frac{t_{j - 1}}{n}] \\ \\&= t_{j - 1}\end{aligned}

\begin{aligned} Var[X(t_{j})] &= \lim_{n \to \infty} Var\big[S_n \big] \Big\rvert_{\mbox{\small Scaling = }\sqrt{\frac{t_{j }}{n}}} \\ \\ &=\lim_{n \to \infty} \Big[n \times \Big(\sqrt{\frac{t_{j }}{n}} \Big)^2 \Big] \\ \\ &= \lim_{n \to \infty} [n \times \frac{t_{j}}{n}] \\ \\&= t_{j}\end{aligned}
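The scaling argument above can be checked numerically. The following is a minimal sketch (not part of the original note, and the parameter values are illustrative): it simulates many paths of the scaled Random Walk S_n with payoff \sqrt{t/n} per step and confirms that the sample variance of the endpoint approaches t.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_walk_endpoint(t, n, paths):
    """Endpoint of a Random Walk of n steps of +/-1, each scaled by sqrt(t/n)."""
    steps = rng.choice([-1.0, 1.0], size=(paths, n))
    return (np.sqrt(t / n) * steps).sum(axis=1)

# Illustrative choice of t_j; each step contributes variance t_j/n,
# so the endpoint variance is n * (t_j/n) = t_j, matching the limit above.
t_j = 0.7
endpoints = scaled_walk_endpoint(t_j, n=1000, paths=200_000)
print(endpoints.var())  # close to t_j = 0.7
```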

The increment in the Brownian Motion \Delta X(t) over time \Delta t = t_j - t_{j - 1} is a difference of the two limits, i.e.:

\begin{aligned} \Delta X(t) &= X(t_j) - X(t_{j - 1}) \\ \\ &= \Big (\lim_{n \to \infty}\big[S_n \big] \Big\rvert_{\sqrt{\frac{t_j}{n}}}\Big) - \Big(\lim_{n \to \infty} \big[S_n \big]\Big\rvert_{\sqrt{\frac{t_{j - 1}}{n}}}\Big)\end{aligned}

We are interested in the Variance of the increment of the Brownian Motion:

\begin{aligned} Var[\Delta X(t)] &= Var[X(t_j) - X(t_{j - 1})] \\ \\&= Var[X(t_j)] + Var[X(t_{j - 1})] - 2 Cov[X(t_j), X(t_{j - 1})] \\ \\&= t_j + t_{j - 1} - 2 Cov[(X(t_{j - 1}) + \Delta X(t)), X(t_{j - 1})] \\ \\ &= t_j + t_{j - 1} - 2 Cov[X(t_{j - 1}), X(t_{j - 1})] - 2 Cov[X(t_{j - 1}), \Delta X(t)] \\ \\&= t_j + t_{j - 1} - 2 Var[X(t_{j - 1})] \\ \\&= t_j + t_{j - 1} - 2 t_{j - 1} \\ \\&= t_j - t_{j - 1} \\ \\ \Rightarrow Var[\Delta X(t)] &= \Delta t\end{aligned}
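The conclusion Var[\Delta X(t)] = \Delta t can also be verified by simulation. A short sketch (my own illustration, with hypothetical values of t_{j-1} and t_j): build one Random Walk path on a uniform grid over [0, t_j] with step variance dt, read X(t_{j-1}) and X(t_j) off the same path, and compare the sample variance of the increment with \Delta t.

```python
import numpy as np

rng = np.random.default_rng(1)

t_prev, t_j = 0.4, 0.8          # illustrative times; Delta t = 0.4
n, paths = 2000, 100_000
dt = t_j / n                    # each +/- step carries variance dt

steps = np.sqrt(dt) * rng.choice([-1.0, 1.0], size=(paths, n))
path = steps.cumsum(axis=1)     # running sum = walk value on the grid

k = int(n * t_prev / t_j)       # grid index corresponding to t_{j-1}
increment = path[:, -1] - path[:, k - 1]
print(increment.var())          # close to Delta t = 0.4
```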

where in the above steps we have used the following:

\begin{aligned} X(t_j) &= X(t_{j - 1})+ \Delta X(t) \hspace{2pc} \mbox{[Step 3]} \\ \\Cov[X(t_j), X(t_{j - 1})] &= Cov[(X(t_{j - 1}) + \Delta X(t)), X(t_{j - 1})] \\ \\&= Cov[X(t_{j - 1}), X(t_{j - 1})] + Cov[X(t_{j - 1}), \Delta X(t)] \\ \\Cov[X(t_{j - 1}), \Delta X(t)] &= 0 \hspace{9pc} \mbox{[Step 5]} \end{aligned}


NB: Cov[X(t_{j - 1}), \Delta X(t)] = 0 in Step 5 holds because, by construction, the Random Walk from time t_{j - 1} to time t_j is independent of the Random Walk till time t_{j - 1}.
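This independence can be seen numerically as well. In the sketch below (again an illustration with assumed parameter values), the steps up to t_{j-1} and the steps after it are disjoint draws, so the sample covariance of X(t_{j-1}) and \Delta X(t) comes out close to zero.

```python
import numpy as np

rng = np.random.default_rng(2)

t_prev, t_j = 0.4, 0.8          # illustrative times
n, paths = 2000, 100_000
dt = t_j / n
steps = np.sqrt(dt) * rng.choice([-1.0, 1.0], size=(paths, n))
path = steps.cumsum(axis=1)

k = int(n * t_prev / t_j)
x_prev = path[:, k - 1]          # X(t_{j-1}): built from the first k steps
increment = path[:, -1] - x_prev # Delta X(t): built from the remaining steps
cov = np.cov(x_prev, increment)[0, 1]
print(cov)                       # close to 0
```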


Written by Vineet

January 17, 2013 at 2:11 am

Posted in Teaching: PDS
