Back of the Envelope

Observations on the Theory and Empirics of Mathematical Finance

[PGP-I FM] Expected Utility and Valuation of Risky Payoffs


Having established that expected utility is the quantity to consider when valuing risky gambles, we are ready to start thinking about how to value risky payoffs.

The easiest way to operationalize risk is to break up future possibilities into ‘things going up’ (optimistic) and ‘things going down’ (pessimistic).

Consider a bet that offers X_H if a coin toss turns up ‘heads’ (H), and X_T if it turns up ‘tails’ (T). Since the payoffs are arbitrary, we can assume X_H > X_T without loss of generality. Assuming the coin is unbiased, the expected value of the payoff X is:

\begin{aligned} \displaystyle E[X] &= \frac{1}{2}X_H + \frac{1}{2}X_T \\&= \frac{X_H + X_T}{2} \end{aligned}

Daniel Bernoulli, however, told us that the way to ‘value’ this payoff is not to look at E[X], but at the expected utility of this payoff, i.e. E[U(X)]. This is easily calculated as:

\begin{aligned} \displaystyle E[U(X)] &= \frac{1}{2}U(X_H) + \frac{1}{2}U(X_T) \\&= \frac{U(X_H) + U(X_T)}{2} \end{aligned}

Now consider: what will be the utility of a payoff that is quantitatively equal to E[X], but is instead a sure thing? Well, that’s simple. Given our utility function U, the utility of the sure amount E[X] is just U(E[X]).

So we have two quantities, E[U(X)] and U(E[X]). Let’s compare the two. We first do it graphically, and in a separate post later we’ll do it mathematically. (Why do it mathematically too? Well, the maths removes any confusion that may arise when drawing lines on a graph, and it is good to know how to think about it mathematically.)

Graphically: E[U(X)] vs U(E[X])


Bernoulli told us that when valuing gambles we should consider the expected utility E[U(X)] of the risky payoff, and not the expected payoff E[X]. The utility of the sure amount (quantitatively speaking) E[X] is U(E[X]) which, as is apparent from the graph (and as we also show mathematically later), is greater than the expected utility E[U(X)].

Further, it is clear that the utility of some sure payoff X^* < E[X] is the same as the expected utility of the gamble – both are equal to U(X^*).

To reiterate, the two utilities being quantitatively the same means that people might as well take a sure X^* rather than indulge in a gamble that exposes them to the risk of ending up with only X_T. Diminishing marginal utility implies that people worry more about the potential loss X^* - X_T than they are excited by the potential gain X_H - X^*. That is, concavity implies risk aversion.
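As a quick numeric check (the payoffs are made up, and the log utility is borrowed from the St. Petersburg discussion), a short script confirms that E[U(X)] < U(E[X]) and that the certainty equivalent X^* lies below E[X]:

```python
import math

# A minimal sketch with hypothetical payoffs: a 50-50 gamble paying
# X_H = 400 or X_T = 100, valued under the concave utility U(X) = ln(X).
X_H, X_T = 400.0, 100.0

expected_payoff = 0.5 * X_H + 0.5 * X_T                       # E[X]
expected_utility = 0.5 * math.log(X_H) + 0.5 * math.log(X_T)  # E[U(X)]
utility_of_expectation = math.log(expected_payoff)            # U(E[X])

# Concavity (Jensen's inequality): E[U(X)] < U(E[X])
assert expected_utility < utility_of_expectation

# The certainty equivalent X* solves U(X*) = E[U(X)], so X* = exp(E[U(X)])
X_star = math.exp(expected_utility)
print(expected_payoff, round(X_star, 2))   # 250.0 200.0
```

With log utility the certainty equivalent works out to the geometric mean of the payoffs, which is necessarily below the arithmetic mean E[X].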

This is a powerful result and forms the foundation for most of the classical finance theory.

The utility sacrificed by taking up the sure X^* instead of the sure amount E[X], i.e. U(E[X]) - U(X^*), corresponds to the risk premium (and we’ll have a chance to talk more about this later). An important consequence of the existence of the risk premium is that people value risky gambles at less than their expected values. The utility of the sure thing X^*, U(X^*), is the same as the expected utility of the risky gamble, E[U(X)] – i.e. the risky gamble is valued at X^* and not at E[X].

We saw in the St. Petersburg Paradox that the expected utility rule tells us that the bet posed by Nicolaus Bernoulli is worth only 4 bucks – much less than its expected value. Now we can claim that concavity / diminishing marginal utility / risk aversion (all mean the same thing) implies that risky payoffs are worth less than their expected values.

This means that the PV of a risky gamble would be less than the PV of a sure thing with the same expected value. So if a rate r applies to the risk-free payoff X^*, then for a risky payoff X with expected value E[X] we’ll have:

\displaystyle PV(X) < \frac{E[X]}{1 + r}

Alternatively,

\displaystyle PV(X) = \frac{E[X]}{1 + r^*}

with r^* > r.

Note that we have not defined r^* yet (we’ll do that later in the course). We are just saying that r^* corresponds to the distance E[X] - X^* (the risk premium): in general, the larger the distance E[X] - X^*, i.e. the larger the risk premium, the higher the r^*.

A corollary to this result is that since the utility from X^*, U(X^*), and the expected utility E[U(X)] are the same, the following will hold:

\displaystyle \frac{X^*}{1 + r} = \frac{E[X]}{1 + r^*}

That is, we can now compare risky and risk-free payoffs in PV terms as long as we use the right discount rates! This means we now have a way to evaluate and compare all kinds of investments – not just the ones with risk-free cash flows, but also those with risky cash flows. Now we are talking!
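The equality of the two PVs can be sketched numerically. All the numbers below (X^*, E[X], r) are hypothetical; r^* is simply backed out of the identity X^*/(1 + r) = E[X]/(1 + r^*):

```python
# Hedged sketch with made-up numbers: back out the risky discount rate r*
# from the identity X*/(1 + r) = E[X]/(1 + r*).
r = 0.05            # risk-free rate (assumed)
E_X = 250.0         # expected risky payoff (assumed)
X_star = 200.0      # certainty equivalent of the gamble (assumed)

PV = X_star / (1 + r)     # PV of the sure thing = PV of the risky gamble
r_star = E_X / PV - 1     # implied risky discount rate

assert r_star > r         # risky payoffs are discounted at a higher rate
print(round(PV, 2), round(r_star, 4))   # 190.48 0.3125
```

The wider the gap between E[X] and X^*, the larger the implied r^* – exactly the risk-premium intuition above.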

We are now ready to state our final results:

  1. Risky payoffs are discounted at a rate higher than the risk-free rate. This rate is called the opportunity cost of capital.
  2. We can compare all investments by the PV rule as long as we choose the right discount rate commensurate with the risk of the investments.

Please note that this says nothing about what any individual would choose. You may well choose the gamble X and I may choose the sure thing X^*. This is not about our individual choices. Go back to the PV rule, which said that we can evaluate all investments in PV terms. Our results on valuation of risky payoffs – a consequence of the concavity of utility curves – tell us that we can still use the PV rule as long as we discount our risky payoff at the right discount rate (the opportunity cost of capital).

Written by Vineet

August 20, 2016 at 12:05 pm

[PGP-I FM] St. Petersburg Paradox and Expected Utility


You may read in detail about the St. Petersburg Paradox and its history here, but here is the problem Nicolaus Bernoulli posed, which Daniel Bernoulli set out to solve in his book:

St. Petersburg Paradox

Consider the following coin-toss game. You toss a coin, and if you get ‘heads’ (H), you win 2 bucks and the game ends. If you get ‘tails’ (T), however, you keep playing till you get an H, at which point the game ends. Your payoff if H first turns up on the nth toss is 2^n. Nicolaus Bernoulli asked the following question – how much should you pay to play this game?

The first step towards answering this question is to ask what do I expect to win from this gamble?

The probability of winning 2 bucks is 1/2 (the probability of H on the first toss), that of winning 4 bucks is 1/4 (the probability that the first toss turns up T and the second H), and, in general, that of winning 2^n bucks is 1/2^n.

The expected value, then, of the winnings, say X, from this gamble is:

\displaystyle \begin{aligned} E[X] &= \frac{1}{2} 2 + \frac{1}{4} 2^2 + \frac{1}{8} 2^3 + ... + \frac{1}{2^n} 2^n + ... \\& = 1 + 1 + 1 + 1 + ... \\& \to \infty \end{aligned}

That is, if one were to rely only on the math, the answer would be: an infinitely large amount. As long as there is a positive probability of H turning up only after a very, very long time (n \to \infty) – that is, a positive probability of winning 2^n for every n – the math tells us that the expected value is infinite.

However, if you were to ask real people, as we did, it turns out this isn’t quite how they behave. In fact, when Daniel Bernoulli asked his friends this question, no one seemed willing to pay more than a few ducats. So what is going on here? People aren’t willing to pay even a fraction of what the bet promises, and yet the expected value E[X] \to \infty. This is the St. Petersburg Paradox.

Daniel Bernoulli resolved this paradox by saying, and I quote:

The determination of the value of an item must not be based on the price, but rather on the utility it yields…. There is no doubt that a gain of one thousand ducats is more significant to the pauper than to a rich man though both gain the same amount.

Daniel understood, as far back as then, our familiar concave utility functions. And thus he argued that we should not be looking at the payoffs per se, but at what they offer us – their utility. So the quantity to consider is not E[X], but E[U(X)]. That is, he said we should look at:

\displaystyle E[U(X)]= \frac{1}{2} U(2) + \frac{1}{4} U(2^2) + \frac{1}{8} U(2^3) + ... + \frac{1}{2^n} U(2^n) + ...

So far so good. But we can’t do much with a quantity written in terms of an abstract utility function. It’s hard to quantify the expected utility if we just leave it like that. More importantly, it still doesn’t answer the original question – how much should one pay to play this gamble?

Daniel answered this problem by giving a functional form to the utility curve. He understood that it should be concave, and he obviously also understood its familiar properties – i.e. a pauper values a little bit of money more than a rich man (diminishing marginal utility).

He posited that a reasonable functional form for a well-behaved concave utility function is U(X) = ln(X): the utility of a consumption amount X is the natural logarithm of that amount. You may check that it satisfies the properties of a reasonable utility curve (again, diminishing marginal utility).

And what do we get when we put ln(X) in our expected utility equation:

\displaystyle \begin{aligned} E[U(X)] &= \frac{1}{2} U(2) + \frac{1}{4} U(2^2) + \frac{1}{8} U(2^3) + ... + \frac{1}{2^n} U(2^n) + ... \\& = \frac{1}{2} ln(2) + \frac{1}{4} ln(2^2) + \frac{1}{8} ln(2^3) + ... + \frac{1}{2^n} ln(2^n) + ... \\& = \frac{1}{2} ln(2) + \frac{1}{4} (2 ln(2)) + \frac{1}{2^3} (3 ln(2)) + ... + \frac{1}{2^n} (n \, ln(2)) + ... \\ & = \big (\frac{1}{2} + \frac{2}{4} + \frac{3}{2^3} + \frac{4}{2^4} + \frac{5}{2^5} + ... \big) ln 2 \\&= \big( \sum_{n=1}^{\infty} \frac{n}{2^n} \big) ln 2 \end{aligned}

where we have used the result ln(x^n) = nln(x).

The term \displaystyle \sum_{n=1}^{\infty} \frac{n}{2^n} is an arithmetico-geometric series which sums to 2 (WolframAlpha confirms). The expected utility expression then simplifies to:

E[U(X)] = 2ln2 = ln4

That is, if expected utility is a good measure of the value of something, one would be willing to pay 4 bucks – the sure amount whose utility is ln 4. And that turns out to be close to the average price people are willing to pay to play such a gamble (assuming you are not a Bill Gates, that is). So our textbook choice of a concave utility function is perhaps not that bad a choice after all.
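The two sums above can be checked numerically. Truncating at a large n is an assumption of this sketch (the tail of the utility sum is negligible):

```python
import math

# Sketch: truncate the St. Petersburg sums at a large N to confirm that
# E[X] grows without bound while E[ln X] converges to 2 ln 2 = ln 4.
N = 200
expected_value = sum((0.5 ** n) * (2 ** n) for n in range(1, N + 1))        # = N
expected_utility = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, N + 1))

assert expected_value == N                         # one per term: diverges with N
assert abs(expected_utility - math.log(4)) < 1e-9  # converges to ln 4

# Certainty equivalent: the sure amount with the same utility
print(round(math.exp(expected_utility), 4))   # 4.0
```

Each term of the expected-value sum contributes exactly 1, so the truncated sum equals N and diverges as N grows – the paradox in one line.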

The take-away:

When valuing risky gambles, think in terms of Expected Utility and not Expected Payoff.

Suggested Readings

Daniel Bernoulli’s original ‘book’ (it’s only 15 pages and fun to read): Exposition of a New Theory on the Measurement of Risk. You can find it here.

Gregory Zuckerman, The Greatest Trade Ever: The Behind-the-Scenes Story of How John Paulson Defied Wall Street and Made Financial History. Crown Business, 2009. [John Paulson bet ‘against the market’ – i.e. took a very risky gamble (John probably would disagree) – when the sub-prime boom was at its peak. He made tons of money when the bubble finally burst!]

Written by Vineet

August 20, 2016 at 11:57 am

[PGP-I FM] Time Value of Money


Chapter 2 of your book has a decent enough description of the mechanics of the time value of money, so I won’t spend too much time motivating and explaining it in this post. If you understand the Present Value Rule, then finding the time value of certain future cash flows boils down to summing simple geometric progressions (GPs). That brings us to the first tip of the day:

Tip 1: When finding the time value of money, more often than not you should be able to reduce the problem to a mathematical one of summing finite GPs. Use it!

For this post then I’ll just summarize the math and the main results.

Discount Factors

I include Discount Factors here only to set the notation for this post. There is nothing new here. Given an annual discount rate r, the Discount Factor DF_i (with annual compounding) is given as:

\displaystyle DF_i = \frac{1}{(1 + r)^i}

(For much of this post, annual compounding is assumed.)

Perpetuities

A perpetuity (P) is an endless stream of cash flows (C_i), the first arriving at the end of year 1 (e.g. Consols). Assuming constant interest rates, valuing it mathematically means summing the following infinite GP:

\begin{aligned} \displaystyle PV(P) &= \frac{C_1}{1 + r} + \frac{C_2}{(1 + r)^2} + \frac{C_3}{(1 + r)^3} + ... \\& = \sum_{i=1}^{\infty} \frac{C_i}{(1 + r)^i} \end{aligned}

If the problem at hand warrants assuming constant cash flows, i.e. if C_i = C \forall i this simplifies to:

\begin{aligned} \displaystyle PV(P) &= \frac{C}{1 + r} + \frac{C}{(1 + r)^2} + \frac{C}{(1 + r)^3} + ... \\& = \sum_{i=1}^{\infty} \frac{C}{(1 + r)^i} \\&= \frac{C}{r} \end{aligned}
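The closed form C/r can be sanity-checked against a long truncated sum; the cash flow and rate below are illustrative:

```python
# Sketch with made-up numbers: PV of a level perpetuity, C/r, versus a
# brute-force sum truncated after 1000 years (the discarded tail is tiny).
C, r = 100.0, 0.08

closed_form = C / r
truncated = sum(C / (1 + r) ** i for i in range(1, 1001))

assert abs(closed_form - truncated) < 1e-6
print(closed_form)   # 1250.0
```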

Perpetuities are not so important as financial instruments, but the formula/derivation for PV(P) as above is immensely useful in simplifying calculation of PVs of annuities, as we’ll see below.

Annuities

An annuity (A_n) with term n is a finite stream of cash flows (C_i), the first arriving at the end of year 1 and the last at the end of year n. Annuities can be interpreted both as payments to be made (EMIs, regular savings from your salary, recurring deposits) and as payments to be received (coupon payments from a bond). Mathematically, of course, valuing one is just summing the following finite GP:

\begin{aligned} \displaystyle PV(A_n) &= \frac{C_1}{1 + r} + \frac{C_2}{(1 + r)^2} + \frac{C_3}{(1 + r)^3} + ... + \frac{C_n}{(1 + r)^n} \end{aligned}

First of all notice that we can write an annuity as a function of the discount factors DF_i as:

\begin{aligned} PV(A_n) &= C_1 DF_1 + C_2 DF_2 + ... + C_n DF_n \\&= \sum_{i=1}^{n} C_i DF_i \end{aligned}

So, if we consider all cash flows C_i = 1, then we have the Annuity Factor AF_n for term n as:

\begin{aligned} AF_n &= DF_1 + DF_2 + ... DF_n \\&= \sum_{i=1}^{n} DF_i \end{aligned}

Tip 2: If a problem has a manageable number of terms (say, fewer than 4-5), and you’ve already calculated some of the DFs as part of the problem, then use the discount-factor route to find the PV of the annuity. It’ll not only be easier (i.e. you’ll perhaps make fewer mistakes) but also save time. Otherwise, use the formula in the textbook / as we derive below.

Just like for perpetuities, if the problem at hand warrants assuming constant cash flows, i.e. if C_i = C \; \forall i, this simplifies to:

\begin{aligned} \displaystyle PV(A_n) &= \frac{C}{1 + r} + \frac{C}{(1 + r)^2} + \frac{C}{(1 + r)^3} + ... + \frac{C}{(1 + r)^n} \end{aligned}

Tip 3: Now one way to sum it up is to use the formula for summing finite GPs, but that’s messy – and it will only get worse when you have an annuity which is growing/falling. We exploit our derivation for perpetuity here. Write:

\begin{aligned} \displaystyle PV(A_n) &= \frac{C}{1 + r} + \frac{C}{(1 + r)^2} + \frac{C}{(1 + r)^3} + ... + \frac{C}{(1 + r)^n} \\& + \big[ \frac{C}{(1 + r)^{n + 1}} + \frac{C}{(1 + r)^{n + 2}} + ... \big] - \big[ \frac{C}{(1 + r)^{n + 1}} + \frac{C}{(1 + r)^{n + 2}} + ... \big] \end{aligned}

where we have just added and subtracted a term going to perpetuity. Write:

\displaystyle A_1 = \frac{C}{1 + r} + \frac{C}{(1 + r)^2} + ... + \frac{C}{(1 + r)^n}

\displaystyle \begin{aligned} A_2 &= \frac{C}{(1 + r)^{n + 1}} + \frac{C}{(1 + r)^{n + 2}} + ... \\& = \frac{1}{(1 + r)^n} \big[ \frac{C}{1 + r} + \frac{C}{(1 + r)^2} + ... \big] \\&= \frac{1}{(1 + r)^n} \sum_{i=1}^{\infty} \frac{C}{(1 + r)^i} \end{aligned}

Clearly A_1 + A_2 is:

\displaystyle \begin{aligned} A_1 + A_2 &= \frac{C}{1 + r} + \frac{C}{(1 + r)^2} + ... + \frac{C}{(1 + r)^n} + \frac{C}{(1 + r)^{n + 1}} + \frac{C}{(1 + r)^{n + 2}} + ... \\&= \sum_{i=1}^{\infty} \frac{C}{(1 + r)^i} \end{aligned}

That is, we can simplify PV(A_n) as:

\begin{aligned} \displaystyle PV(A_n) &= (A_1 + A_2) - (A_2) \\&= \sum_{i=1}^{\infty} \frac{C}{(1 + r)^i} - \frac{1}{(1 + r)^n} \sum_{i=1}^{\infty} \frac{C}{(1 + r)^i} \\&= \frac{C}{r} - \frac{1}{(1 + r)^n} \frac{C}{r} \end{aligned}

Notice how we have used the perpetuity A_2 to our advantage: adding it converts our original annuity into a perpetuity, and we then subtract A_2 back out.

We have now the final result for PV(A_n):

\displaystyle PV(A_n) = \frac{C}{r} \big[ 1- \frac{1}{(1 + r)^n} \big]
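A quick check of this formula against brute-force discounting; C, r and n below are hypothetical:

```python
# Sketch with made-up numbers: the closed-form annuity PV versus
# discounting each of the n cash flows individually.
C, r, n = 100.0, 0.08, 10

closed_form = (C / r) * (1 - 1 / (1 + r) ** n)
brute_force = sum(C / (1 + r) ** i for i in range(1, n + 1))

assert abs(closed_form - brute_force) < 1e-9
print(round(closed_form, 2))   # 671.01
```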

Growing Annuities

If we have a growing annuity (the growth rate may be positive or negative), our expression for PV(A^{g}_n) changes to:

\begin{aligned} \displaystyle PV(A^{g}_n) &= \frac{C}{1 + r} + \frac{C(1 + g)}{(1 + r)^2} + \frac{C(1 + g)^2}{(1 + r)^3} + ... + \frac{C(1 + g)^{n - 1}}{(1 + r)^n} \end{aligned}

If we take out the factor \displaystyle \frac{1}{1 + r} from the above equation we end up with a finite GP again, but with a different common ratio – in this case \displaystyle \frac{1 + g}{1 + r}.

Repeat the same steps as earlier (write PV(A^{g}_n) again as a difference of two perpetuities – only the common ratio is different) and we end up with a similar result for growing annuities. Let’s outline the proof:

\begin{aligned} \displaystyle PV(A^{g}_n) &= \frac{C}{1 + r} + \frac{C(1 + g)}{(1 + r)^2} + \frac{C(1 + g)^2}{(1 + r)^3} + ... + \frac{C(1 + g)^{n - 1}}{(1 + r)^n} \\& + \big[ \frac{C(1 + g)^n}{(1 + r)^{n + 1}} + \frac{C(1 + g)^{n + 1}}{(1 + r)^{n + 2}} + ... \big] - \big[ \frac{C(1 + g)^n}{(1 + r)^{n + 1}} + \frac{C(1 + g)^{n + 1}}{(1 + r)^{n + 2}} + ... \big] \end{aligned}

Again, we can similarly define A^{g}_1 and A^{g}_2, and go through exactly the same steps to show that (for r > g, so that the perpetuity sum converges):

\begin{aligned} \displaystyle A^{g}_1 + A^{g}_2 &= \sum_{i=1}^{\infty} \frac{C(1 + g)^{i - 1}}{(1 + r)^i} \\&= \frac{C}{r - g} \end{aligned}

And we end up with final result as:

\displaystyle PV(A^{g}_n) = \frac{C}{r - g} \big[ 1- \frac{(1 + g)^n}{(1 + r)^n} \big]
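The same brute-force check works for the growing annuity, including a negative g; all the numbers below are made up (with r > g throughout):

```python
# Sketch with made-up numbers: the growing-annuity formula versus
# discounting each cash flow C(1+g)^(i-1) individually.
C, r, n = 100.0, 0.08, 10

g = 0.03   # growing cash flows
closed_form = (C / (r - g)) * (1 - ((1 + g) / (1 + r)) ** n)
brute_force = sum(C * (1 + g) ** (i - 1) / (1 + r) ** i for i in range(1, n + 1))
assert abs(closed_form - brute_force) < 1e-9

g2 = -0.02  # systematically shrinking cash flows work just as well
shrinking = (C / (r - g2)) * (1 - ((1 + g2) / (1 + r)) ** n)
brute2 = sum(C * (1 + g2) ** (i - 1) / (1 + r) ** i for i in range(1, n + 1))
assert abs(shrinking - brute2) < 1e-9
```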

Note that there is nothing in the above formula that prevents g from being negative. g can very well be less than 0 – that would be the case where our future cash-flows are going down in a systematic manner.

Some other Things to Remember

1. When there is only a single unit cash flow (i.e. C_n = 1) in the future, the following relationships hold:

\displaystyle PV_n = DF_n

\displaystyle FV_n = \frac{1}{DF_n}

2. For an annuity with constant cash flow every year we can talk about an Annuity Factor (AF_n) as:

\begin{aligned} AF_n &= DF_1 + DF_2 + ... DF_n \\&= \sum_{i=1}^{n} DF_i \end{aligned}

3. In general, the PV of a unit cash flow n years later, with compounding k times a year, is given by the general expression:

PV_n = \displaystyle \frac{1}{(1 + \frac{r}{k})^{nk}}

And accordingly, the general expression for an annuity paying a unit sum k times a year for n years becomes:

\displaystyle \sum_{i = 1}^{nk}  \frac{1}{(1 + \frac{r}{k})^{i}}
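Both general expressions fit in a few lines of code; the nominal rate, horizon and compounding frequency below are hypothetical:

```python
# Sketch with made-up numbers: PV of a unit sum n years out with k
# compounding periods per year, and the corresponding annuity factor.
r, n, k = 0.12, 5, 12   # 12% nominal rate, 5 years, monthly compounding

pv_unit = 1 / (1 + r / k) ** (n * k)                          # PV of 1 in n years
annuity_factor = sum(1 / (1 + r / k) ** i for i in range(1, n * k + 1))

assert 0 < pv_unit < 1
# The annuity factor also follows from the closed form (1 - v^(nk)) / (r/k)
assert abs(annuity_factor - (1 - pv_unit) / (r / k)) < 1e-6
print(round(pv_unit, 4), round(annuity_factor, 4))
```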

The Inflation Problem

This is with reference to a couple of problems around inflation in the textbook. The inflation problem is just an application of the idea of a growing annuity. In the presence of inflation, for your consumption to remain the same in ‘real’/quantity terms, annual cash flows must grow at a rate that matches the inflation rate.

The only difference is that, because our typical annuity starts at the end of the year, unlike the earlier case of a growing annuity, the first annuity payment here is also higher by the inflation rate.

So, if the inflation rate is known (that is, we know it’s going to remain the same for the entire horizon), then the expression for this annuity will be:

\begin{aligned} \displaystyle PV(A_n) &= \frac{C(1 + g)}{1 + r} + \frac{C(1 + g)^2}{(1 + r)^2} + \frac{C(1 + g)^3}{(1 + r)^3} + ... + \frac{C(1 + g)^n}{(1 + r)^n} \end{aligned}

So, in this case we can factor out \displaystyle \frac{1 + g}{1 + r} and we are again left with a GP with a common ratio \displaystyle \frac{1 + g}{1 + r}. Nothing else changes.

And if we now write

\displaystyle \frac{1 + r}{1 + g} = 1 + \rho

\begin{aligned} \displaystyle \Rightarrow \rho &= \frac{1 + r}{1 + g} - 1 \\& < r \end{aligned}

then we can rewrite PV(A_n) as:

\begin{aligned} \displaystyle PV(A_n) &= \frac{C}{1 + \rho} + \frac{C}{(1 + \rho)^2} + \frac{C}{(1 + \rho)^3} + ... + \frac{C}{(1 + \rho)^n} \end{aligned}

which is just a normal annuity, but with a different discount rate \rho. And what is this discount rate \rho? Think of it as the ‘real’ interest rate that applies in the presence of inflation. That is, in presence of inflation our effective discount rate goes down (why?).
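A short check that discounting inflation-matched cash flows at r is the same as discounting level cash flows at the real rate rho; the cash flow, rates and horizon below are made up:

```python
# Sketch with made-up numbers: an inflation-matched (growing) annuity
# discounted at r equals a level annuity discounted at the 'real' rate
# rho = (1 + r)/(1 + g) - 1.
C, r, g, n = 100.0, 0.10, 0.04, 20

rho = (1 + r) / (1 + g) - 1
assert rho < r   # for g > 0 the effective discount rate goes down

# First payment is C(1+g), as in the inflation problem above
growing = sum(C * (1 + g) ** i / (1 + r) ** i for i in range(1, n + 1))
level_at_rho = sum(C / (1 + rho) ** i for i in range(1, n + 1))

assert abs(growing - level_at_rho) < 1e-9
```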


One Last Tip

When all else fails, work ab initio. For any cash-flow stream you are supposed to value, write down the ‘time axis’ on the first row (0 --- 1 --- 2 --- 3 --- 4 --- ... --- N), and below it list all cash flows (with the right sign), with the cash flows from each activity/operation on a separate row. This makes sure you count each cash flow at the ‘right’ time (and not more than once). Now just bring each of them back to time 0 separately and sum them up. And you are done.
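The recipe can be sketched as code, with a made-up two-row cash-flow table (one row per activity):

```python
# Sketch of the ab initio recipe with hypothetical cash flows: lay each
# activity's cash flows on the time axis, net them period by period, then
# discount each net cash flow back to time 0 and sum.
r = 0.10
periods = [0, 1, 2, 3]
investment = [-1000.0, 0.0, 0.0, 0.0]     # outflow at time 0
operations = [0.0, 400.0, 500.0, 600.0]   # inflows at times 1..3

net = [sum(row) for row in zip(investment, operations)]
pv = sum(cf / (1 + r) ** t for t, cf in zip(periods, net))
print(round(pv, 2))   # 227.65
```

Keeping each activity on its own row makes it easy to spot a cash flow counted twice or placed in the wrong period.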

Variations on time-value-of-money problems include situations where you know the PV (your car/home loan, for example) and have to do some algebra/arithmetic to find some regular cash flow (your EMI). At other times you know the cash flows and have to find the PV. (Always try to do the algebra first and plug in the numbers as the last step – more often than not the algebra helps simplify the final expression and reduces the possibility of silly mistakes. Though, of course, this may all be moot, as these days you’ll probably be using software like Excel to do the job for you. It doesn’t hurt to know the principles though.)

It is an urban legend that Albert Einstein said that compound interest is the most powerful force in the universe – the Indians perhaps didn’t think through it!

Compound Interest

Written by Vineet

August 18, 2016 at 1:08 pm

[PGP-I FM] Foundations of the NPV Rule (Wonkish)


If you came looking for the summary of session 2 and found this post instead, stop right now and click here.

(So yes, any blog post which you see here with ‘Wonkish’ attached to the title – just like this one – can be safely skipped without any loss of continuity. While such posts would add value in the sense that you’ll hopefully get a better understanding of  the ideas covered, the material would typically be slightly advanced compared to what we are doing in the class. So yes, in that sense, all such posts would be, what to say, well, completely useless as far as your exams etc. are concerned. But, then, people don’t come to WIMWI just because they get to read textbooks, right?)

This post describes a graphical proof of the idea that in the presence of a bank, investment and consumption decisions can be separated. Consider it supplementary, and slightly advanced, material related to the Fisher Separation Theorem. Your takeaway from this? Well, it shows that what we did using some nice round numbers in the class holds more generally. If you like microeconomics, and have time, go for it. That said, it can be safely skipped without any loss of understanding of what comes later.

‘Proving’ the Fisher Separation Theorem 

Let’s consider the following decision problem:

The world just has two periods – today and tomorrow. Assuming that you have an income today of amount X, the question that is asked is, how much should you consume today?

Before you answer this question, ask – what is my information set? Because your answer would depend on the opportunities you face, right? Do you have access to any production/investment opportunities? Do you have access to bank/capital markets? Do you have access to both?

We start with the simplest case.

Case 1: No Capital Markets, No Production Opportunities

In this case the best you can do is consume whatever you feel like today (your subjective preference for today), and maybe just put the remaining money in a trunk and save for the next day. Whatever you save today is available to you tomorrow for consumption.

So, in this world, for every one unit you save, you have that unit available to consume tomorrow. If you started out with X, the max you can consume today is X (if you decide to consume nothing tomorrow), and the max you can consume tomorrow is also X (if you decide to consume nothing today).

So, what does my choice problem look like graphically? The budget constraint equation is Y_0 + Y_1 = X, and its slope is -1.


So, if your indifference curve U(C_0, C_1) is shaped as above, you just choose a point like A on the budget constraint. Of course you can’t do any better than that, as there is just no access to any other technology – neither investment/production opportunities, nor a bank/capital markets.

Case 2: Capital Markets but No Production Opportunities

Let’s complicate things a little and introduce a bank into the system. That is, now you have the option not just to put your money in the trunk, but also to put it in a bank that will earn you interest at the rate r. That is, for every rupee you put in the bank today, you get (1 + r) tomorrow.

So, in our previous example, if you still decide to consume Y_0 today, now instead you can put that remaining Y_1 in the bank and get Y^{'}_1 = Y_1 (1 + r) tomorrow. That is, you can consume more tomorrow.

So, what changes? Nothing but the budget constraint: the tradeoff that was earlier 1-for-1 (put 1 unit in the trunk today, and that allows you to consume 1 extra unit tomorrow) has now become 1-for-(1 + r) – for every unit of income set aside today you get a little more, because the bank pays you interest.

And that extra amount is (1 + r) - 1 = r. So the budget constraint rotates about its intercept on the x-axis to reach a higher point on the y-axis. That is, the slope, which was earlier -1, is now -(1 + r).

The budget constraint equation now becomes \displaystyle Y^{'}_0 + \frac{Y^{'}_1}{1 + r} = X, and we are clearly better off.
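A minimal sketch of this budget constraint, with made-up X and r, checking its two endpoints and that any feasible consumption pair satisfies the equation:

```python
# Sketch with hypothetical endowment and rate: the Case 2 budget line
# Y'_0 + Y'_1/(1 + r) = X, where savings earn interest at the bank.
X, r = 100.0, 0.10

def tomorrow(consume_today):
    """Consumption available tomorrow after banking today's savings."""
    return (X - consume_today) * (1 + r)

assert tomorrow(X) == 0.0                         # consume everything today
assert abs(tomorrow(0.0) - X * (1 + r)) < 1e-9    # max point on the y-axis

# Any intermediate choice also lies on the budget line
c0 = 40.0
assert abs(c0 + tomorrow(c0) / (1 + r) - X) < 1e-9
```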


That is, while earlier we could only reach the utility curve U(C_0, C_1), now, because of the existence of the capital markets, we are able to move to the higher utility curve V(C_0, C_1).

Case 3: Production Opportunities but no Capital Markets

Now, this is a slightly less familiar case of having no bank, but having production opportunities. That is, we also have the option to invest our money now in some venture (instead of a bank).

The problem, however, is that while the production function you know from your microeconomics course lives in the production plane, our choice problem is one of consumption, so we first have to move from the production plane to the consumption plane.

We can do this because we are investing all what we are not consuming today. So, if we consume Y_0 today, we invest X - Y_0 in the production opportunity. And this is the connection that allows us to bring what is there in the production plane onto the consumption plane.

We can do so by taking the mirror image of our production function f(X) with respect to the y-axis, and we end up with a production function in the consumption plane. (You wanna try this now on your own?)

Because the marginal rate of return from production opportunities is initially high – as long as it exceeds the bank’s rate of return in the early stages of investment – we can reach even higher consumption tomorrow than was possible in a world with only a bank. That is, given the production technology, we can now reach a maximum point f(X), instead of just X(1 + r), on the y-axis (tomorrow).


That is, now we are able to move to a new (higher) tangency point W(C_0, C_1).

(Although one can’t make direct comparisons with the capital-markets case, as there is no capital market here, given that a production opportunity potentially offers a higher rate of return than a bank, we may think of ourselves as being better off compared to putting money in a bank ‘had there been one’, where our utility was U(C_0, C_1).)

Case 4: Production Opportunities in Presence of Capital Markets

Ok. Let’s put it all together now. We bring our Case 2 and Case 3 together and get:


So, while we could reach the utility curve W(C_0, C_1) in Case 3, here we are able to do a little better. We are able to move to a higher utility curve at W^*(C_0, C_1). And how does this come about?

Again, as always, we break the problem into parts and tackle it step by step. We have the following two decisions to make to solve our consumption problem:

  • How much to put in production opportunities? (Case 3)
  • Of the amount left over, how much to put in the bank (Case 2), and how much to consume today (our subjective preferences)?

We start with the first one.

The Production Decision

We have two places to park our money in. We can either put it in the bank or in production opportunities.

That is, starting at X, our current wealth, we can either do what we did in Case 2 and move along the X \leftrightarrow X (1 + r) schedule, or we can move along the X \leftrightarrow P \leftrightarrow f(X) schedule. Which one should we choose?

You know enough microeconomics by now to understand that we should keep investing in one or the other until at the margin we are getting the same return from both. (If you don’t understand this you are on very shaky grounds – time to pick up your micro text.)

We can see that to the right of point P \equiv (P_0, P_1), the marginal rate of return from production opportunities is more than the bank return: slope of the production opportunity curve f(X) is greater than (1 + r), i.e. f'(X) > (1 + r) . And vice-versa for the left of point P.

So, we should move along the path X \leftrightarrow P, investing in production opportunities till we reach point P. Beyond that, because the slope of the production function falls below the bank’s rate of return, we should stop investing in production opportunities: at that point it makes more sense to put the money in the bank instead. So, the long and short of the argument is that, whatever one’s subjective preferences, the maximum we should invest in production opportunities is up to point P, i.e. the amount X - P_0.

But what about consumption? Isn’t that our real decision problem?

Borrowing Against Future Income: Present Value

Before we address this question, we need a concept which I am sure none of you need to be taught. Formally speaking, this is what is called the idea of Present Value (PV) in finance.

A bank gives you a rate of return r, i.e. for every one unit of money you put in the bank you get (1 + r). The bank not only allows you to deposit money, but also to borrow from it. So if you borrow 1 from the bank today, you will have to return (1 + r) tomorrow. Let’s now pose the question: how much do you have to borrow today to return 1 tomorrow?

Well, it’s a simple high school problem. You just scale today’s borrowing by a factor of \frac{1}{1 + r}, i.e. you just borrow \frac{1}{1 + r}, and when the time comes (tomorrow) return \frac{1}{1 + r} \times (1 + r) = 1.

We say, then, that the PV of 1 is \frac{1}{1 + r}. That is, each unit tomorrow is worth a little less today, namely \frac{1}{1 + r}. An entrepreneur understands this all too well. PV is just an economist’s way of stating the proverbial refrain:

A bird in the hand is worth two in the bush

To summarize, if we are expecting 1 unit tomorrow, the existence of a bank allows us to borrow its PV, \frac{1}{1 + r}, today. When the time comes, we can use our expected income to return the 1 unit we owe to the bank. This is called borrowing against future income.
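In code, the borrow-and-repay round trip is a one-liner; the 10% rate below is just an assumed illustration:

```python
# Present value of 1 unit received tomorrow, assuming a bank rate r (here 10%).
r = 0.10

pv = 1 / (1 + r)          # borrow this much today...
repayment = pv * (1 + r)  # ...and owe exactly 1 tomorrow

print(round(pv, 4))         # 0.9091
print(round(repayment, 4))  # 1.0
```

Any future amount scales the same way: a claim of C tomorrow is worth C * pv today.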

The Consumption Decision

Ok, back to our consumption decision problem. Choosing the production level, or the investment amount, at X - P_0 gives us P_1 tomorrow (as in the graph above).

Let’s use what we just learnt. Assuming we are living in a world without any uncertainty, we know that we are going to get P_1 if we invest X - P_0 today. So, we can now go to the bank and borrow against this future income of P_1. And how much will the bank give us for that?

Well, that’s simple. Because the PV today of 1 unit tomorrow is \frac{1}{1 + r}, P_1 is worth \frac{P_1}{1 + r} today, and that’s exactly what the bank would be willing to offer us. That is, a bank allows us to increase today’s consumption by borrowing against tomorrow’s earnings – the PV of P_1, i.e. \frac{P_1}{1 + r}.

Remember that P is the point where the marginal rate of transformation (MRT) from production just matches the ‘opportunity cost’ of putting that money in the bank. And as we argued, that is the maximum amount of money we should be investing in production opportunities. Any more than that and we are choosing an inferior solution to our production decision problem.

So, the total consumption possible today at the point of optimum production is:

\displaystyle Z_0 = P_0 + \frac{P_1}{1 +r}

You are welcome to check, but Z_0 is the farthest point we can reach on the x-axis. That is, any other point on the locus of the production function results in a solution that will be to the left of (or inferior to) Z_0.

That is, by starting out with the wealth X we have invested X - P_0, which gives P_1 tomorrow, whose PV is just \frac{P_1}{1 + r} = Z_0 - P_0. That is, the maximum possible consumption today is P_0 + (Z_0 - P_0) = Z_0.

Net Present Value

This is now time to introduce another important concept in finance.

How much did we invest? We invested X - P_0 in our production opportunity. And what did we get out of it? X - P_0 invested today gave us P_1 tomorrow. The PV of tomorrow’s income from the investment is \frac{P_1}{1 + r}.

The difference between the PV of our income from the investment, \frac{P_1}{1 + r}, and our investment outlay, (X - P_0), is called the Net Present Value, or NPV for short. And our NPV with an investment of X - P_0 is:

\begin{aligned} \displaystyle NPV &= \frac{P_1}{1 +r} - (X - P_0) \\&= \big(P_0 + \frac{P_1}{1 + r}\big) - X \end{aligned}

Again, you can show that any other level of production / investment results in a lower NPV. That is, investment at a point where MRT = - (1 + r) maximises the NPV.
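Here is a minimal Python sketch of the production-cum-NPV calculation; the particular values of X, P_0, P_1 and r are assumptions for illustration only, not taken from the graph:

```python
# Sketch of the production + NPV computation, with assumed numbers:
# wealth X, optimal production point (P0, P1), bank rate r.
r = 0.10
X = 5000.0
P0, P1 = 3000.0, 2750.0        # hypothetical optimum: invest X - P0 = 2000, get P1 tomorrow

investment = X - P0             # outlay today
Z0 = P0 + P1 / (1 + r)          # maximum consumption possible today
NPV = P1 / (1 + r) - investment # which is exactly Z0 - X

print(round(Z0, 2))   # 5500.0
print(round(NPV, 2))  # 500.0
```

Note the identity in the last line of the derivation above: NPV is just the horizontal distance between Z_0 and the starting wealth X.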

The Last Step: Back to the Consumption Problem

But what if one wants to consume the same amount of money today that one was consuming when there was no bank – that is, the amount Y_0 as in Case 3? Well, in the presence of a bank one can easily do that. How? Simple – by borrowing against future income. In fact, why just Y_0 – with a bank one can actually attain any level of consumption today up to Z_0. And how is that?

Well, one doesn’t have to borrow against all of the future income P_1, right? One can borrow any fraction of the future income. So, if one wants to consume Y_1 tomorrow, one can borrow the PV of P_1 - Y_1, which as you are welcome to check is exactly Y_0 - P_0. This means that the consumption today is P_0 + (Y_0 - P_0) = Y_0 – again, exactly as wanted.

Since choice of Y_0 is arbitrary, to generalize, by borrowing part/full against one’s future income one can reach any point on the P_0 \leftrightarrow Z_0 line.
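A quick sketch of this partial borrowing, with purely hypothetical numbers for P_0, P_1 and r:

```python
# Reaching an arbitrary point (Y0, Y1) on the P0 <-> Z0 line by partial borrowing.
# All numbers are hypothetical; the identity itself is what matters.
r = 0.10
P0, P1 = 3000.0, 2750.0   # assumed optimal production point

Y1 = 1650.0                    # desired consumption tomorrow
borrow = (P1 - Y1) / (1 + r)   # PV of the part of P1 we give up
Y0 = P0 + borrow               # consumption today

print(round(borrow, 2))  # 1000.0
print(round(Y0, 2))      # 4000.0
```

Setting Y1 = P1 gives borrow = 0 (consume P_0 today); setting Y1 = 0 gives Y0 = Z_0, the maximum.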

Of course, one can also do the other way around. If we don’t want to consume P_0 today, we can postpone our consumption by lending – by depositing the money in the bank. Again, the same logic applies.

Remember, we still invest till the point P. And if we want to consume less than P_0 today, we put the remaining money in the bank. (To the left of P, the marginal return from the production opportunity is less than the rate of return from the bank, so it doesn’t make sense to invest if we are to the left of P.)

This gives us the following Fisher Separation Theorem:

In the presence of productive opportunities and capital markets, all consumers should choose the investment that maximises their net present value (the farthest point on the abscissa: Z_0) irrespective of their individual subjective preferences. Having selected the level of investment that maximises their net present value (wealth), they should then borrow from / lend to the bank depending on how they want to plan (smooth) their inter-temporal (in-between-times) consumption.

But what happens at the margin: What if we only have a single investment opportunity – as for example, we did in the class?

So far we have considered a continuous production function – that is, we have implicitly assumed a continuum of (infinitely many) investment opportunities. We then argued that the NPV is maximised at the level of investment where MRT = - (1 + r).

But what happens when we have only a single investment opportunity? What is the optimal investment decision criterion in that case? What we are essentially saying is that instead of a production function schedule/curve, we now have just a point.

Let’s consider two different cases. First is the case where the available investment opportunity, A, lies to the left of the “bank line”, i.e. as below:

(Click on the graph to zoom)

The second case we consider is when it’s the other way around – that is, the investment opportunity B lies to the right of the “bank line”, i.e. as below:

(Click on the graph to zoom)

It should be clear to you that the opportunity A is inferior to opportunity B. How? As earlier, just draw the ‘tangency line’ on the available investment opportunity ‘set’. Of course, in this case it would be trivial and would just translate into drawing a line parallel to the “bank line” passing through the points A and B. What do we get? Let’s see:

(Click on the graph to zoom)

So, if we choose investment A, we consume A_0 today, invest X - A_0 and get A_1 tomorrow, and the total PV is X_A. On the other hand if we choose investment B, we consume B_0 today, invest X - B_0 and get B_1 tomorrow, and the total PV is X_B. That is, the NPV from investment in A and B respectively are:

\displaystyle \begin{aligned} NPV_A &=A_0 + \frac{A_1}{1 + r} - X \\&= X_A - X \\& < 0 \end{aligned}

\displaystyle \begin{aligned} NPV_B &=B_0 + \frac{B_1}{1 + r} - X \\&= X_B - X \\& > 0 \end{aligned}

That is, if we choose investment A we end up with a PV less than our original wealth X, and that PV is X_A. On the other hand, if we choose investment B we end up with a PV which is more than X, and that PV is X_B. Investments of the kind A, with NPV < 0, are clearly not desirable then – we end up with wealth less than what we started out with.

So if we have only a finite number of investment opportunities to choose from, we should only choose the ones that have an NPV > 0.

This is indeed a more realistic situation. Typically a manager would only have select investment opportunities available. In that case, the NPV maximisation rule boils down to a very simple criterion:

Select all investment opportunities that have NPV > 0.
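A minimal sketch of this selection criterion; the two opportunities and their cash flows below are made up for illustration (A is built to have NPV < 0, B to have NPV > 0, as in the graphs):

```python
# The NPV rule with a finite menu of opportunities: accept those with NPV > 0.
# Opportunity = (outlay today, payoff tomorrow); all numbers hypothetical.
r = 0.10

opportunities = {
    "A": (2000.0, 1980.0),   # NPV = 1980/1.1 - 2000 = -200  -> reject
    "B": (2000.0, 4000.0),   # NPV = 4000/1.1 - 2000 ~ 1636  -> accept
}

def npv(outlay, payoff, r):
    """PV of tomorrow's payoff minus today's outlay."""
    return payoff / (1 + r) - outlay

accepted = [name for name, (c0, c1) in opportunities.items() if npv(c0, c1, r) > 0]
print(accepted)  # ['B']
```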

Assumptions in ‘proving’ the Separation Theorem

  1. No uncertainty
  2. All consumers have the same information
  3. There are no transaction costs
  4. One can borrow and lend at the same rate of interest
  5. Capital markets are complete (we’ll see what that exactly means when we study the notion of market efficiency)

To lay the cards on the table, the theorem doesn’t work exactly (or as neatly) when there are transaction costs. Again, assumptions do not test a theory, its predictions do. And clearly, when managers and entrepreneurs plan their investment and financing decisions, their intuition is not too far from the Separation Theorem. And that’s why you should understand what it means and where it is coming from.

Even though we ‘proved’ the theorem in a certain world, it turns out it holds even in the uncertain world if instead of a certain payoff you are willing to substitute its certainty equivalent in the problem. But that’s more than you should worry about at this stage – unless you are one of the two Finance area FPMs in the class :)

Suggested Reading

Jack Hirshleifer, Investment Decision Criteria, UCLA Working Paper 365, March 1985. Available here.

This is as simple a technical note as you can find on the Separation Theorem and it’s written by a pioneer in the discipline. (Assumes a decent understanding of basic microeconomics though.)

Written by Vineet

August 12, 2016 at 3:19 pm

[PGP-I FM] The Present Value Rule

with 3 comments

In our discussion today we considered the following three cases:

  • No Bank, No Investment Opportunity: In absence of a bank, the sum of consumption in both time periods must be equal to the total available wealth. What one consumes is an individual decision, but sum of what is consumed ‘today’ and ‘tomorrow’ must equal total wealth (in our example, Rs. 5000). That is, (Consumption Today) + (Consumption Tomorrow) = 5000
  • Only Bank, No Investment Opportunity: In this case we saw that an impatient person could consume all of Rs. 5000 today, and a patient one could get Rs. 5500 tomorrow. Meaning that, when there is a bank, there is a ‘reward for waiting’, in that the patient person can consume an extra Rs. 500 tomorrow. We also argued that in this case it doesn’t make sense to simply add money today and tomorrow, because there is an exchange happening between today and tomorrow and only after adjusting for the exchange rate can we compare monies at two different points in time. So in this case we would have (1 + Interest Rate) * (Consumption Today) + (Consumption Tomorrow) = 5500. This is one of the most important lessons in finance – one can’t simply add absolute amounts of money today and tomorrow; one needs to adjust for the interest rate.
  • Both Bank and an Investment Opportunity: Here the lesson was that as long as Net Present Value of the investment opportunity is positive both the patient and the impatient ones should invest and use bank to plan consumption based on their preferences. This is what is called Fisher Separation.
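The first two budget constraints can be checked in a couple of lines of Python; the particular split between consumption today and tomorrow below is an arbitrary illustration:

```python
# Budget constraints from the class example: wealth Rs. 5000, r = 10%.
r = 0.10
wealth = 5000.0

# Case 1 - no bank: consumption today + consumption tomorrow = wealth.
c0, c1 = 2000.0, 3000.0          # any split works; this one is arbitrary
assert c0 + c1 == wealth

# Case 2 - with a bank: (1 + r)*(consumption today) + (consumption tomorrow) = 5500.
c0, c1 = 2000.0, 3300.0          # consume 2000 now, bank the remaining 3000
print(round((1 + r) * c0 + c1, 2))  # 5500.0
```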

Fisher Separation Theorem

Let’s first state the theorem in words:

Given perfect and complete capital markets, the production decision is governed solely by an objective market criterion (represented by maximizing wealth) without regard to individual subjective preferences that enter into their consumption decision.

That is, the consumer’s problem of determining the optimal level of investment and optimal consumption stream can be separated in two parts:

  • First we choose the investment level that maximizes our wealth. This choice is independent of one’s preferences.
  • And then we select the consumption stream based on the maximized wealth.

This is what allows managers to invest in projects on their own merit irrespective of the individual shareholders’ preferences. And that happens because the existence of capital markets allows shareholders to plan their consumption according to their preferences.

Let’s recap the example we did in the class.

We are given the following:

  • Wealth: 5000
  • Investment opportunity: 2000
  • Interest rate: r = 10\% = 0.1

Graphically, we ended up with a figure like this:

(Click on the graph to zoom.)

For a patient person, the investment opportunity allows him to transform I = 2000 into 4000, and by putting the remaining 3000 into the bank he ends up with 7300 at the end of the period.

For an impatient person there is an extra step of going to the bank to borrow against the promised 4000 after the investment, but she also ends up richer. How? Out of the 5000 that she has, she gives 2000 to her friend for the business, and on her friend’s credibility borrows from the bank against the 4000 that she knows she will get from the business (which she can then use to repay the bank). So, in total she ends up with: 3000 + \displaystyle \frac{4000}{1.1} = 6636.

So both are richer, by 1800 and 1636 respectively. But then, as the graph above shows, they are equivalent. Being richer tomorrow by 1800 is equivalent to being richer by 1636 today.

Of course, we could have also found this by completely ignoring the preferences, and simply considering investment in its own right:

\displaystyle \begin{aligned} NPV &= -\mbox{Investment} + \frac{\mbox{Cash flow from investment}}{1.1} \\&= -2000 + \frac{4000}{1.1} \\&= 1636 \end{aligned}

So, if we apply the NPV rule, we should accept this opportunity, and then plan our consumption anywhere along line P in the graph above by borrowing from (or lending to) the bank.
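The whole example fits in a few lines of Python, using the numbers from the class (wealth 5000, invest 2000 to get 4000 tomorrow, r = 10%):

```python
# The class example in numbers: wealth 5000, invest 2000 -> 4000 tomorrow, r = 10%.
r = 0.10

npv = -2000 + 4000 / (1 + r)                # the investment on its own merit
patient = (5000 - 2000) * (1 + r) + 4000    # bank the remaining 3000, add the 4000
impatient = (5000 - 2000) + 4000 / (1 + r)  # borrow against the 4000 today

print(round(npv))        # 1636
print(round(patient))    # 7300
print(round(impatient))  # 6636
```

Note that impatient * (1 + r) equals patient: the two endowments are the same point measured at two different dates.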

What follows is more than what we did in the class, and can be safely ignored. For those interested, carry on.

In general, if a consumer earns in both periods (let’s say a salary of Y_0 and Y_1) and also consumes in both periods (let’s say amount C_0 and C_1) then we have the following identity:

\displaystyle C_0 + \frac{C_1}{1 + r} = Y_0 + \frac{Y_1}{1 + r}

which is simply another way of saying that:

\displaystyle \mbox{Present Value of Total Consumption} = \mbox{Present Value of Total Income}

As you can imagine, this holds in general for all kinds of income/consumption patterns, and not just over two periods. Loosely speaking, à la physics, you can think of it as a kind of ‘conservation equation’.

The Consumption Decision

We may also use the above equation to find our consumption pattern given our income in both periods.

As an example, for the numbers we did in the class we can take Y_0 = 3000 today and Y_1 = 4000 tomorrow (one of the cases that we considered in the class, as those awake would recall :-) ). Then if we want to consume P_0 = 4500 today, we can find the consumption tomorrow as:

\displaystyle \begin{aligned} P_1 &=Y_1 + (Y_0 - P_0) (1 + r) \\&= 4000 + (3000 - 4500) \times 1.1 \\&= 2350 \end{aligned}

And you can check that 4500 \times 1.1 + 2350 indeed equals 7300.
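A quick numerical check of this identity, with the same numbers (Y_0 = 3000, Y_1 = 4000, P_0 = 4500, r = 10%):

```python
# Verifying the 'conservation equation' with the numbers from the class:
# income Y0 = 3000 today, Y1 = 4000 tomorrow; consume P0 = 4500 today.
r = 0.10
Y0, Y1 = 3000.0, 4000.0
P0 = 4500.0

P1 = Y1 + (Y0 - P0) * (1 + r)   # consumption left for tomorrow

print(round(P1))                 # 2350
print(round(P0 * (1 + r) + P1))  # 7300
```

The PV of total consumption, P0 + P1/(1 + r), matches the PV of total income, Y0 + Y1/(1 + r), as the identity requires.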

Next we’ll spend time learning the mechanics of time value of money – essentially Chapter 2 of the book. Most of it relies on knowing your high school simple and compound interest, and summing simple geometric series. Try and also recall how Euler’s number, e, comes about as the limit \lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n.

We’ll just put it all together in the context of valuing certain future cash flows.
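As a warm-up for that, here is the limit defining Euler’s number evaluated numerically – the bridge from annual to continuous compounding:

```python
# Euler's number e as the limit (1 + 1/n)^n: compounding 1 unit n times a
# period at rate 1/n per sub-period approaches continuous compounding.
import math

for n in (1, 10, 1000, 1_000_000):
    print(n, round((1 + 1 / n) ** n, 6))

print(round(math.e, 6))  # 2.718282
```

The printed values climb from 2.0 towards e, already agreeing to five decimal places at n of a million.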

Written by Vineet

August 12, 2016 at 3:06 pm

[PGP-I FM] Stories

leave a comment »

While most of you seemed rather groggy today (unsurprisingly, more so in ‘C’ than in ‘A’), whether or not you plugged in to what was going on, it should be obvious to you all that finance should matter to you.

Most of you are here with a goal to make good money; you spend money, and also know a thing or two about investing in the future. The fact that you are here at WIMWI is a testimony to that. It also shows that you understand the risks implicit in one’s choices. How is that? By choosing to come here you are sacrificing not only two years of your earnings (your opportunity cost), but you are also taking a ‘risk’ of ending up with a future that may be sub-optimal in an economic sense.

At the same time, given the ‘signal’ that the ‘brand name’ of WIMWI provides, I am sure that is the least of your worries. If anything, the fact that you chose to come here is an indication that you value a future that is less risky. Getting an education in a place like WIMWI pretty much ensures that you’ll belong to the upper tail of the income distribution whichever field you choose to work in. That is, you have chosen an alternative in life that potentially reduces your sensitivity to future risks. This is not to judge your choice, but to make you aware that you all ‘practice’ finance as you live your life even now. What we’ll do in this course is provide a context to your individual (economic) stories; provide you a ‘way’ to systematically and constructively think about making financial decisions.

Finance is a way of doing things. It is a technology. We are all familiar with the technology of calculus. We don’t think twice about taking derivatives – even though at a fundamental level it is actually a limit. So, just like knowing the language/tools of calculus allows us to find slopes of functions, velocity of an object and what-not, in this course we’ll build the language/tool-kit of finance.

Let’s recap some of the examples/teasers that we posed today, and then some:

  1. Lottery: Say, you are lucky to end up with an unexpected windfall of money from a lottery or an endowment from your friends/family. And, say, you are given two options (that’s typically how lotteries get paid off) – get Rs. 20 lacs now or get it in 20 instalments of Rs. 1.5 lac each every year for the next 20 years. How do you answer this question? And more importantly, how do you think about answering such questions?
  2. Lottery and an investment opportunity: Say, you win the lottery but your reward is that you can either come to WIMWI for two years, or have your own start-up. What should you do? What’s the right answer? Again, finance not only tells you how to think about such questions using the language of finance, it also tells you that the answer to this question is the same for everybody. Now, isn’t that amazing! It says whatever your preferences (read: utility curves) may be, the answer to this question is independent of them, and should be the same for everybody. We’ll see this result come out very soon.
  3. Retirement problem: Look forward and imagine you are 40, by now a big-shot consultant with 15 years’ experience behind you, earning as much as a crore a year. You say you’ve had enough of slogging and just want to retire by the time you are 50. How should you plan your retirement – that is, how much should you start saving/investing every year so that you have the same standard of living till the time you expect to live?
  4. Insurance: Why on earth does it make sense for insurance firms to pay your medicine and accident bills when clearly you don’t pay anything close to the insured amount as premium? Yes, it’s because of risk-pooling. The fact that risk can be spread across a bunch of people makes insurers willing to take on your individual risk. But think about situations like Japan’s nuclear reactor accident. And what about AIG, who were insuring contingencies with things like Credit Default Swaps?
  5. Valuing start-ups: Say, being a big shot MBA and all that, you find that some of your scientist friends have come up with a cure for cancer. And your friends convince you that the drug could really work on humans, even though there is a probability attached to it. But you know that if the drug works then you are in for a lot of money – the market clearly is huge – and you value your firm close to that of a Dr. Reddy’s or a Ranbaxy. Does it make sense? Or should your start-up be worth less given the possibility of the drug failing? Could it be more? Should it?
  6. Betting Markets: Say, you are a bookie taking bets for the on-going India – West Indies test series. The experts believe the odds for the series are stacked in India’s favour at 60:40 (and it shows by now, doesn’t it). But some gullible fellow comes and offers you 300,000 INR, with the odds that he be given 200,000 if India draws the series. Clearly the odds are in your favor. But the thought of losing 200,000 makes you jittery. You also have some friends who are willing to bet on these odds, but much smaller amounts. What should your strategy be? Should you take up that gullible fellow’s offer knowing very well you hate the idea of losing as big an amount as 200,000? Well, again, there is a unique answer that finance provides. And you’d be surprised that the ‘right’ strategy makes sure you do not lose a single Rupee whoever wins.

These are just some examples of the problems that we can tackle using the technology of finance. But is there anything that is common to all these problems? Of course, the common element is finance itself, but that’s tautological. So what is it?

Well, other than the fact that all these problems are about money, obviously, all of them have two common elements: time and uncertainty. Yes, that’s what finance – well, the investment part of it in any case – helps us handle. Whenever we are faced with a cash flow/monetary problem where there is both an element of time and one of uncertainty/risk, finance provides us with a kit to address these questions.

And, yes, this is relevant across management disciplines – for marketing (what product mix, how much to invest in advertising?) as well as for operations (how much inventory to hold?). Of course, the context decides the extent of the problem, but clearly the notion of time and uncertainty is implicit in all kinds of investments decisions across firms and across industries.

But isn’t it the case that all the examples posed above affect only a per cent of our economy (in GDP terms)? For most of the population the only financial institution they are typically concerned with is their neighborhood bank. So what about consumption? Isn’t it true that at a very basic level all our decisions are about consumption? As we’ll see, the existence of financial markets allows not just firms but all consumers to be better off.

There is this beautiful Separation Theorem due to Irving Fisher: financial markets allow firms to separate their investment and financing decisions, and consumers to separate their investment and consumption decisions. (This is one of the three separation theorems you’ll study in finance – the other two are the Mutual Fund Separation Theorem and the Modigliani-Miller Capital Structure Irrelevance Theorem.)

It is great because it allows firms to worry about investments on their own merit without worrying about where the money is going to come from, and allows you to come to WIMWI (again, on its own merit) without worrying about where the money for that is going to come from. We’ll ‘prove’ that later (well, not really prove prove, but kinda show that it holds). Just remember to revisit your MRSs and MRTs.

Some other things we talked about today:

Principal-Agent Problem in Finance

Shareholders (principal) own the firm, but managers (agent) run it. Their incentives, however, are not aligned, or needn’t be so. Shareholders want to maximize the value of their holdings (read: the stock price), but managers worry about their survival, their bonus/promotion, and short-term profits. This, however, may lead agents to select investment opportunities that may be beneficial to them in the short-run, but in the long-run may reduce shareholder value. This ‘moral hazard‘ is just one example of the principal-agent problem in corporate finance.

So, to make sure managers work in shareholders’ interests, the principals must monitor the agents. The resultant monitoring costs are called agency costs. Some of the ways shareholders/markets have devised to reduce these costs are measures like:

  • Choosing Board of Directors who select the top management and ensure that the goals of the guys they recruit (to run the firm) are aligned with those of the shareholders. The top management then selects the senior management and so on and so forth.
  • Linking a substantial portion of the senior/top management’s pay to the firm’s performance / share price.
  • Providing stock options which are directly linked to the firm’s value.

Also, competition in the job market ensures that managers work in the shareholders’ interests and thereby increase their ‘market value’ in the eyes of other companies’ top management / Boards of Directors. Maybe there is a lesson for each of you in this – in how you plan your career path: whether to maximize short-term gain by changing jobs quickly, or to work ‘for’ the firm you are in so that you are considered future CEO material.

The principal-agent problem is an example of the consequences of information asymmetry in economics. One of the other examples that I guess you would be studying in your microeconomics course is the adverse selection problem, where, for example, the seller of a commodity has more information than the buyer. So, the buyer believes that he/she is getting passed an inferior product, the seller wants to hide the deficiencies, and the market fails to clear. (The guy who first systematically tackled this problem, George Akerlof, won the Nobel Prize for his efforts.) This problem (and his paper) also goes by the name of ‘Market for Lemons‘. (Old defective cars are also called lemons in the US.) Similar concerns – adverse selection and moral hazard – apply when people buy medical insurance.

But what if you are in a non-finance Firm: Where do financial markets come in? Why should you worry?

At this point you may ask – all that separation theorem and the principal-agent problem and all that is ok, but why should financial markets matter to say, a marketing or an inventory manager in a non-financial firm?

Well, it should. In any firm or industry you are employed in as a manager, you’ll be asked to make decisions where you’ll have to worry about what your competitors and suppliers are doing. You may hire a team of analysts in your department to do that for you, but that would be highly inefficient. That’s not your ‘business’, right – these things are best left to specialized firms. And the market prices of your competitors’ and suppliers’ securities contain that information (well, for the most part) – so for many purposes how your competitor is doing can be gauged by just looking at their market performance.

A more direct example: say you are employed by a firm that serves its customers in global markets. In that case you have to worry about foreign exchange markets. Similarly, if you work in operations you should (and would) worry about the happenings in the commodities markets. And there is the whole issue of hedging against the risks of things that you don’t want to worry about, or are not an expert on. So for pure risk management purposes, even derivatives markets should matter to you. As to how and why all this is true, we’ll see in the second half of the course.

Suggested Readings

  1. Models by Emanuel Derman
  2. My Life as a Quant by Emanuel Derman
  3. Capital Ideas by Peter Bernstein

PS: One of the other things we’ll impress upon you throughout the course is the importance of simplicity of models (and it shows in the popularity of ideas which go as far back as Irving Fisher, John Burr Williams and Harry Markowitz, and which are still in use today). As the boss says, ‘don’t try for a home-run when you can get the job done with a hit’.

Written by Vineet

August 11, 2016 at 2:06 pm

Posted in Teaching: FM

Tagged with ,

[WP] Open access temptations

leave a comment »

Information science isn’t exactly my area of research, but it has been fun exploring the shady side of open access. Here is the abstract:

Backlash against “megapublishers” which began in mathematics a decade ago has led to an exponential growth in open access journals. Their increasing numbers and popularity notwithstanding, there is evidence that not all open access journals are legitimate. The nature of the “gold open access” business model and increasing prevalence of “publish or perish” culture in academia has given rise to a dark under-belly in the world of scientific publishing which feeds off academics’ professional needs. Many such “predatory” publishers and journals not only seem to originate out of India but also seem to have been patronized by academics in the country. This article is a cautionary note to early-career academics and administrators in India to be wary of this “wild west” of the internet and exercise due discretion when considering/evaluating open-access journals for scholarly contributions.

The working paper is here.

Written by Vineet

April 7, 2016 at 3:14 pm

Posted in Research

Tagged with , ,
