## [PGP-I FM] Expected Utility and Valuation of Risky Payoffs

Having established that expected utility is the quantity to consider when valuing risky gambles, we are ready to start thinking about how to value risky payoffs.

The easiest way to operationalize risk is to break up future possibilities into ‘things going up’ (optimistic) and ‘things going down’ (pessimistic).

Consider a bet that offers $x_H$ if the coin toss turns up ‘heads’, and $x_T$ if it turns up ‘tails’. Since the payoffs are arbitrary, we can assume $x_H > x_T$. Assuming the coin is unbiased, the expected value of the payoff is:

$E[x] = \frac{1}{2} x_H + \frac{1}{2} x_T$

Daniel Bernoulli, however, told us that the way to ‘value’ this payoff is to look not at $E[x]$, but at the expected utility of this payoff, i.e. $E[U(x)]$. This is easily calculated as:

$E[U(x)] = \frac{1}{2} U(x_H) + \frac{1}{2} U(x_T)$

Now consider: what will be the utility of a payoff that is quantitatively equal to $E[x]$, but instead is a sure thing? Well, that’s simple. Given our utility $U(x)$ of any payoff $x$, the utility for the sure quantity $E[x]$ will be just $U(E[x])$.

So we have two quantities, $E[U(x)]$ and $U(E[x])$. Let’s compare the two. We first do it graphically, and then in a separate post later we’ll do it mathematically. (Why do it mathematically too? Well, maths removes all the confusion that may arise when drawing lines on a graph, and it is good to be aware of how to think about it mathematically.)

…

**Graphically:** $U(E[x])$ vs $E[U(x)]$

Bernoulli told us that when valuing gambles we should consider the expected utility $E[U(x)]$ of the risky payoff, and not the expected payoff $E[x]$. The utility of the sure amount (quantitatively speaking) is $U(E[x])$ which, as is apparent from the graph (and as we also show mathematically later), is greater than the expected utility $E[U(x)]$.

Further, it is clear from the graph that the utility of a certain sure payoff $x_{CE}$ – the certainty equivalent – is the same as that for the gamble: both are equal to $E[U(x)]$.

To reiterate, both utilities being quantitatively the same means that people might as well take the sure $x_{CE}$ rather than indulge in a gamble that exposes them to the risk of not even getting $x_{CE}$. Diminishing marginal utility implies that people worry more about the potential loss $E[x] - x_T$ than they are excited by the equal-sized gain $x_H - E[x]$. That is, **concavity implies risk aversion**.

This is a powerful result and forms the foundation for most of the classical finance theory.

The utility ‘sacrificed’ (the amount $E[x] - x_{CE}$ given up) when taking up the sure $x_{CE}$ instead of the gamble corresponds to the **risk premium** (and we’ll have a chance to talk more about this later). An important consequence of the existence of the risk premium is that **people value risky gambles less than their expected values.** The utility of the sure thing $x_{CE}$ is the same as the expected utility of the risky $x$ – i.e. the risky gamble is valued at $x_{CE}$ and not at $E[x]$.

We saw in the St. Petersburg Paradox that the expected utility rule tells us that the bet posed by Nicolaus Bernoulli is worth only a few bucks – much less than its (infinite) expected value. Now we can claim that concavity / diminishing marginal utility / risk aversion (all mean the same thing) implies that **risky payoffs are worth less than their expected values.**
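To see the result concretely, here is a quick numerical sketch in Python. The payoffs, the probability and the log utility function are all assumed for illustration – they are not from the lecture:

```python
import math

# A 50-50 gamble (assumed numbers): win 200 on 'heads', 50 on 'tails',
# with a concave utility function U(x) = ln(x).
x_h, x_t, p = 200.0, 50.0, 0.5

expected_value = p * x_h + (1 - p) * x_t                        # E[x] = 125
expected_utility = p * math.log(x_h) + (1 - p) * math.log(x_t)  # E[U(x)]
utility_of_ev = math.log(expected_value)                        # U(E[x])

# Certainty equivalent: the sure amount whose utility equals E[U(x)]
x_ce = math.exp(expected_utility)          # sqrt(200 * 50) = 100
risk_premium = expected_value - x_ce       # 125 - 100 = 25

print(utility_of_ev > expected_utility)    # True: U(E[x]) > E[U(x)]
print(x_ce, risk_premium)
```

The gamble is ‘worth’ its certainty equivalent of 100, not its expected value of 125; the gap of 25 is the risk premium.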

This means that the PV of the expected value of a risky gamble would be less than what it would be for a sure thing. So if a rate $r_f$ applies to the risk-free payoff ($x_{CE}$), then for a risky payoff ($x$, with expected value $E[x]$) we’ll have:

$PV = \frac{E[x]}{1 + r} < \frac{E[x]}{1 + r_f}$

Alternatively,

$r > r_f$.

Note that we have *not defined* $r$ yet (this we’ll do later in the course). We are just saying that $r - r_f$ corresponds to the distance $E[x] - x_{CE}$ (the risk premium), and in general, the higher the value of the distance $E[x] - x_{CE}$, the higher the $r$, i.e. the higher the risk premium.

A corollary to this result is that since the utility from $x_{CE}$, i.e. $U(x_{CE})$, and the expected utility $E[U(x)]$ are the same, the following will hold:

$\frac{E[x]}{1 + r} = \frac{x_{CE}}{1 + r_f}$

That is, **we can now compare risky and risk-free payoffs in PV terms** as long as we use the right discount rates! This means we now have a way to evaluate and compare *all kinds* of investments – not just the ones with risk-free cash flows but also risky cash flows. Now we are talking!
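A quick back-of-the-envelope check of this relation in Python – the expected value, certainty equivalent and risk-free rate below are assumed numbers:

```python
# Back out the risky discount rate r from E[x]/(1+r) = x_CE/(1+r_f).
e_x, x_ce, r_f = 125.0, 100.0, 0.05  # assumed: E[x], certainty equivalent, risk-free rate

pv = x_ce / (1 + r_f)   # the gamble's value today: PV of its certainty equivalent
r = e_x / pv - 1        # implied risky discount rate

print(round(r, 4))      # 0.3125 - well above r_f; the gap reflects the risk premium
```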

We are now ready to state our final results:

- **Risky payoffs are discounted at a rate higher than the risk-free rate. This rate is called the opportunity cost of capital.**
- **We can compare all investments by the PV rule as long as we choose the right discount rate commensurate with the risk of the investments.**

*Please note that this says nothing about what any individual would consume. You may as well choose the gamble and I may choose the sure thing $x_{CE}$. This is not about our individual choices. Go back to the PV rule, which said that we can evaluate all investments in PV terms. Our results on valuation of risky payoffs – a consequence of the concavity of utility curves – tell us that we can still use the PV rule as long as we discount our risky payoff at the right discount rate (the opportunity cost of capital).*

## [PGP-I FM] St. Petersburg Paradox and Expected Utility

You may read in detail about the St. Petersburg Paradox and its history here, but here is the problem Nicolaus Bernoulli posed, which Daniel Bernoulli set out to solve in his book:

*St. Petersburg Paradox*

Consider a gamble that involves the coin-toss game. You toss a coin, and if you get ‘heads’ ($H$), you get $2$ bucks and the game ends. If you get ‘tails’ ($T$), however, you keep on playing till you get an $H$, at which point the game ends. Your payoff if $H$ turns up after $n$ coin tosses is $2^n$ bucks. Nicolaus Bernoulli asked the following question – how much should you pay to play this game?

The first step towards answering this question is to ask what do I expect to win from this gamble?

Because the probability of winning $2$ bucks is $\frac{1}{2}$ (the probability of $H$ in the first coin toss is $\frac{1}{2}$), and that of winning $2^2 = 4$ bucks is $\frac{1}{4}$ (the probability that the first toss turns up a $T$, and the next one an $H$), and, in general, that of winning $2^n$ bucks is $\frac{1}{2^n}$.

The expected value, then, of the winnings, say $W$, from this gamble is:

$E[W] = \sum_{n=1}^{\infty} \frac{1}{2^n} \, 2^n = 1 + 1 + 1 + \cdots = \infty$

That is, if one were to rely only on math, the answer would be a very large number. As long as there is a finite chance of $H$ turning up after a very, very long time ($n \to \infty$) – that is, there is a positive probability of winning $2^n$ for every $n$ – math tells us that the expected value is infinity.

However, if you were to ask this of real people, as we did, it turns out this isn’t quite the case. In fact, when Daniel Bernoulli asked his friends this question no one seemed to be willing to pay more than a few ducats. So what is going on here? People aren’t willing to pay even a fraction of what the bet promises, but the expected value, $E[W]$, is infinite. This is the St. Petersburg Paradox.

Daniel Bernoulli resolved this paradox by saying, and I quote:

The determination of the value of an item must not be based on the price, but rather on the utility it yields…. There is no doubt that a gain of one thousand ducats is more significant to the pauper than to a rich man though both gain the same amount.

As far back as then, Daniel understood our familiar concave utility functions. And thus he argued that we should not be looking at the payoffs *per se* but at what they offer us – that is, their utility. So the quantity to be considered should be not $E[W]$, but $E[U(W)]$. That is, he said we should look at:

$E[U(W)] = \sum_{n=1}^{\infty} \frac{1}{2^n} U(2^n)$

So far so good. But we can’t do much with a quantity written in terms of an abstract utility function. It’s hard to quantify the expected utility if we just leave it like that. More importantly, it still doesn’t provide the answer to the original question – how much should one pay to play this gamble?

Daniel answered this problem by giving a functional form to the utility curve. He understood that it should be concave, and he obviously also understood its familiar properties – i.e. a pauper values a little bit of money more than a rich man (diminishing marginal utility).

He posited that a reasonable functional form for a well-behaved concave utility function is the natural logarithm: i.e. the utility of the consumption amount $C$ is $U(C) = \ln(C)$. You may check that it satisfies the properties of a reasonable utility curve (again, diminishing marginal utility).

And what do we get when we put $U(C) = \ln(C)$ in our expected utility equation:

$E[U(W)] = \sum_{n=1}^{\infty} \frac{1}{2^n} \ln(2^n) = \ln 2 \sum_{n=1}^{\infty} \frac{n}{2^n}$

where we have used the result $\ln(2^n) = n \ln 2$.

The term $\sum_{n=1}^{\infty} \frac{n}{2^n}$ is an (arithmetico-)geometric series which sums to $2$ (WolframAlpha confirms). The expected utility expression then simplifies to:

$E[U(W)] = 2 \ln 2 = \ln 4$

That is, if expected utility is a good measure of the value of something, one would be willing to pay only the sure amount that gives the same utility, i.e. $e^{\ln 4} = 4$ bucks. And that turns out to be close to the average price that people are willing to pay to play such a gamble (assuming you are not a Bill Gates, that is). So our textbook choice of a concave utility function is perhaps not that bad a choice after all.
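You can verify the sum numerically; here is a short sketch in Python (truncating at 60 tosses is harmless since the terms die out geometrically):

```python
import math

# E[U(W)] for the St. Petersburg gamble with U(C) = ln(C) and payoff 2^n
# when the first 'heads' arrives on toss n.
eu = sum((0.5 ** n) * math.log(2.0 ** n) for n in range(1, 60))

print(math.isclose(eu, 2 * math.log(2)))   # True: the series sums to 2*ln(2)
print(math.exp(eu))                        # certainty equivalent: ~4 bucks
```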

*The take-away*:

**When valuing risky gambles, think in terms of Expected Utility and not Expected Payoff.**

**…**

**Suggested Readings**

Daniel Bernoulli’s original ‘book’ (it’s only 15 pages and fun to read): *Exposition of a New Theory on the Measurement of Risk*. You can find it here.

Gregory Zuckerman, *The Greatest Trade Ever: The Behind-the-Scenes Story of How John Paulson Defied Wall Street and Made Financial History*. Crown Business, 2009. [John Paulson bet ‘against the market’ – i.e. took a very risky gamble (John probably would disagree) – when the sub-prime boom was at its peak. He made tons of money when the bubble finally burst!]

## [PGP-I FM] Time Value of Money

Chapter 2 of your book has a decent enough description of the mechanics of time value of money, so I won’t spend too much time on this post motivating and explaining it. If you understand the Present Value Rule, then finding the time value of money of future certain cash flows boils down to summing simple geometric progressions (GPs). That brings us to the first tip of the day:

*Tip 1:* When finding time value of money, more often than not you should be able to reduce it to a mathematical problem of valuing finite GPs. *Use it!*

For this post then I’ll just summarize the math and the main results.

…

**Discount Factors**

I include Discount Factors here only to set the notation for this post. There is nothing new here. Given an annual discount rate $r$, the Discount Factor (with annual compounding) for a cash flow at the end of year $t$ is given as:

$DF_t = \frac{1}{(1+r)^t}$

(For much of this post, annual compounding is assumed.)

…

**Perpetuities**

A perpetuity is an endless stream of cash flows ($C_1, C_2, C_3, \ldots$) starting at the end of the first year (e.g. Consols). Assuming constant interest rates, mathematically it means summing up the following infinite GP:

$PV = \frac{C_1}{1+r} + \frac{C_2}{(1+r)^2} + \frac{C_3}{(1+r)^3} + \cdots$

If the problem at hand warrants assuming constant cash flows, i.e. if $C_t = C$ for all $t$, this simplifies to:

$PV = \frac{C}{1+r} \cdot \frac{1}{1 - \frac{1}{1+r}} = \frac{C}{r}$

Perpetuities are not so important as financial instruments, but the formula/derivation for $PV = C/r$ above is immensely useful in simplifying the calculation of PVs of annuities, as we’ll see below.
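A quick numerical check, with an assumed cash flow and rate, that the discounted sum indeed converges to $C/r$:

```python
# The perpetuity formula PV = C/r as the limit of the discounted sum.
C, r = 100.0, 0.08   # assumed constant cash flow and rate

closed_form = C / r                                          # 1250.0
partial_sum = sum(C / (1 + r) ** t for t in range(1, 1000))

print(closed_form, round(partial_sum, 6))   # the partial sum has converged to C/r
```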

…

**Annuities**

An annuity for a term $T$ is a finite stream of cash flows ($C_1, C_2, \ldots, C_T$) starting at the end of year $1$. Annuities can be interpreted both as payments to be made (EMIs, regular savings from your salary, recurring deposits) as well as payments to be received (coupon payments from a bond). Mathematically, of course, it’s just summing up the following finite GP:

$PV = \frac{C_1}{1+r} + \frac{C_2}{(1+r)^2} + \cdots + \frac{C_T}{(1+r)^T}$

First of all, notice that we can write an annuity as a function of the discount factors as:

$PV = C_1 \, DF_1 + C_2 \, DF_2 + \cdots + C_T \, DF_T$

So, if we consider all cash flows $C_t = 1$, then we have the Annuity Factor for term $T$ as:

$AF_T = DF_1 + DF_2 + \cdots + DF_T$

*Tip 2:* If you have a manageable number of terms (say, less than 4-5) in a situation, and you’ve already calculated some of the $DF_t$’s as part of the problem, then use the discount factor way to find the PV of the annuity. It’ll not only be easier (i.e. you’ll perhaps make fewer mistakes) but also save time. Otherwise, use the formula in the textbook/as we derive below.

Just like for perpetuities, if the problem at hand warrants assuming constant cash flows, i.e. if $C_t = C$, this simplifies to:

$PV = C \left[ \frac{1}{1+r} + \frac{1}{(1+r)^2} + \cdots + \frac{1}{(1+r)^T} \right]$

*Tip 3:* Now one way to sum it up is to use the formula for summing finite GPs, but that’s messy – and it will only get worse when you have an annuity which is growing/falling. We exploit our derivation for the perpetuity here. Write:

$PV = C \left[ \sum_{t=1}^{\infty} \frac{1}{(1+r)^t} - \sum_{t=T+1}^{\infty} \frac{1}{(1+r)^t} \right]$

where we have just added and subtracted a term going to perpetuity. Write:

$PV = P_1 - P_2, \qquad P_1 = C \sum_{t=1}^{\infty} \frac{1}{(1+r)^t} = \frac{C}{r}$

Clearly $P_2$ is just a perpetuity starting at the end of year $T+1$, so its value at the end of year $T$ is $\frac{C}{r}$, and its PV today is:

$P_2 = \frac{1}{(1+r)^T} \cdot \frac{C}{r}$

That is, we can simplify as:

$PV = \frac{C}{r} - \frac{1}{(1+r)^T} \cdot \frac{C}{r}$

Notice how we have used the perpetuity result to our advantage: we converted our original annuity into a perpetuity and then subtracted back the ‘extra’ perpetuity starting at the end of year $T+1$.

We now have the final result for the PV of a constant annuity:

$PV = \frac{C}{r} \left[ 1 - \frac{1}{(1+r)^T} \right]$
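As a sanity check, here is the closed form against the brute-force discounted sum in Python (cash flow, rate and term assumed):

```python
# PV of a T-year constant annuity: closed form vs brute force.
C, r, T = 100.0, 0.08, 10   # assumed numbers

closed_form = (C / r) * (1 - 1 / (1 + r) ** T)
brute_force = sum(C / (1 + r) ** t for t in range(1, T + 1))

print(round(closed_form, 2), round(brute_force, 2))   # both ~671.01
```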

…

**Growing Annuities**

If we have a growing annuity (growing either upwards or downwards at a rate $g$), then our expression changes to:

$PV = \frac{C}{1+r} + \frac{C(1+g)}{(1+r)^2} + \cdots + \frac{C(1+g)^{T-1}}{(1+r)^T}$

If we take out the factor $\frac{C}{1+r}$ from the above equation we end up with a finite GP again, but with a different common ratio – in this case $\frac{1+g}{1+r}$.

Repeat the same steps as earlier (write the annuity again as a difference of two perpetuities – only the common ratio is different) and we end up with a similar result for growing annuities. Let’s outline the proof:

$PV = C \left[ \sum_{t=1}^{\infty} \frac{(1+g)^{t-1}}{(1+r)^t} - \sum_{t=T+1}^{\infty} \frac{(1+g)^{t-1}}{(1+r)^t} \right]$

Again, we can similarly define $P_1$ and $P_2$, and go through exactly the same steps to show that:

$P_1 = \frac{C}{r-g}, \qquad P_2 = \left( \frac{1+g}{1+r} \right)^T \frac{C}{r-g}$

And we end up with the final result as:

$PV = \frac{C}{r-g} \left[ 1 - \left( \frac{1+g}{1+r} \right)^T \right]$

Note that there is nothing in the above formula that prevents $g$ from being negative. $g$ can very well be less than 0 – that would be the case where our future cash-flows are going down in a systematic manner.
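Again, a quick check of the growing-annuity formula against the brute-force sum (numbers assumed; set $g < 0$ for declining cash flows):

```python
# PV of a growing annuity: first cash flow C at end of year 1, growing at g.
C, r, g, T = 100.0, 0.08, 0.03, 10   # assumed numbers

closed_form = C / (r - g) * (1 - ((1 + g) / (1 + r)) ** T)
brute_force = sum(C * (1 + g) ** (t - 1) / (1 + r) ** t for t in range(1, T + 1))

print(abs(closed_form - brute_force) < 1e-9)   # True
```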

…

**Some other Things to Remember**

1. When there is only a single unit cash flow (i.e. $C_t = 1$) at the end of year $t$ in the future, the following relationships hold:

$PV = DF_t = \frac{1}{(1+r)^t}, \qquad PV \times (1+r)^t = 1$

2. For an annuity with constant cash flow $C$ every year we can talk about an Annuity Factor ($AF_T$) as:

$PV = C \times AF_T, \qquad AF_T = \frac{1}{r} \left[ 1 - \frac{1}{(1+r)^T} \right]$

3. In general, one may think of the PV of a unit sum of cash flow $t$ years later with compounding $m$ times a year as the following general expression:

$PV = \frac{1}{\left(1 + \frac{r}{m}\right)^{mt}}$

And accordingly, the general expression for an annuity with a unit sum received $m$ times a year for $T$ years becomes:

$AF = \frac{1}{r/m} \left[ 1 - \frac{1}{\left(1 + \frac{r}{m}\right)^{mT}} \right]$
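For instance, with assumed numbers – a 12% annual rate compounded monthly over 5 years:

```python
# PV of one unit t years out with compounding m times a year, plus the
# annuity factor for a unit received every period over the m*t periods.
r, m, t = 0.12, 12, 5   # assumed numbers

df = 1 / (1 + r / m) ** (m * t)   # discount factor for a unit at year t
af = (1 - df) / (r / m)           # annuity factor over the m*t periods

print(round(df, 4), round(af, 2))
```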

…

**The Inflation Problem**

This is with reference to a couple of problems around inflation in the textbook. The inflation problem is just an application of the idea of a growing annuity. In the presence of inflation, for your consumption to remain the same in ‘real’/quantity terms, annual cash-flows must grow at a rate that matches the inflation rate $i$.

The only difference is that, because our typical annuity starts at the end of the year, unlike the earlier case of a growing annuity, our first annuity payment in this case would also be higher by the inflation rate, i.e. the first cash flow is $C(1+i)$.

So, if the inflation rate $i$ is known (that is, we know it’s going to remain the same for the entire horizon), then the expression for this annuity will be:

$PV = \frac{C(1+i)}{1+r} + \frac{C(1+i)^2}{(1+r)^2} + \cdots + \frac{C(1+i)^T}{(1+r)^T}$

So, in this case we can factor out $\frac{1+i}{1+r}$ and we are again left with a GP with a common ratio $\frac{1+i}{1+r}$. Nothing else changes.

And if we now write

$\frac{1}{1+r'} = \frac{1+i}{1+r}$

then we can rewrite the PV as:

$PV = \frac{C}{1+r'} + \frac{C}{(1+r')^2} + \cdots + \frac{C}{(1+r')^T}$

which is just a normal annuity, but with a different discount rate $r'$. And what is this discount rate $r'$? Think of it as the ‘real’ interest rate that applies in the presence of inflation. That is, in the presence of inflation our effective discount rate goes down (why?).
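A quick check in Python that discounting the inflated cash flows at $r$ gives the same PV as a plain annuity at the real rate $r'$ (all numbers assumed):

```python
# Inflation-adjusted annuity vs plain annuity at the real rate r',
# where 1 + r' = (1 + r)/(1 + i).
C, r, i, T = 100.0, 0.10, 0.04, 20   # assumed numbers

nominal = sum(C * (1 + i) ** t / (1 + r) ** t for t in range(1, T + 1))

r_real = (1 + r) / (1 + i) - 1
real_annuity = (C / r_real) * (1 - 1 / (1 + r_real) ** T)

print(abs(nominal - real_annuity) < 1e-9)   # True
```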


…

**One Last Tip**

When all else fails, use the *ab initio* approach. For any cash flow that you are supposed to value, list down the ‘time-axis’ on the first row ($t = 0, 1, 2, \ldots$), and below that list down all cash flows (with their right sign), with cash flows from each activity/operation on a separate line/row. This makes sure you’re counting each cash flow at the ‘right’ time (and not more than once). Now just bring all of them back separately to time $t = 0$ and sum them up. And you are done.

Variations on the time value of money problems are situations when one knows the PV (your car/home loan for example), and you have to do some algebra/arithmetic to find some regular cash flow (your EMI). At other times you know the cash flows and you may have to find the PV. (*Always* try and do the algebra first and then plug in the numbers as a last step – more often than not algebra helps simplify the final expression and reduces the possibility of silly mistakes. Though, of course, this may all be moot, as these days you’ll probably be using some software like Excel to do the job for you. Doesn’t hurt to know the principles though.)
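As an illustration of the EMI case, here is a sketch in Python (the loan amount, rate and term are all made up):

```python
# Backing out the EMI: the loan amount (PV) is known; solve the annuity
# formula for the constant monthly cash flow.
principal = 500_000.0    # assumed loan amount today
r_month = 0.12 / 12      # monthly rate from an assumed 12% annual rate
n = 60                   # 5 years of monthly payments

af = (1 - 1 / (1 + r_month) ** n) / r_month   # annuity factor
emi = principal / af

# Sanity check ab initio: discount each EMI back to t = 0 and sum
pv_check = sum(emi / (1 + r_month) ** t for t in range(1, n + 1))
print(round(emi, 2), abs(pv_check - principal) < 1e-6)
```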

…

It is an urban legend that Albert Einstein said that compound interest is the most powerful force in the universe – the Indians perhaps didn’t think through it!

## [PGP-I FM] Foundations of the NPV Rule (Wonkish)

If you came looking for the summary of session 2 and found this post instead, stop right now and click here.

(So yes, any blog post which you see here with ‘Wonkish’ attached to the title – just like this one – can be safely skipped without any loss of continuity. While such posts would add value in the sense that you’ll hopefully get a better understanding of the ideas covered, the material would typically be slightly advanced compared to what we are doing in the class. So yes, in that sense, all such posts would be, what to say, well, completely useless as far as your exams etc. are concerned. But, then, people don’t come to WIMWI just because they get to read textbooks, right?)

…

This post describes a graphical proof of the idea that in presence of a bank, investment and consumption decisions can be separated. Consider it as supplement, and slightly advanced, material related to the Fisher Separation Theorem. Your take away from this? Well, it shows that what we did using some nice round numbers in the class holds more generally. If you like microeconomics, and have time, go for it. That said, it can be safely skipped without any loss of understanding of what comes later.

…

**‘Proving’ the Fisher Separation Theorem **

Let’s consider the following decision problem:

The world just has two periods – today and tomorrow. Assuming that you have an income today of amount $Y_0$, the question that is asked is: how much should you consume today?

Before you answer this question, ask – what is my information set? Because your answer would depend on the opportunities you face, right? Do you have access to any production/investment opportunities? Do you have access to bank/capital markets? Do you have access to both?

We start with the simplest case.

…

*Case 1: No Capital Markets, No Production Opportunities*

In this case the best you can do is consume whatever you feel like today (your subjective preference for today), and maybe just put the remaining money in a trunk and save for the next day. Whatever you save today is available to you tomorrow for consumption.

So, in this world, for every one unit you save, you have that unit available to consume tomorrow. If you started out with $Y_0$, the max you can consume today is $Y_0$ (if you decide to consume nothing tomorrow), and the max you can consume tomorrow is also $Y_0$ (if you decide to consume nothing today).

So, how does my choice problem look graphically? The budget constraint equation is $C_0 + C_1 = Y_0$, and its slope is $-1$.

So, if your indifference curve is of the shape as above, you just choose a point like $A$ on the budget constraint. Of course you can’t do any better than that, as there is just no access to any other technology – neither in terms of investment/production opportunities, nor in the form of a bank/capital markets.

…

*Case 2: Capital Markets but No Production Opportunities*

Let’s complicate things a little bit and introduce a bank in the system. That is, now you have the option to not just put your money in the trunk, but also to put it in a bank that will earn you interest at the rate of $r$. That is, for every unit of Rupee you put in the bank today, you get $(1+r)$ tomorrow.

So, in our previous example, if you still decide to consume $C_0$ today, now instead you can put the remaining $(Y_0 - C_0)$ in the bank and get $(Y_0 - C_0)(1+r)$ tomorrow. That is, you can consume more tomorrow.

So, what changes? Well, nothing but the budget constraint, because the tradeoff that was earlier one-on-one (you put $1$ unit in the trunk today, and that allows you to consume $1$ unit extra tomorrow) has now become one-on-$(1+r)$ – that is, for every unit of your income kept aside today you get a little more because the bank pays you interest.

And that extra amount is $r$. So, the budget constraint now rotates around its intercept on the x-axis to reach a higher point on the y-axis. That is, the slope which was earlier $-1$ now becomes $-(1+r)$.

The budget constraint equation now becomes $C_0 + \frac{C_1}{1+r} = Y_0$, and we are clearly better off:

That is, while earlier we could only reach the utility curve $U_1$, now, because of the existence of the capital markets, we are able to move to the higher utility curve $U_2$.

…

*Case 3: Production Opportunities but no Capital Markets*

Now, this is a slightly less familiar case of having no bank, but having production opportunities. That is, we also have the option to invest our money now in some venture (instead of a bank).

The problem, however, is that while the production function, as you know from your microeconomics course, lives in the production plane, our choice problem is that of *consumption*, so we have to first move from the production plane to the consumption plane.

We can do this because we are investing all that we are not consuming today. So, if we consume $C_0$ today, we invest $(Y_0 - C_0)$ in the production opportunity. And this is the connection that allows us to bring what is there in the production plane onto the consumption plane.

We can do so by taking a mirror image of our production function with respect to the y-axis (i.e. the output axis), and we end up with a production function in the consumption plane. (You wanna try this now on your own?)

Because the marginal rate of return from production opportunities is initially high – as long as it is more than the bank rate of return in the beginning stages of investment – we can reach an even higher consumption tomorrow compared to what was possible in a world where there was only a bank. That is, now, given the technology of production, we can reach a higher maximum point on the y-axis (tomorrow). And this is how it looks:

That is, now we are able to move to a new (higher) tangency point.

(Although one can’t make direct comparisons with the capital markets case, as there is no capital market here, given that a production opportunity potentially offers a higher rate of return than a bank, we may think of it as being better off compared to putting money in a bank ‘had there been one’.)

…

*Case 4: Production Opportunities in Presence of Capital Markets*

Ok. Let’s put it all together now. We bring our Case 2 and Case 3 together and get:

So, while we could reach a certain utility curve in Case 3, here we are able to do a little bit better: we are able to move to an even higher utility curve. And how does this come about?

Again, as always, we break the problem into parts. Tackle it step by step, right? We have the following two decisions to make to solve our consumption problem:

- How much to put in production opportunities? (Case 3)
- Of the amount left over, how much to put in the bank (Case 2), and how much to consume today (our subjective preferences)?

We start with the first one.

…

*The Production Decision*

We have two places to park our money in. We can either put it in the bank or in production opportunities.

That is, starting at $Y_0$, our current wealth, we can either do what we did in Case 2 and go along the bank line, or we can go along the production function. Which one should we choose?

You know enough microeconomics by now to understand that we should keep investing in one or the other until at the margin we are getting the same return from both. (If you don’t understand this you are on very shaky grounds – time to pick up your micro text.)

We can see that to the right of point $P^*$, the marginal rate of return from production opportunities is more than the bank return: the slope of the production opportunity curve is greater in magnitude than that of the bank line, i.e. $|MRT| > 1 + r$. And vice-versa to the left of point $P^*$.

So, we should go along the path of the production function and invest in production opportunities till we reach point $P^*$. After that, because the slope of the production function is less than the bank rate of return, we should not invest any more in production opportunities – at that point it makes more sense to put the money in the bank instead. So, the long and short of the argument is that whatever one’s subjective preferences, the *maximum* we should invest in production opportunities is till we reach point $P^*$, i.e. the amount $I^*$.

But what about consumption? Isn’t that our real decision problem?

…

*Borrowing Against Future Income: Present Value*

Before we address this question, we need a concept which I am sure none of you need to be taught. Formally speaking, this is what is called the idea of **Present Value (PV)** in finance.

The bank gives you a rate of return $r$, i.e. for every one unit of money you put in the bank you get $(1+r)$ tomorrow. The bank not only allows you to deposit money, but also to borrow from it. So if you borrow one unit from the bank today, you will have to return $(1+r)$ tomorrow. Let’s now pose the question: how much do you have to borrow today to *return* exactly one unit tomorrow?

Well, it’s a simple high school problem. You just scale today’s borrowing by a factor of $\frac{1}{1+r}$, i.e. you just borrow $\frac{1}{1+r}$, and when the time comes (tomorrow) return $1$.

We say, then, that the **PV** of $1$ tomorrow is $\frac{1}{1+r}$. That is, each unit tomorrow is worth a little less today, and that amount is $\frac{1}{1+r}$. An entrepreneur understands this all too well. PV is just an economist’s way of stating the proverbial refrain:

A bird in the hand is worth two in the bush

To summarize, if we are expecting one unit tomorrow, the existence of a bank allows us to borrow its PV today, which is $\frac{1}{1+r}$. When the time comes, we can use our expected income to return the unit we owe to the bank. This is called borrowing against future income.

…

*The Consumption Decision*

Ok, to our consumption decision problem. Choosing the production level, or the investment amount $I^*$, at $P^*$ gives us $Y_1^*$ tomorrow (as in the graph above).

Let’s use what we just learnt. Assuming we are living in a world without any uncertainties, we *know* that we are going to get $Y_1^*$ tomorrow if we invest $I^*$ today. So, we can now go to the bank and borrow against this future income of $Y_1^*$. And how much will the bank give us for that?

Well, that’s simple. Because the PV of a unit tomorrow is $\frac{1}{1+r}$ today, $Y_1^*$ is worth $\frac{Y_1^*}{1+r}$ today, and that’s exactly what the bank would be willing to offer us. That is, a bank allows us to increase our today’s consumption by borrowing against tomorrow’s earnings – the PV of $Y_1^*$, i.e. $\frac{Y_1^*}{1+r}$.

Remember that $P^*$ is the point where the marginal rate of transformation (MRT) from production just matches the ‘opportunity cost’ of putting that money in the bank, $(1+r)$. And as we argued, that is the *maximum* amount of money we should be investing in production opportunities. Any more than that and we are choosing an inferior solution to our production decision problem.

So, the total consumption possible today at the point of optimum production is:

$C_0^{\max} = (Y_0 - I^*) + \frac{Y_1^*}{1+r}$

You are welcome to check, but $C_0^{\max}$ is the farthest point we can reach on the x-axis. That is, any other point on the locus of the production function results in a solution that will be to the left of (or inferior to) $C_0^{\max}$.

That is, by starting out with the wealth $Y_0$ we have invested $I^*$, which gives $Y_1^*$ tomorrow, whose PV is just $\frac{Y_1^*}{1+r}$. That is, the maximum possible consumption today is $(Y_0 - I^*) + \frac{Y_1^*}{1+r}$.

…

*Net Present Value*

This is now time to introduce another important concept in finance.

How much did we invest? We invested $I^*$ in our production opportunity. And what did we get out of it? $I^*$ invested today gave us $Y_1^*$ tomorrow. The PV of the income from the investment tomorrow is $\frac{Y_1^*}{1+r}$.

The difference between the PV of our income from the investment and our investment outlay ($I^*$) is called the **Net Present Value**, or by its abbreviation, **NPV**. And our NPV with an investment of $I^*$ is:

$NPV = \frac{Y_1^*}{1+r} - I^*$

Again, you can show that any other level of production/investment results in a lower NPV. That is, **investment at the point where $|MRT| = 1 + r$ maximises the NPV**.
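If you want to convince yourself numerically, here is a sketch with an assumed concave production function $Y_1 = a\sqrt{I}$ (the function and numbers are made up, not from the lecture); the investment level satisfying the first-order condition beats every other level on a grid:

```python
import math

# NPV(I) = a*sqrt(I)/(1+r) - I for an assumed production function Y1 = a*sqrt(I).
a, r = 20.0, 0.05

def npv(I):
    return a * math.sqrt(I) / (1 + r) - I

# First-order condition: marginal return a/(2*sqrt(I)) = 1 + r
i_star = a ** 2 / (4 * (1 + r) ** 2)

# NPV at I* beats NPV at every other investment level on a grid
grid = [i_star * k / 10 for k in range(1, 21) if k != 10]
print(all(npv(i_star) >= npv(I) for I in grid))   # True
```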

…

*The Last Step: Back to the Consumption Problem*

But what if one wants to consume the same amount of money today that one was consuming when there was no bank – that is, the amount from Case 3? Well, in the presence of a bank one can easily do that. How? Simple – by borrowing against future income. In fact, why just that amount: with a bank one can actually attain *any* level of consumption today up to $(Y_0 - I^*) + \frac{Y_1^*}{1+r}$. And how is that?

Well, one doesn’t have to borrow against all of the future income $Y_1^*$, right? One can borrow against any fraction of the future income. So, if one wants to consume $C_1$ tomorrow, one can borrow the PV of the remaining $(Y_1^* - C_1)$, and the consumption today becomes $(Y_0 - I^*) + \frac{Y_1^* - C_1}{1+r}$ – again, exactly as wanted.

Since the choice of $C_1$ is arbitrary, to generalize: by borrowing partly/fully against one’s future income one can reach any point on the budget line through $P^*$.

Of course, one can also do the other way around. If we don’t want to consume today, we can *postpone* our consumption by *lending* – by depositing the money in the bank. Again, the same logic applies.

Remember, we still invest till the point $P^*$. And if we want to consume less than $(Y_0 - I^*)$ today, we put the remaining money in the bank. (To the left of $P^*$, the marginal return from the production opportunity is less than the rate of return from the bank, so it doesn’t make sense to invest beyond $P^*$.)

This gives us the following Fisher Separation Theorem:

In the presence of productive opportunities and capital markets, all consumers should choose the level of investment that maximises their net present value (reaching the farthest point on the abscissa: $(Y_0 - I^*) + \frac{Y_1^*}{1+r}$), irrespective of their individual subjective preferences. Having selected the level of investment that maximises their net present value (wealth), they should then borrow from / lend to the bank depending on how they want to plan (smooth) their inter-temporal (in between times) consumption.

…

**But what happens at the margin: What if we only have a single investment opportunity – as, for example, we did in the class?**

So far we have considered a continuous production *function* – that is, we implicitly assumed there is a continuous set of (or infinitely many) investment opportunities. We then argued that the NPV is maximised at the level of investment where $|MRT| = 1 + r$.

But what happens when we have only a single investment opportunity? What is the optimal investment decision criterion in that case? What we are essentially saying is that instead of a production function schedule/curve, we have just a point.

Let’s consider two different cases. The first is the case where the available investment opportunity, $A$, lies to the left of the “bank line”, i.e. as below:


The second case we consider is when it’s the other way around – that is, the investment opportunity, $B$, lies to the right of the “bank line”, i.e. as below:


It should be clear to you that opportunity $A$ is inferior to opportunity $B$. How? As earlier, just draw the ‘tangency line’ on the available investment opportunity ‘set’. Of course, in this case it would be trivial and would just translate into drawing a line parallel to the “bank line” passing through the points $A$ and $B$. What do we get? Let’s see:


So, if we choose investment $A$ – invest $I_A$ today and get $Y_1^A$ tomorrow – the total PV of our wealth is $(Y_0 - I_A) + \frac{Y_1^A}{1+r}$. On the other hand, if we choose investment $B$ – invest $I_B$ today and get $Y_1^B$ tomorrow – the total PV is $(Y_0 - I_B) + \frac{Y_1^B}{1+r}$. That is, the NPVs from investments in $A$ and $B$ respectively are:

$NPV_A = \frac{Y_1^A}{1+r} - I_A < 0, \qquad NPV_B = \frac{Y_1^B}{1+r} - I_B > 0$

That is, if we choose the first investment we end up with a PV less than our original wealth. On the other hand, if we choose the second investment we end up with a PV that is more than our original wealth. Investments with NPV < 0 are clearly not desirable then – we end up with less wealth than we started out with.

So if we have only a finite number of investment opportunities to choose from, we should only **choose the ones that have an NPV > 0**.

This is indeed a more realistic situation. Typically a manager has only a select set of investment opportunities available. In that case, the NPV maximisation rule boils down to a very simple criterion:

Select all investment opportunities that have NPV > 0.
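This criterion takes only a few lines of code. A minimal sketch follows – the two opportunities and the 10% rate are made-up illustrative numbers, not the ones from the class:

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] is the cash flow at time t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Two hypothetical one-period opportunities: pay 1000 today, receive X tomorrow.
opportunities = {
    "A": [-1000, 1050],  # 5% project return: below the bank rate
    "B": [-1000, 1200],  # 20% project return: above the bank rate
}
rate = 0.10  # assumed 10% bank rate

# The decision rule: accept exactly those opportunities with NPV > 0.
accepted = [name for name, cfs in opportunities.items() if npv(rate, cfs) > 0]
print(accepted)  # -> ['B']
```

Note that opportunity A is rejected even though it pays back more than it costs: its 5% return is beaten by simply lending at the 10% bank rate, which is what a negative NPV is telling us.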

…

**Assumptions in ‘proving’ the Separation Theorem**

- No uncertainty
- All consumers have the same information
- There are no transaction costs
- One can borrow and lend at the same rate of interest
- Capital markets are *complete* (we’ll see what that exactly means when we study the notion of market efficiency)

To lay the cards on the table, the theorem doesn’t work *exactly* (or as neatly) when there are transaction costs. Again, one does not test a theory by its assumptions; one tests it by its predictions. And clearly, when managers and entrepreneurs plan their investment and financing decisions, their intuition is not too far from the Separation Theorem. And that’s why you should understand what it means and where it is coming from.

Even though we ‘proved’ the theorem in a certain world, it turns out it holds even in the uncertain world if instead of a certain payoff you are willing to substitute its certainty equivalent in the problem. But that’s more than you should worry about at this stage – unless you are one of the two Finance area FPMs in the class :)

…

**Suggested Reading**

Jack Hirshleifer, *Investment Decision Criteria*, UCLA Working Paper 365, March, 1985. Available here.

This is as simple a technical note as you can find on the Separation Theorem and it’s written by a pioneer in the discipline. (Assumes a decent understanding of basic microeconomics though.)

## [PGP-I FM] The Present Value Rule

In our discussion today we considered the following three cases:

- *No Bank, No Investment Opportunity:* In the absence of a bank, the sum of consumption in both time periods must equal the total available wealth. What one consumes is an individual decision, but the sum of what is consumed ‘today’ and ‘tomorrow’ must equal total wealth (in our example, Rs. 5000). That is, *(Consumption Today) + (Consumption Tomorrow) = 5000*.
- *Only Bank, No Investment Opportunity:* In this case we saw that an impatient person could consume all of Rs. 5000 today, while a patient one could get Rs. 5500 tomorrow. Meaning that, when there is a bank, there is a ‘reward for waiting’, in that the patient person can consume an extra Rs. 500 tomorrow. We also argued that in this case it doesn’t make sense to simply add money today and money tomorrow, because there is an exchange happening between today and tomorrow, and only after adjusting for the exchange rate can we compare monies at two different points in time. So in this case we have *(1 + Interest Rate) * (Consumption Today) + (Consumption Tomorrow) = 5500*. This is one of the most important lessons in finance – one cannot simply add the absolute values of money today and money tomorrow; one needs to adjust for the interest rate.
- *Both Bank and an Investment Opportunity:* Here the lesson was that as long as the Net Present Value of the investment opportunity is positive, both the patient and the impatient should invest, and then use the bank to plan (smooth) consumption based on their preferences. This is what is called Fisher Separation.
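The ‘Only Bank’ case above can be sketched in a few lines. The Rs. 5000 wealth is from the example above; the 10% rate is the one implied by Rs. 5000 growing to Rs. 5500:

```python
r = 0.10            # implied by Rs. 5000 growing to Rs. 5500 at the bank
wealth_today = 5000

def c1_given_c0(c0):
    """Consumption tomorrow after consuming c0 today and banking the rest."""
    return (wealth_today - c0) * (1 + r)

# The fully patient consumer banks everything; the fully impatient banks nothing.
patient = c1_given_c0(0)       # Rs. 5500 tomorrow (up to floating point)
impatient = c1_given_c0(5000)  # Rs. 0 tomorrow
middle = c1_given_c0(2000)     # Rs. 3300 tomorrow after consuming Rs. 2000 today
```

Every choice of `c0` between 0 and 5000 picks a point on the same budget line – the bank lets each person pick the point matching their own patience.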

…

**Fisher Separation Theorem**

Let’s first state the theorem in words:

Given perfect and complete capital markets, the production decision is governed solely by an objective market criterion (represented by maximizing wealth) without regard to individual subjective preferences that enter into their consumption decision.

That is, the consumer’s problem of determining the optimal level of investment and optimal consumption stream can be separated in two parts:

- First, we choose the investment level that maximizes our wealth. This choice is independent of one’s preferences.
- Then, we select the consumption stream based on the maximized wealth.

This is what allows managers to invest in projects on their own merit, irrespective of the individual shareholders’ preferences. And that happens because the existence of capital markets allows shareholders to plan their consumption according to their preferences.

Let’s recap the example we did in the class.

We are given the following:

- Wealth:
- Investment opportunity:
- Interest rate:

Graphically, we ended up with a figure like this:


For a patient person, the investment opportunities allow him to transform into , and by putting the remaining into the bank he ends up at the end of the period.

For an impatient person there is an extra step of going to the bank to borrow against the promised after the investment, but she also ends up richer. How? Out of the that she has, she gives to her friend for the business, and on her friend’s credibility goes to the bank and borrows against the that she knows she will get from the business (which she can then return to the bank). So, in total she ends up with: .

So both are richer by and respectively. But then, as the graph above shows, they are equivalent. Being richer tomorrow by is equivalent to being richer by today.
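This equivalence across time is just discounting. A one-line sketch – the Rs. 550 gain below is an assumed illustrative figure, and the 10% rate is the one implied by the Rs. 5000 → Rs. 5500 class example:

```python
r = 0.10             # bank rate implied by Rs. 5000 growing to Rs. 5500
gain_tomorrow = 550  # assumed illustrative gain arriving tomorrow

# A gain of X tomorrow is worth X / (1 + r) today: Rs. 550 tomorrow is
# (up to floating point) the same as Rs. 500 today when the bank pays 10%.
gain_today_equivalent = gain_tomorrow / (1 + r)
```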

Of course, we could have also found this by completely ignoring the preferences, and simply considering investment in its own right:

So, if we apply the NPV rule, we should accept this opportunity, and then plan our consumption anywhere along line in the graph above by borrowing from (or lending to) the bank.

…

What follows is more than what we did in the class, and can be safely ignored. For those interested, carry on.

In general, if a consumer earns in both periods (let’s say a salary of and ) and also consumes in both periods (let’s say amounts and ), then we have the following identity:

which is simply another way of saying that:

As you can imagine, this holds generally for all kinds of income/consumption patterns, and not just over two years. Loosely speaking, à la physics, you can think of it as a kind of ‘conservation equation’.
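Writing the identity out explicitly (the symbols below are my own, since the originals did not survive the formatting: $y_0, y_1$ for income today and tomorrow, $c_0, c_1$ for consumption, and $r$ for the interest rate):

```latex
y_0 + \frac{y_1}{1+r} \;=\; c_0 + \frac{c_1}{1+r}
\qquad\Longleftrightarrow\qquad
c_1 \;=\; (1+r)\,(y_0 - c_0) + y_1
```

The first form compares everything in present-value (‘today’) terms; the second compounds everything to ‘tomorrow’. They say the same thing: the present value of what you consume must equal the present value of what you earn.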

…

*The Consumption Decision*

We may also use the above equation to find our consumption pattern given our income in both periods.

As an example, for the numbers we did in the class, we can think of it as earning today and tomorrow (one of the cases that we considered in the class, as those awake would recall :-) ). Then, if we want to consume today, we can find the consumption tomorrow as:

And you can check that indeed equals .
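A minimal numerical sketch of that check – the exact class figures did not survive in this copy, so the income and consumption numbers below are assumed for illustration:

```python
r = 0.10          # assumed 10% interest rate
y0, y1 = 5000, 0  # assumed income: Rs. 5000 today, nothing tomorrow

def consumption_tomorrow(c0):
    """c1 implied by the budget identity: c1 = (1 + r) * (y0 - c0) + y1."""
    return (1 + r) * (y0 - c0) + y1

c0 = 2000                        # suppose we consume Rs. 2000 today
c1 = consumption_tomorrow(c0)    # then roughly Rs. 3300 is left for tomorrow

# Sanity check: the present value of consumption equals that of income.
assert abs((c0 + c1 / (1 + r)) - (y0 + y1 / (1 + r))) < 1e-9
```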

…

Next we’ll spend time learning the mechanics of the time value of money – essentially Chapter 2 of the book. Most of it relies on knowing your high-school simple and compound interest, and summing simple geometric series. Try also to recall how Euler’s number, e, comes about as the limit of (1 + 1/n)^n as n grows without bound.
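As a quick numerical check of that limit (Python here purely for illustration):

```python
import math

# (1 + 1/n) ** n approaches Euler's number e as n grows.
for n in (1, 10, 1000, 1_000_000):
    print(n, (1 + 1 / n) ** n)

print(math.e)  # 2.718281828459045
```

Already at n = 1,000,000 the expression agrees with e to five decimal places – which is why interest compounded ever more frequently converges to continuous compounding at e^r.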

We’ll just put it all together in the context of valuing certain future cash flows.

…

## [WP] Open access temptations

Information science isn’t exactly my area of research, but it has been fun exploring the shady side of open access. Here is the abstract:

Backlash against “megapublishers” which began in mathematics a decade ago has led to an exponential growth in open access journals. Their increasing numbers and popularity notwithstanding, there is evidence that not all open access journals are legitimate. The nature of the “gold open access” business model and increasing prevalence of “publish or perish” culture in academia has given rise to a dark under-belly in the world of scientific publishing which feeds off academics’ professional needs. Many such “predatory” publishers and journals not only seem to originate out of India but also seem to have been patronized by academics in the country. This article is a cautionary note to early-career academics and administrators in India to be wary of this “wild west” of the internet and exercise due discretion when considering/evaluating open-access journals for scholarly contributions.

The working paper is here.