[XE] ET 8132

Across

6 – Interplanetary: Penalty + terrain [Anag.]
9 – Stigma: It’s + GM (genetically modified) + a
10 – Knitwear: Winter + a + K
11 – Acid rain: A + CI (channel islands) + drain (soakaway)
13 – Emetic: NursE + METICulously
15 – Leaves: Abandons = Leaves = Tea
17 – Cyborg: Cy (extremes of CannY) + Borg (tennis player)
19 – Smoker: SR (sister) + moke (donkey)
20 – Teddy boy: Tobey + dyed [Anag.]
22 – Riparian: Pair + rain [Anag.]
23 – Locate: Le (middle of AngLEsey) + o (old) + cat (tom)
26 – Strong language: Outre + gang + slang [Anag.]

Down

1 – Fifth columnist: Fist (hand) + th (the short) + column (article)
2 – Stag: Guns = Gats (upside down) = Monarch of the glen (painting on stag)
3 – Errata: Era (time) + Rat (traitor)
4 – Entirely: ENT + I + rely
5 – Stow: Sow (female pig) + t (time)
7 – Liking: ?
8 – Roaring forties: Shouting = Roaring + for + ties (matches)
12 – Drank: D (director) + ran (dashed) + K
14 – Ebony: Boyne [Anag.]
16 – Earnings: Earings (jewellery) + n (new)
18 – Stanza: interestS + TANZAnian
21 – Deluge: Dee (river) + Lug (?)
23 – Aloe: Aloe + Vera = soothing extract
25 – Chap: ChurCH + APpeal

Filtered historical simulation

Historical simulation (HS) has been one of the most popular ways of measuring Value at Risk (VaR) in financial institutions. Originally popularized by JP Morgan’s RiskMetrics document and then picked up by the Basel Committee on Banking Supervision (BCBS), the idea is grounded in the belief that knowing history is a good starting point for understanding what lies ahead. In that view, a histogram estimated from data over time can be used to calculate VaR for tomorrow or the day after. Of course, put this way, the approach does not seem very satisfactory. Knowing everything that has happened over time is not quite the same as knowing everything that can happen at a point in time in the future, but like it or not, that’s how such methods roll. Those who like HS argue that at least it is better than assuming a Normal distribution.

Anyways, that’s not the issue practitioners had with using HS. What they didn’t like was the in-built notion in HS that all history was equally important. So if you asked HS what the 5% VaR on the HDFC stock would be tomorrow, it would say, well, just order the returns in ascending order and pick the 5th percentile. Or maybe, what is more visually appealing, just pick it off its histogram. And it is a rather good idea. In fact, that is the sort of thing one usually does if one had to, say, find the height of the 5th shortest kid out of the 100 in one’s housing society.
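
In R terms, for a hypothetical vector of daily returns r (the name and the simulated data below are purely for illustration), this amounts to little more than a call to quantile:

# Plain historical simulation VaR: just a percentile of past returns
r <- rnorm(1000, mean = 0, sd = 0.015)     # stand-in data, purely for illustration
hsVaR <- quantile(r, 0.05)[[1]]            # the 5th percentile, i.e. the 5% 1-day HS VaR
# equivalently: sort(r)[ceiling(0.05 * length(r))]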

While this approach works for finding the kid with the 5th shortest height, it doesn’t quite work so well for finding the 5% VaR. What’s the problem, you say? Like with many other things in finance, it has to do with time and memory. When the only data you have is from the past, some of that could be from a very, very long time ago. And as times change, so do the rules of the game and technology and what not. So the importance of a stock return calculated in the early 2010s is not quite the same as that of a return calculated yesterday. After all, if the objective is to find the risk in holding the HDFC stock over the next day or week, surely what happened last week or last quarter carries more importance than what took place 10 years ago.

Another problem is that relying only on past data to form opinions about the future assumes that financial markets have no memory, that is, it is as if all returns are completely independent. Often when financial markets go crazy or get hit by a once-in-a-generation virus, volatility stays high and stocks fall for days at a stretch. Unfortunately, the fact that this happens throws many of the assumptions underlying HS out of the window.

Of course, everybody knows that the recent past should matter more and that memory matters for measuring future risks. And to be fair, it’s not that finance folks didn’t realize these things. But VaR itself only took off around the early 1990s, and it wasn’t taken all that seriously back then. I mean, statisticians and risk managers were hardly in demand in the early 1990s – well, not like now in any case. Anyways, over time, folks who were interested in HS took these problems seriously and figured they had to fix them. And different people came up with different ways to address the issue.

Weighted historical simulation

The attempts at modifying HS so that it relied more on recent data broadly came in three avatars:

  1. Use a window of only very recent data: Obvious and easy-peasy, but then two things happen: i) the sample size goes down, and ii) if something relevant, like a big systemic crisis, did happen some time ago, it would go missing from your history pretty soon (as time passes that event would no longer be part of the recent window).
  2. Weight data by age: This again is sort of obvious. I mean if you’d rather that more weight be given to recent data, well, just do that. Boudoukh, Richardson and Whitelaw were one of the first set of guys to flesh out this idea and showed how to apply it in practice in an article in Risk magazine in 1998. They said, let’s decide on some number \lambda between 0 and 1, perhaps close to 1, and weight historical observations by powers of \lambda. So, yesterday’s return could get a weight proportional to \lambda and the one before \lambda^2 and so on. That was the basic idea, and so now you could use all historical data, and if something happened long ago it wouldn’t matter much anymore, unless the things that happened were like really major. One needs to do a bit more work to identify the VaR (see the sketch after this list), but the idea is intuitive.
  3. Weight data by volatility: Around the same time when Boudoukh and others were thinking about modifying HS by using weights, some folks got the idea to think about weights a bit differently. They said, look, returns are already scaled so it shouldn’t matter that much which epoch they are from as long as they are not from a super old era, but the memory effect needs to be taken more seriously. If the whole method of HS goes for a sixer when returns over days are not independent, then that’s really bad news. Since this was an important development historically, it probably deserves a separate section of its own.
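
To make the age-weighting idea concrete, here is a minimal R sketch, assuming a return vector r with the most recent observation last and an arbitrary decay factor of 0.98; the function name and the exact weighting form (weights proportional to powers of \lambda, normalized to sum to one) are my reading of the idea, not code from the original article:

# Age-weighted (BRW-style) historical simulation VaR: a minimal sketch
# 'r' is a vector of returns with the most recent observation last;
# lambda is a decay factor chosen by the user (0.98 here is arbitrary)
brwVaR <- function(r, p = 0.05, lambda = 0.98) {
  n <- length(r)
  age <- rev(seq_len(n) - 1)                       # 0 for the latest return, n - 1 for the oldest
  w <- lambda^age * (1 - lambda) / (1 - lambda^n)  # weights decline with age and sum to 1
  ord <- order(r)                                  # sort returns from worst to best
  cw <- cumsum(w[ord])                             # cumulative weight along the sorted returns
  r[ord][which(cw >= p)[1]]                        # first return where cumulative weight crosses p
}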

Volatility weighted historical simulation

As it often goes with such things, the idea of applying volatility weights also came up around the same time. In 1998, Hull and White noticed that if they divided past returns by the volatility at that time, the memory effect disappeared to a large extent. So with volatility weighted data, HS could be salvaged and applied sensibly. But that’s awkward now, isn’t it? They got rid of the memory effect, but that meant taking away the effect of probably the most important input for measuring risk, that is, volatility. To bring volatility back, they came up with a hack.

They argued that, look, the objective is to find VaR over the next few days, so then what better volatility to use than the most recent one available. And the fact that volatility is persistent is great for us, meaning one could use today’s volatility and it would still be a very good proxy for volatility tomorrow. One needs to figure out how to measure volatility properly, but that’s hardly a bottleneck. Thanks to Robert Engle and friends, we have had that technology since the 1980s. For the task at hand one could use any of the many time series based models of volatility: EWMA, GARCH, EGARCH, you name it. In fact, it turns out that the choice of the volatility model does not even matter that much as long as it’s kind of reasonable.

If r_t is the series of historical returns, Hull-White suggested applying HS to the volatility weighted returns series r_t^{*} = r_t \times \hat{\sigma}_T/\hat{\sigma}_t , where \hat{\sigma}_t is the estimated volatility at time t from, say, a GARCH model and \hat{\sigma}_T is today’s (latest) estimate of volatility. So implementing volatility weighted HS (WHS) VaR requires two additional steps: i) dividing the returns by estimated volatility, and ii) multiplying by the latest estimate of volatility.
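
As a rough illustration of these two steps, here is a minimal R sketch using an EWMA volatility estimate; the return vector r (latest observation last), the function name and the decay factor of 0.94 (the usual RiskMetrics choice) are all assumptions made for the example:

# Volatility weighted HS VaR with EWMA volatility: a minimal sketch
# 'r' is a vector of returns with the most recent observation last
whsEwmaVaR <- function(r, p = 0.05, lambda = 0.94) {
  n <- length(r)
  sig2 <- numeric(n)
  sig2[1] <- var(r)                              # initialize with the sample variance
  for (t in 2:n) sig2[t] <- lambda * sig2[t - 1] + (1 - lambda) * r[t - 1]^2
  sigma <- sqrt(sig2)
  scaled <- r / sigma                            # step i): divide by estimated volatility
  quantile(scaled, p)[[1]] * sigma[n]            # step ii): multiply by the latest estimate
}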

Taking an example for the HDFC stock (sample period 2006 – 2019), as of December 31, 2019, 1-day HS VaR at 5% is -3.14%, and the corresponding WHS VaR estimates using EWMA and GARCH are -1.85% and -2.01% respectively. So which one is the best/most reliable? Well, that’s where things get a bit tricky. Going by the numbers for the HDFC example (and this is sort of the case in general), there is no secular trend, so we can’t say for sure if HS VaR is always less/more than the WHS VaR (for now ignoring the matter of point estimates versus confidence intervals). Unfortunately, that’s often how such things are. For now, we will not get into this and move on and try to see where filtered historical simulation fits into all of this.

Filtered historical simulation

Despite the attractiveness of WHS over HS in i) sampling from approximately uncorrelated data, and ii) giving more emphasis to the current volatility, practitioners still found it lacking in two ways. One, there was no systematic way of forecasting VaR beyond a day (short of assuming square root scaling which is strictly only applicable for IID Normal variates) and two, it wasn’t obvious how to extend the idea to portfolios or derivatives.

The variant of WHS that claims to do both, and which has become particularly popular in the last few years, is called filtered historical simulation (FHS). Introduced by Giovanni Barone-Adesi and co-authors in a series of papers around 1999, FHS applies volatility weighting in exactly the same way as EWMA/GARCH WHS does (FHS calls the rescaling step ‘filtering’), but it goes a step further and casts the exercise within an ARMA/GARCH framework. This allows for forecasting returns as well as volatility, and consequently VaR, within a formal time series model which is sort of internally consistent.

The second modification in FHS is that it adapts Bradley Efron’s idea (see chs. 7 to 9) of applying the bootstrap to a strip of contemporaneous data to simulate a vector of returns. The two are combined to simulate the distribution of filtered returns at any point in the future. Applying the idea to the HDFC stock, 1-day 5% FHS VaR using EWMA and GARCH come out to be -1.79% and -2.07% respectively.

That the FHS and WHS estimates are so close is not altogether unexpected. The only real differences between WHS and FHS are the use of the bootstrap to find the quantile and the forecasting of volatility 1 day ahead in the latter, neither of which amounts to much for finding 1-day VaR. The claimed attractiveness of FHS over WHS lies in i) its ability to forecast VaR beyond a day, and ii) finding VaR for portfolios and derivatives without relying on correlations.

One of the practical difficulties in forecasting VaR for portfolios and derivatives has been the instability/unavailability of correlations. The only model free method really available for linear portfolios was HS, which one could use to find VaR based on historical portfolio returns. Other parametric methods all require correlations. There is unfortunately no easy way to apply WHS either, short of using complicated and unstable multivariate GARCH models, which again depend on correlations.

The fact that FHS applies the bootstrap to filtered data provides a way out. FHS uses sampling with replacement on standardized residuals to create a distribution of returns. It turns out that this idea can be extended to portfolios as long as we sample the strips of standardized residuals for the different assets at the same time points. This seems like a neat hack, as prima facie one would expect that sampling residuals for all assets together would preserve the dependence between the standardized residuals.

The rest of the algorithm for forecasting remains the same as described above. Having obtained the path of returns and volatility, the portfolio VaR can then be calculated in the usual way. Once joint returns are simulated, they can be used to create joint price paths for valuing derivatives.
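
To show what sampling strips together might look like, here is a rough R sketch; the matrix Z of standardized residuals (one column per asset), the vector sigmaNext of latest volatility forecasts, the portfolio weights w and the function name are all hypothetical inputs assumed for illustration:

# Joint bootstrap of standardized residuals across assets: a rough sketch
# 'Z' is a T x k matrix of standardized residuals (one column per asset),
# 'sigmaNext' a length-k vector of latest volatility forecasts, 'w' portfolio weights
simPortfolioVaR <- function(Z, sigmaNext, w, nSim = 10000, p = 0.05) {
  idx <- sample(nrow(Z), nSim, replace = TRUE)              # sample whole rows (dates) together
  eps <- sweep(Z[idx, , drop = FALSE], 2, sigmaNext, '*')   # rescale each asset by its own volatility
  portRet <- as.numeric(eps %*% w)                          # simulated portfolio returns
  quantile(portRet, p)[[1]]                                 # portfolio VaR at level p
}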

FHS with AR(1)/GARCH(1, 1) (wonkish)

To illustrate how FHS is implemented, let’s assume that an AR(1)/GARCH(1, 1) model has already been fitted to the returns series \{r_t\}.

\begin{aligned} r_t &= \mu + \phi r_{t - 1} + \epsilon_t \\ \epsilon_t &= \sigma_t \nu_t \\ \sigma_t^2 &= \omega + \alpha \epsilon_{t - 1}^2 + \beta \sigma_{t - 1}^2 \end{aligned}

The distribution for r_{T + 1} is generated by applying bootstrap on estimated standardized residuals \hat{\nu}_t = \hat{\epsilon}_t/\hat{\sigma}_t. Denoting the first such bootstrapped sample by \{\hat{\nu}_{T + 1}\}, the next step is multiplying bootstrapped \{\hat{\nu}_{T + 1}\} with the forecast of \hat{\sigma}_{T + 1}, as:

\begin{aligned}  \{\hat{r}_{T + 1}\} &= \mu + \phi r_T + \{\hat{\epsilon}_{T + 1}\} \\ \{\hat{\epsilon}_{T + 1}\} &= \{\hat{\nu}_{T + 1}\} \times \hat{\sigma}_{T + 1} \\ \hat{\sigma}_{T + 1}^2 &= \omega + \alpha \epsilon_{T}^2 + \beta \sigma_{T}^2 \end{aligned}

The algorithm for generating a distribution for r_{T + 2} and beyond proceeds similarly, except that now there are as many possible values for \{\hat{\sigma}_{T + 2}\} as bootstrapped \{\hat{\epsilon}_{T + 1}\}.

\begin{aligned} \{\hat{r}_{T + 2}\} &= \mu + \phi \{\hat{r}_{T + 1}\} + \{\hat{\epsilon}_{T + 2}\} \\ \{\hat{\epsilon}_{T + 2}\} &= \{\hat{\nu}_{T + 2}\} \times \{\hat{\sigma}_{T + 2}\} \\ \{\hat{\sigma}_{T + 2}^{2}\} &= \omega + \alpha \{\hat{\epsilon}_{T + 1}^2\} + \beta \hat{\sigma}_{T + 1}^2 \end{aligned}

In terms of multiplication of arrays, the product \{\hat{\nu}_{T + 2}\} \times \{\hat{\sigma}_{T + 2}\} is to be interpreted as element-wise multiplication (\{\hat{\nu}_{T + 2}\} represents a separate bootstrap from \{\hat{\nu}_{T + 1}\}). Once \{\hat{r}_{T + 1}\} and \{\hat{r}_{T + 2}\} are generated, VaR at T + 2 at any percentile, say 5%, can be picked off as the 5th percentile of \{\hat{r}_{T + 2}\}.

Assuming that the returns data is stored in the first column of an R dataframe/xts object called returns, the following implements 1-day 5% HS, WHS and FHS VaR estimates using an ARMA(0, 0)/GARCH(1, 1) model (using the rugarch package).

library(rugarch)
num <- 10000                               # number of bootstrap draws for FHS

# plain HS: the 5th percentile of historical returns
hs <- quantile(returns[, 1], 0.05)[[1]]

# fit ARMA(0, 0)/GARCH(1, 1) with Student-t innovations
specGarch <- ugarchspec(mean.model = list(armaOrder = c(0, 0), include.mean = FALSE),
                        distribution.model = 'std')
fitGarch <- ugarchfit(specGarch, returns[, 1], fit.control = list(rec.init = 'all'))

# WHS: rescale returns by their estimated volatility, then scale up by the latest estimate
returns$volGarch <- sigma(fitGarch)
returns$scaledGarch <- returns[, 1] / returns$volGarch
whsGarch <- quantile(returns$scaledGarch, 0.05)[[1]] *
            as.numeric(tail(returns$volGarch, 1))

# FHS: bootstrap the scaled returns and multiply by the 1-day-ahead volatility forecast
bootGarch <- sample(as.numeric(returns$scaledGarch), num, replace = TRUE)
fhsGarch <- quantile(bootGarch, 0.05)[[1]] *
            as.numeric(sigma(ugarchforecast(fitGarch, n.ahead = 1)))
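
The code above stops at one day ahead. Just to give a flavour of how the recursion from the wonkish section carries on to day two, here is a rough sketch continuing from fitGarch above; the coefficient extraction and the hand-rolled recursion are my own shorthand, not part of any rugarch routine:

# two-day-ahead FHS: a rough sketch extending the 1-day code above
cf    <- coef(fitGarch)                                             # omega, alpha1, beta1 (no ARMA terms here)
z     <- as.numeric(returns$scaledGarch)                            # standardized residuals
sigT1 <- as.numeric(sigma(ugarchforecast(fitGarch, n.ahead = 1)))   # sigma_{T+1}
nu1   <- sample(z, num, replace = TRUE)                             # bootstrap for T + 1
nu2   <- sample(z, num, replace = TRUE)                             # independent bootstrap for T + 2
epsT1 <- nu1 * sigT1                                                # epsilon_{T+1}, one per draw
sigT2 <- sqrt(cf['omega'] + cf['alpha1'] * epsT1^2 + cf['beta1'] * sigT1^2)   # sigma_{T+2}, one per draw
rT2   <- nu2 * sigT2                                                # simulated r_{T+2} (zero mean, ARMA(0, 0))
fhsGarch2day <- quantile(rT2, 0.05)[[1]]                            # 5% FHS VaR at T + 2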

Installing QuantLib 1.18 and QuantLib-Python

A few years ago I had blogged about the steps for installing QuantLib (QL) and QL-Python in Windows. Back then the QL release was version 1.6, and since that time not only has the library expanded significantly in its scope (version 1.18 was released in March 2020), it has also become much easier to install across platforms. For example, one can now install a not-so-old QL version 1.12 directly from the Ubuntu 18.04 LTS repositories (Dirk Eddelbuettel’s PPA provides another alternative for more recent builds).

[Update (15/5/20)]: Luigi Ballabio shares that Python users may now install precompiled QL-Python directly from pip, requiring neither the base (C++) QuantLib build nor any other additional steps:

 
pip3 install --user QuantLib

This will work for those who are happy with what is already exposed in QL-Python and do not foresee the need to modify the wrappers. However, if one would like to play around with the library and perhaps even add to it, building from source probably is the way to go.]

For installing from a released version, the QL official installation page provides detailed instructions that work perfectly fine, but they correspond to somewhat older versions, and installing QL-Python in Windows still seems to be difficult at times. I thought I’d update my earlier post and consolidate the steps for version 1.18, and also provide screenshots for installing QL and QL-Python in Windows 10 for beginners using the free Visual Studio Community 2019.

QuantLib-1.18 and QuantLib-Python in Ubuntu

Remove older installations

If any older version of QL/QL-Python already exists on the computer, it may be a good idea to get rid of it before a fresh installation, and to uninstall QL-Python before uninstalling QL. First identify where the Python module is located and remove the egg. When uninstalling QL, first do a sudo make uninstall from the directory where it was ‘made’ (let’s call it $QL_SOURCE_DIR) and maybe also remove any remaining old libQuantLib objects.

 cd /usr/local/lib/python3.6/dist-packages/
sudo rm -rf QuantLib-*.egg
sudo rm easy-install.pth
cd $QL_SOURCE_DIR
sudo make uninstall
cd /usr/local/lib
sudo rm libQuantLib*

Fresh installation

 tar xzf QuantLib-1.18.tar.gz 
cd QuantLib-1.18 
./configure --enable-intraday 
make 
sudo make install 
sudo ldconfig 

This should install QL-1.18 and then one can compile the examples on the official page to ensure that everything works as it is supposed to (Boost is a prerequisite and can be installed as sudo apt install libboost-all-dev). The --enable-intraday flag is only required if RQuantLib is also going to be installed. For other options see ./configure --help.

The following steps should install the companion version of the QuantLib-Python module. The path to the Python interpreter would depend on the system installation in use. Either way, it is recommended to use python3 and tell ./configure where to look for it.

 tar xzf QuantLib-SWIG-1.18.tar.gz
cd QuantLib-SWIG-1.18
./configure PYTHON=/usr/bin/python3
make -C Python
sudo make -C Python install

QuantLib-1.18 and QuantLib-Python in Windows 10

I found it a lot easier installing QL on Windows today than five years ago. The instructions on the official pages are quite complete and self-explanatory, and the only thing I do here is describe the steps when installing with the latest free Visual Studio and add new screenshots. The prerequisites first.

  1. Install Visual Studio Community 2019 (VSC). It requires having a Microsoft user account, but is free otherwise.
  2. Install Boost from the binary installer (boost_1_73_0-msvc-14.2-64) available from sourceforge (I have only checked with version *-14.2-64)
  3. Install CMake and Swig (both optional)
  4. Get QuantLib-1.18 and QuantLib-SWIG-1.18

In what follows, it is assumed that Boost has been installed in C:\local\boost_1_73_0 and QuantLib folders are located at C:\QuantLib-1.18 and C:\QuantLib-SWIG-1.18.

QuantLib using VSC

  1. Open the QuantLib.sln in C:\QuantLib-1.18 using VSC: The Solution Explorer tab would prompt for installing extra components (screenshot 1). Follow the steps and install the Desktop development with C++ workload (screenshot 2). This may take a few minutes to an hour depending on the machine, so maybe time to get a beverage.
  2. Once the Desktop development with C++ workload is installed, go to the Property Manager (screenshot 3), choose the Release x64 configuration from the toolbar (important) and select all 21 projects (Ctrl-right click) and add path to boost in VC++ Directories (screenshot 4)
  3. Move back to the Solution Explorer tab, and link the path to the boost library in testsuite (screenshot 5)
  4. Build solution (F7; screenshot 6). It will show some harmless warnings on declaration of Bind placeholders which may be ignored (screenshot 7). If all goes well, the build should succeed (screenshot 8).
  5. To check QL is working, create a new project (screenshot 9), select Console App (screenshot 10), click Next and give it a name of your choice (say, TestQL1).
  6. Check that QL has been installed correctly: Copy the example test script from the official page to TestQL1.cpp, select solution properties (screenshot 11) and add paths to QL and boost to VC++ Directories (screenshot 12) and the Linker (screenshot 13). Running the program (F5) should produce the output as in screenshot 14.

QuantLib-Python using VSC

    1. It is no longer necessary to have Anaconda installed to have easy access to Python tools. With the heavy lifting already done by VSC, Python and its packages can be installed from within VSC itself (this is not to say it is superior to Anaconda, but it’s not necessary to have it installed). Install the Python development tools using the search bar (Ctrl-Q; see screenshots 15 and 16). After this step is complete, one can optionally also install Anaconda Python from within VSC (see screenshots 17 and 18; not done here).
    2. Add Python path to environment variables: With VSC it’ll likely be C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64, but it may depend on one’s installation. It is important that this be set correctly.
    3. Launch the x64 Native Tools Command Prompt (screenshot 19): The default VS developer command prompt may not work because, as the folks at Microsoft say, “The default build architecture uses 32-bit, x86-hosted tools to build 32-bit, x86-native Windows code. However, you probably have a 64-bit computer. When Visual Studio is installed on a 64-bit Windows operating system, additional developer command prompt shortcuts for the 64-bit, x64-hosted native and cross compilers are available. You can take advantage of the processor and memory space available to 64-bit code by using the 64-bit, x64-hosted toolset when you build code for x86, x64, or ARM processors.”
    4. The rest is as it says on the official installation page, and is reproduced below for version 1.18 for the sake of completeness (see screenshots 19, 20, 21 and 22 for the output after build, test and install respectively). After the installation is complete, one can go back to VSC, launch the Python environment and check that QL is available for use (screenshot 23).
 cd C:\QuantLib-SWIG-1.18\Python
set QL_DIR=C:\QuantLib-1.18
set INCLUDE=C:\local\boost_1_73_0
python setup.py build
python setup.py test
python setup.py install