
FRTB (…they're not kidding about the 'fundamental' bit)

23 November 2015

This is the first in a series of articles discussing the impact of the Fundamental Review of the Trading Book (FRTB) regulation on the risk technology landscape. While the rules of FRTB have not yet been finalised, much of the architectural and procedural impact can already be assessed.

VAR TO ES

The following is an assessment of some of the changes implied by moving from Value at Risk (VaR) to Expected Shortfall (ES). Firstly, a quick explanation…

Broadly speaking this is a statistical shift: moving from reporting a single point at a given confidence level over a given time horizon to reporting the average of all the points beyond it. As a real-world example, a one-year historical simulation with a one-day time horizon gives roughly 250 data points. Ranking those results from best to worst, a 99% VaR would be roughly the 247th/248th result (the second or third worst loss). The 97.5% ES equivalent would be the mean of all results from roughly 244 to 250 (the worst six or so). It's intended to capture fat tails.
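To make that concrete, here is a minimal sketch in Python. The simulated P&L stands in for a real historical-simulation run; the distribution and scale are assumptions, not market data.

```python
import numpy as np

# 250 simulated one-day P&L outcomes standing in for a one-year historical
# simulation. The normal distribution and scale are illustrative assumptions.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=250)

# 99% VaR: the loss at the 1st percentile of the P&L distribution,
# i.e. roughly the second or third worst result out of 250.
var_99 = -np.percentile(pnl, 1)

# 97.5% ES: the average loss beyond the 2.5th percentile,
# i.e. the mean of roughly the worst six results.
cutoff = np.percentile(pnl, 2.5)
es_975 = -pnl[pnl <= cutoff].mean()

print(f"99% VaR:  {var_99:,.0f}")
print(f"97.5% ES: {es_975:,.0f}")
```

For a well-behaved distribution the two numbers land close together; the gap opens up when the tail is fat, which is exactly what ES is designed to capture.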

With that out of the way, there are a couple of major ways this affects risk technology systems. The first is the volume of data. It's not just vanilla ES that's required: Stressed ES will replace Stressed VaR, and there's a new dynamic set of liquidity horizons that will need to be applied. I've seen estimates ranging from a three-fold increase in calculation volume up to thirty-fold!
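As a rough counting sketch (the dimension sizes below are assumptions for illustration, not the final rule text), it is easy to see where multipliers of that order can come from:

```python
# Back-of-envelope arithmetic showing how ES-style runs multiply relative to
# the old VaR / Stressed VaR pair. All counts are illustrative assumptions.
liquidity_horizons = 5    # e.g. 10/20/40/60/120-day buckets
calibration_periods = 2   # current period and stressed period
portfolio_slices = 3      # e.g. full portfolio plus a couple of risk-class cuts

es_runs = liquidity_horizons * calibration_periods * portfolio_slices
print(f"ES-style runs per day: {es_runs}")   # 30 under these assumptions
print("versus the handful of VaR and Stressed VaR runs previously")
```

Change any of those counts and the multiplier moves, which is why the estimates vary so widely.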

All of which points to the fact that calculating and storing the new data will be a challenge, and that challenge will be discussed in a follow-on piece. First, though, it is key to understand what that new data encompasses.

ES EXPLAIN

VaR is already not a particularly clean metric for analysis and explanation. It can’t be directly aggregated (the sum of your 99/1 VaR at lower hierarchy levels is not equal to your 99/1 VaR at a higher level). It's not immediately tradable (you can't make just one trade to decrease your VaR as you could, say, for FX risk). You have to drill into the scenario details to work out what market data has contributed to the exposure. As many a Risk Manager will know, it's rarely a fun topic of discussion with traders, even if you have an effective analysis tool.
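The aggregation point is easy to demonstrate with a toy example (simulated desk P&L, assumed independent; a sketch, not a real portfolio):

```python
import numpy as np

# Two desks priced over the same 250 scenarios. P&L adds per scenario,
# but the percentile of the sum is not the sum of the percentiles.
rng = np.random.default_rng(1)
desk_a = rng.normal(0, 1_000_000, size=250)
desk_b = rng.normal(0, 1_000_000, size=250)
portfolio = desk_a + desk_b

def var_99(pnl):
    return -np.percentile(pnl, 1)

print(f"Sum of desk 99% VaRs: {var_99(desk_a) + var_99(desk_b):,.0f}")
print(f"Portfolio 99% VaR:    {var_99(portfolio):,.0f}")  # typically lower, thanks to diversification
```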

ES is even more opaque. Effectively, rather than just one scenario to understand, you have to weigh the impact of five or six and drill into them (or extract their underlying information, combine it and re-analyse). It's subject to the volatility of the ultimate extreme of the curve (which, to be fair, is part of the point), but as a result your process is highly susceptible to an erroneous calculation that would otherwise have been ignored. Again, this is part of the point, but it will certainly slow down daily reporting until the process is streamlined.
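In practice the 'explain' burden looks something like this (again a sketch with simulated data; the scenario identifiers are placeholders):

```python
import numpy as np

# Under VaR there is roughly one scenario to drill into; under 97.5% ES there
# is a whole tail of them, any one of which can move the headline number.
rng = np.random.default_rng(2)
scenario_ids = np.arange(250)              # stand-ins for historical scenario dates
pnl = rng.normal(0, 1_000_000, size=250)

order = np.argsort(pnl)                    # worst P&L first

var_scenario = scenario_ids[order[2]]      # roughly the 3rd worst of 250 drives 99% VaR
tail_size = int(np.ceil(250 * 0.025))      # roughly the worst six or seven drive 97.5% ES
es_scenarios = scenario_ids[order[:tail_size]]

print(f"Scenario driving 99% VaR:   {var_scenario}")
print(f"Scenarios driving 97.5% ES: {list(es_scenarios)}")
```

One bad revaluation in that tail now feeds directly into the reported number rather than sitting unnoticed beyond the VaR point.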

Now, if your processes and architecture are aligned to BCBS 239 principles then you should have a simple tool to explain and trace your data lineage, and this can be reused here. On the off chance that you don't have that – a radical thought, I know – we need to look at alternatives.

ANALYSIS VERSUS REPORTING

...And this is where the challenge really heats up. There are quite a lot of data analysis and reporting tools on the market, ranging from big enterprise reporting solutions such as Oracle BI and Cognos to more flexible analysis tools such as Tableau and QlikView. The trouble is the compromise.

OBI and Cognos really require an IT team to set up, configure and administer reports – at least if your data model is complex – and as a result are not great for fast-changing analysis. Frankly, some of them struggle with basic banking staples such as hierarchical reporting. Self-service, user-friendly tools such as QlikView and Tableau are great... as long as you only ever want to view your data in those tools. Getting data out for reporting to management is irritatingly hard. I've seen some solutions claim to be able to bridge this analysis/reporting gap (e.g. MicroStrategy), but even they require a huge investment in environment configuration.

BUILD VERSUS BUY

Many organisations have been shying away from building their own reporting solutions for a long time now, and rightly so – the build and maintenance costs can be exorbitant. But the compromised nature of market solutions, the substantial increase in complexity inherent in FRTB and the traceability required by BCBS 239 are causing some G-SIBs and D-SIBs to reconsider.

HTML5-based, browser-delivered GUIs are pretty easy to knock up with a small team of skilled technologists who can tailor them to a complex data model and respond rapidly to urgent reporting requirements. The amount of code required for such an interface is far less than in the bad old days of thick clients on desktops, minimising maintenance costs, and the open nature of the technology decreases the risk of building down a (Silverlight, Flex) dead end.

Previously the advantages of commoditised applications were presented as a fait accompli on cost against a built solution. The compromises of bought applications, however, have severely limited the ability to realise value. Looking at a more sophisticated cost/value approach to projects, perhaps build's time has come again...
