

Navigating P&L Explain

How should large financial institutions attribute profit and loss (P&L) and how does this map to your organisation?

Michael Armeno 21 February 2018

Is your IT organisation equipped to provide traders with accurate P&L measures within their SLAs? Do the controllers and risk managers agree with the representation, and are internal auditors satisfied with the methodology?

To run a profitable, capital-efficient and compliant equities and FICC business, these questions must be answered.

While P&L Explain (alternatively known as P&L Attribution) is a fundamental process, the accuracy, representation and timeliness of the results are often a source of disagreement among departments – and that is even before the regulators become involved.

The following is a brief examination of the issues that contribute to unexplained P&L (Figure 1), along with methods to reduce unexplained P&L to zero and align front-office, risk and finance P&L.

 

[Figure 1: issues that contribute to unexplained P&L]

Data

Many institutions utilise several data vendors, since each has its strengths and weaknesses – be that timely price dissemination, accurate asset-servicing information or ETF composition data. While a single golden source for data seems logical, it is not always achievable due to timing considerations or varying data-granularity requirements (e.g. time series versus closing prices).

A second challenge is that institutions still tend to operate in a siloed fashion. Risk and finance typically use a single global daily snapshot, whereas trading desks calculate their daily P&L at the close of each business region. With risk and finance often residing in an institution’s home country, the time discrepancies from regional trading desks can wreak havoc when calculating accurate P&L Explain.

Institutions sometimes address this issue by moving to a global end-of-day that aligns the market data snapshot and open position snapshot. Additionally, some firms have established a service bureau model for data consumption. This solution tends to drive cost savings through eliminating multiple purchases of the same data within an institution.
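
To make the timing gap concrete, the sketch below expresses assumed regional close times in UTC so they can be compared with a single global snapshot. The close times are illustrative, and zoneinfo is part of the Python 3.9+ standard library.

```python
# A minimal sketch: normalise regional end-of-day close times to UTC so a
# global snapshot can be chosen. Close times here are illustrative.
from datetime import datetime, date, time
from zoneinfo import ZoneInfo

REGIONAL_CLOSES = {            # illustrative exchange close times
    "Tokyo":    (time(15, 0),  ZoneInfo("Asia/Tokyo")),
    "London":   (time(16, 30), ZoneInfo("Europe/London")),
    "New York": (time(16, 0),  ZoneInfo("America/New_York")),
}

def closes_in_utc(business_date: date) -> dict:
    """Express each regional close as a UTC instant for comparison."""
    return {
        region: datetime.combine(business_date, close, tz).astimezone(ZoneInfo("UTC"))
        for region, (close, tz) in REGIONAL_CLOSES.items()
    }

for region, ts in closes_in_utc(date(2018, 2, 21)).items():
    print(f"{region:9s} close = {ts:%Y-%m-%d %H:%M} UTC")
```

Seen side by side in UTC, the regional closes span many hours – exactly the discrepancy a single global end-of-day cut is meant to remove.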

Methodology

Institutions typically utilise one of two methodologies to explain P&L – Risk-based (the sensitivities method) and Revaluation (the scenario method).

The Risk-based methodology begins with the calculation of Greeks, then uses those sensitivities together with the actual one-day market moves to predict the expected one-day change in P&L.
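
As a concrete illustration, the sketch below applies the method to a single option position. The Greeks, market moves and actual P&L are illustrative figures, not market data.

```python
# A minimal sketch of the risk-based (sensitivities) method: predicted
# one-day P&L is assembled from the Greeks and the actual market moves.
greeks = {"delta": 52_000.0,   # P&L per unit spot move
          "gamma": 1_800.0,    # change in delta per unit spot move
          "vega":  35_000.0,   # P&L per vol point
          "theta": -4_200.0}   # P&L per calendar day

moves = {"spot": 1.35, "vol": -0.4, "days": 1.0}  # actual one-day changes

explained = {
    "delta": greeks["delta"] * moves["spot"],
    "gamma": 0.5 * greeks["gamma"] * moves["spot"] ** 2,
    "vega":  greeks["vega"] * moves["vol"],
    "theta": greeks["theta"] * moves["days"],
}

actual_pnl = 57_500.0                       # from the official revaluation
unexplained = actual_pnl - sum(explained.values())
print(explained, f"unexplained = {unexplained:,.0f}")
```

Whatever the sensitivities fail to capture lands in the unexplained residual.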

The Revaluation methodology executes a series of valuation scenarios – one for each factor driving the transaction value – with each scenario changing one factor over a one-day horizon. The final result is calculated by aggregating the impacts of these valuation scenarios.
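
A minimal sketch of this approach follows, assuming a toy price() function in place of the institution's pricing model. Walking the factors cumulatively (a "waterfall") is one common convention; it guarantees the contributions sum exactly to the full revaluation P&L.

```python
# A minimal sketch of the revaluation (scenario) method: reprice the
# position once per risk factor, moving one factor at a time from the
# prior day's market, and attribute the valuation change to that factor.
def price(spot, vol, tau):
    """Toy pricing function standing in for a full model."""
    return max(spot - 100.0, 0.0) + vol * (tau ** 0.5) * spot * 0.4

yesterday = {"spot": 101.0, "vol": 0.20, "tau": 30 / 365}
today     = {"spot": 102.4, "vol": 0.18, "tau": 29 / 365}

base = price(**yesterday)
attribution, market = {}, dict(yesterday)
for factor in ("spot", "vol", "tau"):        # move one factor at a time
    market[factor] = today[factor]
    new_value = price(**market)
    attribution[factor] = new_value - base   # P&L attributed to this factor
    base = new_value                         # cumulative ("waterfall") walk

print(attribution, f"total = {sum(attribution.values()):,.2f}")
```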

During spikes in volatility, large gamma moves among expiring options or outsized market moves, the risk-based methodology fails to attribute all P&L to the Greeks. For the affected positions, it may therefore be advisable to overwrite the risk-based P&L with the values calculated from the Revaluation methodology, and to record such instances on an exception report.
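
In practice this overwrite can be automated, as in the sketch below. The tolerance and position records are illustrative assumptions; anything breaching the tolerance is swapped to the revaluation figure and logged for the exception report.

```python
# A minimal sketch of the exception logic: where the Greeks-based estimate
# leaves material unexplained P&L, overwrite it with the revaluation
# figure and log the position. Threshold and records are illustrative.
THRESHOLD = 10_000.0   # absolute unexplained tolerance (illustrative)

positions = [
    {"id": "OPT-1", "risk_based": 41_000.0, "revaluation": 57_500.0},
    {"id": "OPT-2", "risk_based": 12_300.0, "revaluation": 12_150.0},
]

exceptions = []
for pos in positions:
    unexplained = pos["revaluation"] - pos["risk_based"]
    if abs(unexplained) > THRESHOLD:
        pos["reported"] = pos["revaluation"]    # overwrite with revaluation
        exceptions.append({**pos, "unexplained": unexplained})
    else:
        pos["reported"] = pos["risk_based"]

print("exception report:", exceptions)
```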

A P&L Explain process that is aligned across front-office, risk and finance will produce accurate P&L Explain back-testing for purposes of moving toward an Internal Model approach – and will ultimately reduce a desk’s capital requirements.

Pricing Model Optimisation

As institutions expand their exotics trading businesses, ensuring that models accurately explain P&L is a challenge. FRTB regulation stipulates that firms must closely monitor breaches; if a model exceeds its allowance, it will lose internal model approval and trigger additional risk charges.
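
The sketch below shows one way such monitoring might be wired up – a rolling count of back-testing breaches against a desk-level allowance. The 250-day window and 12-breach allowance are illustrative assumptions, not the FRTB calibration.

```python
# A minimal sketch of desk-level breach monitoring: count back-testing
# breaches over a rolling window and flag desks whose model approval
# would be at risk. Window and allowance are illustrative.
from collections import deque

WINDOW, ALLOWANCE = 250, 12          # illustrative parameters

class BreachMonitor:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def observe(self, actual_pnl: float, var_99: float) -> bool:
        breach = actual_pnl < -var_99         # loss exceeds the VaR estimate
        self.history.append(breach)
        return sum(self.history) > ALLOWANCE  # True => escalate / fall back

monitor = BreachMonitor()
print(monitor.observe(actual_pnl=-1_250_000.0, var_99=1_000_000.0))
```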

Risk and front office models typically have different trade-offs between speed and accuracy, with ‘front office-owned’ models often using approximations to perform the far greater number of revaluations required for measures such as historical Value-at-Risk (VaR) and Expected Shortfall (ES). This pressure on calculation time has only increased, and some institutions have chosen to utilise elastic grid computing to distribute risk calculations across a pricing framework.
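
As a sketch of the distribution pattern, the example below fans historical-scenario revaluations out over a local process pool standing in for an elastic grid. The scenario set and pricing function are illustrative stand-ins.

```python
# A minimal sketch of distributing historical VaR revaluations across a
# pool of workers; a grid client would replace the local process pool.
from concurrent.futures import ProcessPoolExecutor

def revalue(scenario: dict) -> float:
    """Reprice the portfolio under one historical scenario (toy model)."""
    return -1_000_000.0 * scenario["shock"]            # placeholder pricing

scenarios = [{"shock": s / 1_000.0} for s in range(-250, 250)]  # ~500 runs

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:                # swap in a grid client
        pnl_vector = list(pool.map(revalue, scenarios, chunksize=50))
    pnl_vector.sort()
    var_99 = -pnl_vector[int(0.01 * len(pnl_vector))]  # 1st-percentile loss
    print(f"historical VaR(99%) = {var_99:,.0f}")
```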

Technology

Data aggregation across risk systems has become increasingly complex. Many institutions have recognised that full consolidation might be too costly a step and have chosen to leverage existing investments in front office systems and pricing models to feed P&L explain processes.

Other institutions have re-architected their infrastructures to allow reuse of front office models by other areas of the firm through APIs – in an effort to align front office and risk for purposes of P&L Explain. While this investment is largely driven by regulatory compliance and capital requirements, we believe the ultimate pay-off will be future efficiency.
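
The sketch below illustrates the pattern, assuming Flask as the framework and a toy Black-Scholes function in place of a real front-office model; both choices are illustrative.

```python
# A minimal sketch of exposing a front-office pricing model to risk and
# finance through an HTTP API. Flask and bs_call() are stand-ins.
import math
from flask import Flask, jsonify, request

app = Flask(__name__)

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, vol, tau):
    """Toy Black-Scholes call price (zero rates) standing in for the model."""
    d1 = (math.log(spot / strike) + 0.5 * vol * vol * tau) / (vol * math.sqrt(tau))
    return spot * norm_cdf(d1) - strike * norm_cdf(d1 - vol * math.sqrt(tau))

@app.route("/price", methods=["POST"])
def price():
    p = request.get_json()   # e.g. {"spot": 101, "strike": 100, "vol": 0.2, "tau": 0.08}
    return jsonify(value=bs_call(p["spot"], p["strike"], p["vol"], p["tau"]))

if __name__ == "__main__":
    app.run(port=8080)
```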

P&L Explain is heavily dependent on upstream data sources, front-office risk engines, market data sources, valuation batches and finance reporting tools. All too often, existing Explain processes are so complex that the entire chain must be re-run after an upstream failure is cured, contributing to delays. An Explain process that responds intelligently to failure and reloads only the failed dependencies limits the impact.
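
The sketch below shows the idea in miniature: each step caches its result, so after an upstream failure is cured only the failed leg and its downstream steps re-run. The step names and the simulated failure are illustrative assumptions.

```python
# A minimal sketch of a dependency-aware Explain batch: completed steps
# are cached, so a cured failure reruns only the failed leg.
cache = {}                                   # results of steps that succeeded
attempt = {"valuations": 0}                  # simulate one transient failure

def load_market_data(): return "snapshots loaded"
def load_positions():   return "open positions loaded"
def revalue():
    attempt["valuations"] += 1
    if attempt["valuations"] == 1:
        raise RuntimeError("vendor feed incomplete")
    return "portfolio revalued"
def attribute(): return "P&L attributed"

STEPS = {                                    # step -> (dependencies, task)
    "market_data": ((), load_market_data),
    "positions":   ((), load_positions),
    "valuations":  (("market_data", "positions"), revalue),
    "explain":     (("valuations",), attribute),
}

def run(step):
    """Run a step and its dependencies, skipping anything already done."""
    if step in cache:
        return cache[step]
    for dep in STEPS[step][0]:
        run(dep)
    cache[step] = STEPS[step][1]()
    print("ran", step)
    return cache[step]

try:
    run("explain")                           # first pass fails in valuations
except RuntimeError as err:
    print("failure:", err)
run("explain")                               # re-run touches only the failed leg
```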

Operational Control

Regulators are increasingly focused on enhancing the control environment within institutions. Failures in trade capture, non-STP derivatives lifecycle events, or trade amendments all impact the P&L Explain process. Institutions have begun to re-engage programmes of work pertaining to STP and T0 governance to correct these issues.
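
One simple control of this kind is sketched below – flagging trades that are non-STP or were amended after the end-of-day cut. The timestamps and trade records are illustrative assumptions.

```python
# A minimal sketch of a T0 control check: flag trades captured or amended
# after the end-of-day snapshot, since they distort the next day's Explain.
from datetime import datetime

SNAPSHOT = datetime(2018, 2, 21, 22, 0)    # global end-of-day cut (UTC)

trades = [
    {"id": "T1", "last_amended": datetime(2018, 2, 21, 15, 4), "stp": True},
    {"id": "T2", "last_amended": datetime(2018, 2, 21, 23, 41), "stp": False},
]

breaks = [t for t in trades
          if t["last_amended"] > SNAPSHOT or not t["stp"]]
print("escalate to T0 governance:", [t["id"] for t in breaks])
```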

Dynamic Reporting

Proper presentation of P&L Explain is a vital step in ensuring timely and accurate interpretation of the results. Several institutions have been able to leverage prior investments in business intelligence applications, pivoting away from bespoke applications for report building and ad hoc what-if scenario generation. We recommend examining whether an institution’s risk reporting capabilities are consistent with the guidance in BCBS 239, the Basel Committee on Banking Supervision’s ‘Principles for effective risk data aggregation and risk reporting’.
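
The sketch below illustrates the business-intelligence route with pandas, pivoting an illustrative Explain result set by desk and risk factor rather than building a bespoke report.

```python
# A minimal sketch of ad hoc Explain reporting with a general-purpose
# tool: pivot attributed P&L by desk and risk factor. Data is illustrative.
import pandas as pd

explain = pd.DataFrame([
    {"desk": "EQ Derivs",  "factor": "delta", "pnl":  410_000},
    {"desk": "EQ Derivs",  "factor": "vega",  "pnl": -120_000},
    {"desk": "FICC Rates", "factor": "delta", "pnl":   95_000},
    {"desk": "FICC Rates", "factor": "carry", "pnl":   30_000},
])

report = explain.pivot_table(index="desk", columns="factor",
                             values="pnl", aggfunc="sum", fill_value=0)
print(report)
```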

Project Management

We have seen institutions dramatically change how and where they deliver projects. For highly complex projects such as P&L Explain, we have had success increasing momentum within organisations by establishing working groups closer to senior stakeholders. Also, since issues with P&L Explain overlap, it is important to prioritise them by impact and deliver the largest fixes first, so that the noise is cleared for further refinements.

We recommend institutions place an objective programme manager with oversight across multiple departments to increase accountability, improve communication with senior management and identify projects to close any remaining gaps.

Conclusion

We recommend starting with a review of the client’s operating model and data model as they pertain to P&L Explain. Data taxonomy, timeliness of data, global desk aggregation and validation controls must all be agreed before an institution can begin complex data transformations.