Design rules for business and technology architectures
14 March 2016
The convergence of new business models, new technologies and new regulations is giving many capital markets firms the impetus they need to resolve structural inefficiencies that have been endured for decades. This is the first real chance we have had to rethink the established organisational design of product silos on one axis and front-middle-back office functions on the other.
The matrix structures adopted by many banks do not exist in the natural world. They produce unaccountable suppliers within the matrix and competition between its buyers. So perhaps there are more adaptable designs that we can adopt when reshaping our firms today? If so, are there useful design principles that we can use to underpin them?
Extract-Transform-Load (ETL) technologies that try to impose order upon the chaos of piecemeal-developed systems have struggled to tackle real use-cases. The requirements of end-users often demand more sophisticated data processing than ETL can support, and changes to the underlying platforms are sufficiently frequent to make ETL expensive to maintain. Furthermore, as unstructured data stores and data fabrics have come to the fore, they have opened up new opportunities – and new threats – for organisational redesign.
The following guidelines have proven useful in designing processes, reorganising functions, developing new workflows and exploring the organisational impact of emerging data management technologies such as Hadoop and data ‘lakes’.
THE FUNCTIONAL BUSINESS STRUCTURE
All businesses are processes. They consume capital and raw materials and process them into products and services. This is easy to visualise in manufacturing, and although it is equally true of banking, it is less easy to identify and map out the process. Investment banks process capital from their own balance sheets and parcel it up into services and products. A buy-side firm uses clients’ capital and continually processes it to increase its value.
Sometimes the value is based upon the expertise of the firm – these processes are ‘core competencies’ – and sometimes it is based upon the infrastructure or scale of the firm and its ability to process high volumes – ‘core capabilities’. Either way, the value-generating processes are competitively differentiating and need to be thought of as profit centres. The common processes consolidated into shared services with a view to minimising duplication of effort (often candidates for outsourcing) can be thought of as cost centres.
Visualising the above description produces a model of a production line, with various actors working upon the capital being processed, according to their value-generating potential. Since the capital is continuously being reprocessed, the production line will be circular rather than linear. But the important point here is that the structure of the organisation must reflect its economic activity.
From this description we can start to question the wisdom of structures such as the front, middle and back office, when in truth a more granular arrangement better reflects the real shape of the organisation. We can seriously question the popular fad of horizontal and vertical organisational structures that create internal competition and ambiguous ‘dotted’ reporting lines.
Financial firms are comparatively hard to navigate as an employee, or to restructure during programmes of change, because their actual organisational structure rarely reflects their economic value-creating process.
Manufacturing firms have the advantage in that they are obliged to physically organise themselves around the processes that act upon their incoming raw materials and add economic value to create products – they need to organise themselves efficiently and it is easy to visualise this in practice.
Because capital is not a physical commodity, firms that process it have not been forced to consider their efficiency as much as if it were. The fact that some hedge funds physically organise themselves around their investment process (with proactive alpha-generation activities seated alongside those performing more defensive and risk-mitigating roles), while few traditional fund managers do, may help to explain the relative merits of their business models.
As flow businesses become the benchmark model for broker-dealers, as they consolidate platforms and functions, as they become more serious about outsourcing cost centres and creating interbank utilities, and as straight-through processing (STP) starts to drive process efficiencies, we see them morphing towards operating models that more accurately reflect their true economic activity.
However, there is no good reason why this evolution needs to be as piecemeal as it is: with clear design principles we can think more proactively about what our futures may look like.
THE RULES OF GOOD DESIGN
Establishing guidelines on what constitutes ‘good organisational design’ helps to remove personal agendas and enables people to concentrate on what is best for their firm. Here are some example design rules:
Rule 1: product-neutral processes
Many banks have developed a model of a standardised financial object (often called a Unified Financial Object), with a view to assisting their many business functions in becoming product or asset agnostic. The goal – that all platforms and functions can process every product – implies massive consolidation and the cost savings that result from it, as well as increased organisational adaptability and the ability to focus investment spend exclusively on core competencies and capabilities.
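As a rough illustration of the idea – a minimal sketch in Python, with field names that are purely illustrative rather than any bank’s actual model – the common economics live in typed fields while product-specific terms are pushed into a generic map, so downstream functions can process every product through one schema:

```python
from dataclasses import dataclass, field
from datetime import date
from decimal import Decimal

@dataclass(frozen=True)
class UnifiedFinancialObject:
    """Product-agnostic representation of a trade or position."""
    object_id: str                     # enterprise-wide identifier
    product_type: str                  # e.g. 'bond', 'irs', 'fx_option'
    counterparty: str
    trade_date: date
    notional: Decimal
    currency: str
    terms: dict = field(default_factory=dict)  # product-specific attributes

# A bond and a swap flow through the same downstream processing:
bond = UnifiedFinancialObject("T1", "bond", "ACME", date(2016, 3, 14),
                              Decimal("1000000"), "USD",
                              terms={"coupon": 0.025, "maturity": "2026-03-14"})
swap = UnifiedFinancialObject("T2", "irs", "ACME", date(2016, 3, 14),
                              Decimal("5000000"), "EUR",
                              terms={"fixed_rate": 0.011, "float_index": "EURIBOR6M"})
```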
Rule 2: no data enrichment after creation
Technically called ‘immutability’, this rule places tight demands on those functions that contribute data to the organisation, to ensure that it is correct at the outset. If data subsequently needs to be updated, then the update is a defined, separate transaction that is the responsibility of the data creator, rather than something the consumers of the data must discover for themselves. This rule places a heavy demand on the front office to be a team player in enterprise data creation.
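A minimal sketch of how this might be enforced, assuming an append-only store (the class and field names are illustrative): records are never updated in place, and a correction is a new, separately versioned transaction that only the original data creator may issue:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)            # frozen: records cannot be mutated
class TradeRecord:
    trade_id: str
    version: int                   # 1 = original creation, >1 = corrections
    created_by: str                # the data-creating function
    created_at: datetime
    payload: dict

class AppendOnlyStore:
    """Records are only ever appended; a correction is a new version."""

    def __init__(self):
        self._log = []             # the immutable history, in arrival order

    def create(self, trade_id, creator, payload):
        self._log.append(TradeRecord(trade_id, 1, creator,
                                     datetime.now(timezone.utc), payload))

    def correct(self, trade_id, creator, payload):
        """A defined, separate transaction owned by the data creator."""
        latest = self.latest(trade_id)
        if latest.created_by != creator:
            raise PermissionError("only the data creator may issue corrections")
        self._log.append(TradeRecord(trade_id, latest.version + 1, creator,
                                     datetime.now(timezone.utc), payload))

    def latest(self, trade_id):
        """Consumers read the latest version; they never patch data themselves."""
        return max((r for r in self._log if r.trade_id == trade_id),
                   key=lambda r: r.version)
```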
Rule 3: ownership of data
Every business function needs to be aware of its purpose and role in the context of the entire production line of capital – this is described in a separate paper, Enabling data. If the tests of ownership fail, this often reflects poor organisational design. A poor structure cannot be overcome through good operations; the structural flaws need to be corrected first.
Rule 4: minimise interdependencies
Optimal organisational design groups distinct roles together in such a way as to minimise the number of inter-function interactions and dependencies. Functions that have a lot of routine interactions with each other are candidates for consolidation – essentially these teams are performing the same or a similar job.
Conversely, too many roles within a business function can lead to functions that are broadly defined and become amorphous, unmanageable and unaccountable – how many banks suffer large back office administrative functions that have little incentive to operate leanly and are often the tail that wags the dog when enacting business change? These need to be decomposed into functions that reflect the economic activity of the organisation.
So when considering multiple candidate arrangements of business functions, it is often the configuration with the fewest interdependencies that is optimal.
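This comparison can even be made quantitative. A minimal sketch, assuming we have mapped routine interactions between roles as weighted pairs (the role names and weights are invented for illustration): group the roles into candidate functions and count the interactions that cross function boundaries – the grouping with the fewest crossings is the strongest candidate:

```python
# Routine interactions between roles, e.g. observed from workflow analysis
# (role names and interaction counts are purely illustrative).
interactions = {
    ("trade_capture", "confirmations"): 40,
    ("confirmations", "settlements"): 35,
    ("trade_capture", "risk_reporting"): 5,
    ("settlements", "risk_reporting"): 3,
}

def cross_boundary_load(grouping):
    """Sum the interactions that cross function boundaries."""
    owner = {role: fn for fn, roles in grouping.items() for role in roles}
    return sum(weight for (a, b), weight in interactions.items()
               if owner[a] != owner[b])

# Two candidate organisational designs for the same four roles:
design_a = {"ops": ["trade_capture", "confirmations", "settlements"],
            "risk": ["risk_reporting"]}
design_b = {"front": ["trade_capture", "risk_reporting"],
            "back": ["confirmations", "settlements"]}

print(cross_boundary_load(design_a))  # 8  -> few interdependencies
print(cross_boundary_load(design_b))  # 43 -> heavy cross-function traffic
```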
SUPPORTING TECHNOLOGY ARCHITECTURES
Rule 5: one-to-one relationship between business functions and technology services
Shared technology assets have long been promoted as a good economic idea – we called it ‘reuse’. This is fine at the non-functional level – infrastructure and commodity technology: database management systems, grids, communications. But when functional technology – class libraries, database instances, specific message types, application components – is shared between users, it introduces competing and conflicting goals whenever change is required. This makes the organisation as a whole resistant to change: upgrades need to be negotiated, requirements are horse-traded, testing becomes very complicated, nobody is satisfied and the whole firm pays for it.
CHALLENGER TECHNOLOGIES TO SOA ARCHITECTURES
The above design rules presuppose a migration towards a Service-Oriented Architecture. The use of services allows organisations to abstract business functions away from the names of the platforms that are undertaking them, and thus shift the mindsets of users and technologists away from a platform-centric or technology-centric view, towards one where they are thinking about the business function they are facilitating.
SOA is commonly realised as a set of discrete services with clear interfaces and well-defined inter-service messages, where each service has ownership of its own dedicated data store. Hence each service is often backed by a traditional RDBMS instance.
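A minimal sketch of that pattern, with illustrative service and message names: the service exposes a narrow, well-defined interface, and its data store is private to it, reachable only through that interface:

```python
import sqlite3

class SettlementService:
    """A discrete business service with a well-defined interface.

    The data store is owned exclusively by this service: no other
    service or team queries it directly, so the service can change
    its schema without negotiating with other consumers (Rule 5).
    """

    def __init__(self, db_path=":memory:"):
        self._db = sqlite3.connect(db_path)  # private: never shared
        self._db.execute("CREATE TABLE IF NOT EXISTS instructions "
                         "(trade_id TEXT PRIMARY KEY, status TEXT)")

    # --- the public interface: well-defined request/response messages ---
    def instruct(self, trade_id: str) -> dict:
        self._db.execute("INSERT OR REPLACE INTO instructions VALUES (?, ?)",
                         (trade_id, "PENDING"))
        self._db.commit()
        return {"trade_id": trade_id, "status": "PENDING"}

    def status(self, trade_id: str) -> dict:
        row = self._db.execute("SELECT status FROM instructions "
                               "WHERE trade_id = ?", (trade_id,)).fetchone()
        return {"trade_id": trade_id, "status": row[0] if row else "UNKNOWN"}
```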
However, the use of very large-scale data warehouses and of unstructured data in the likes of Hadoop now challenges conventional SOA thinking. And since SOA closely reflects the way we think about business functions, we want to see how these techniques can be used to improve the ways in which organisations and data strategies are designed.
UNSTRUCTURED DATA AND ORGANISATIONAL DESIGN
The promise of Hadoop et al is that data analysis, normally required early in the lifecycle of a platform, can be deferred until later. By ‘storing everything’, all possible future use-cases can be catered for.
This promise can hold true, but must not be used as an excuse to avoid data analysis – where a specific use-case is known, the cost saving from a traditional storage solution can be vast. For example, Hadoop is a good candidate technology for customer profitability analysis, since the potential number of factors to be modelled is indefinite. By storing every tick from every provider of liquidity, and every client interaction, we can gather everything we need for Transaction Cost Analysis (TCA) and client profitability analysis. But in practice, we can aggregate the same data with minimal loss of information and achieve a million-fold or greater reduction in data volumes using columnar databases.
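A minimal sketch of that aggregation, assuming raw ticks arrive as (timestamp, price, size) tuples: collapsing ticks into one-minute OHLCV bars preserves most of what TCA and profitability analysis need, while reducing volumes by orders of magnitude for liquid instruments:

```python
from collections import defaultdict

def to_minute_bars(ticks):
    """Aggregate raw ticks into one-minute OHLCV bars.

    `ticks` is an iterable of (epoch_seconds, price, size) tuples.
    A liquid instrument producing thousands of ticks per minute
    collapses to a single row per minute.
    """
    bars = defaultdict(list)
    for ts, price, size in ticks:
        bars[ts // 60].append((ts, price, size))

    out = []
    for minute, rows in sorted(bars.items()):
        prices = [p for _, p, _ in rows]
        out.append({
            "minute": minute * 60,
            "open": rows[0][1], "high": max(prices),
            "low": min(prices), "close": rows[-1][1],
            "volume": sum(s for _, _, s in rows),
            "tick_count": len(rows),   # retained for TCA-style analysis
        })
    return out
```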
But the potential use and misuse of unstructured data solutions goes a step further: unstructured data allows us to support business processes that are themselves unstructured, yet a healthy degree of challenge can result in those processes becoming more structured.
We are seeing this as a by-product of compliance with the MiFID II Systematic Internaliser (SI) regulation: segregation between pre-deal and dealing-floor activities is imposing a more structured approach to pre-deal workflow. Voice trading has never been touched by workflow technology, and expensive telephone monitoring has been deployed on the assumption that this business activity cannot be systemised. Now that the impetus exists to automate large portions of it (and indeed there is significant commercial benefit to be gained from doing so), the effort being applied to structuring this workflow is driving cheaper, more efficient and more structured technology solutions.
DATA WAREHOUSES
In our discussion of the functional business structure realised via SOA, we assumed a functional representation to be the best way to represent the business. But the other data management technology that brings organisational challenges and opportunities with it is the provision of enterprise-scale homogeneous data, particularly in risk management platforms.
This slices through the business horizontally – typically with data at the bottom and then layers of analytics, aggregation and visualisation. The model is well applied where a large user-base needs access to homogeneous bitemporal data that represents the organisation in very clear ‘states’ (e.g. last close of business, latest flash snapshot) and through well-accepted taxonomies, with very clear controls around the quality, meaning and interpretation of data. This is fundamentally an analytical use-case rather than an operational one, and its relationship to SOA is analogous to that between OLAP and OLTP technologies. Good use-cases exist in risk management, regulatory reporting, investment strategy development and capital management – all cases where highly consistent, enterprise-wide analyses need to be performed.
Often these platforms need to support sandbox functionality within production, where users can adjust the data and override the analytical functionality, alongside archiving, re-basing of time-dependent parameters and endless forms of roll-up logic. In essence, a very broad range of analytical yet non-operational needs are served on a very tightly defined data set.
So the parameters governing such a data model are:
Dimensions:
- navigation by taxonomy, or multiple taxonomies, or via aggregation logic,
- time steps on a bitemporal model,
- product classifications, counterparty/netting set,
- business unit and distribution details.
Measures:
- economic and market details,
- risk factors,
- operational details,
- sensitivities (Greeks and other trade-level measures),
- exposures (VaR, PFE, XVA...).
Against all of these factors an organisation needs to have clear definitions of what the data means and how it should be interpreted.
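A minimal sketch of the bitemporal lookup that underpins such ‘states’ (the field names are illustrative): every fact carries both a valid time – when it was true in the business – and a transaction time – when the platform learned of it – so any past report can be reproduced exactly:

```python
def as_of(facts, valid_time, transaction_time):
    """Return the facts in force at `valid_time`, as the platform knew
    them at `transaction_time` (e.g. 'last close of business').

    Each fact is a dict with `valid_from` and `recorded_at` timestamps
    plus a `key` identifying the measure (illustrative field names).
    """
    # Discard anything the platform had not yet recorded...
    known = [f for f in facts if f["recorded_at"] <= transaction_time]
    # ...then, per key, keep the latest version valid at valid_time.
    latest = {}
    for f in sorted(known, key=lambda f: (f["valid_from"], f["recorded_at"])):
        if f["valid_from"] <= valid_time:
            latest[f["key"]] = f
    return list(latest.values())

facts = [
    {"key": ("desk1", "VaR"), "valid_from": 10, "recorded_at": 11, "value": 1.2},
    {"key": ("desk1", "VaR"), "valid_from": 10, "recorded_at": 15, "value": 1.3},  # restatement
]
# Reproduces the original report: sees 1.2, not the later restatement.
print(as_of(facts, valid_time=10, transaction_time=12))
```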
The uses of this platform within an organisation also need to be well understood – its use in workflows, reporting, stress-testing, model development and so on. Deficiencies in the platform are likely to spawn a multitude of end-user computing workarounds, which can be avoided if care is taken up front with its design.
Such a platform requires sophisticated access and control mechanisms that support cost-attribution and runtime controls, to ensure that scarce compute resource is allocated appropriately and that all usage can be recharged according to product, client and purpose.
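One way such recharging might work – a minimal sketch with invented names, not a prescription: every request must carry product, client and purpose tags, and the compute it consumes is metered against them:

```python
import time
from collections import Counter

class MeteredPlatform:
    """Runs analytical jobs only when tagged, metering compute for recharge."""

    def __init__(self):
        self.usage = Counter()  # (product, client, purpose) -> seconds consumed

    def run(self, job, *, product, client, purpose):
        if not (product and client and purpose):
            raise ValueError("untagged requests are refused: no recharge basis")
        start = time.perf_counter()
        try:
            return job()
        finally:
            self.usage[(product, client, purpose)] += time.perf_counter() - start

platform = MeteredPlatform()
platform.run(lambda: sum(range(10**6)),
             product="rates", client="desk1", purpose="stress-test")
print(platform.usage)  # the basis for recharging by product, client and purpose
```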
In summary, while the SOA model supports good data management in the vast majority of a financial organisation, there are specific communities such as risk and finance that will benefit from large-scale homogeneous data platforms. Such communities will also reap rewards from other specific functions such as research, which will benefit from unstructured data management technologies.
The right tool needs to be chosen for the job, and both the tool and the organisation need to be designed to interact efficiently. The tools for data design are well known, and they can be integrated with some simple organisational design rules – such as those presented here – enabling the data platforms to work harmoniously in their real-world environment. Adhering to either set of rules without the other will very likely cause both pain and expense.