Operational resilience is currently a hot topic in Financial Services, largely because of the impact that COVID has had on how customers interact with financial institutions. Almost overnight, the financial services industry had to cope with a large volume of transactions moving to digital channels at the same time as its employees were forced to set up home offices so that they could continue to work remotely.
This was a period of ‘extreme disruption’ during which some firms struggled to deliver customer-facing services. It has prompted new regulations and guidelines that aim to ensure business services are less severely interrupted by any similarly disruptive events in the future.
Gartner defines operational resilience as the ability of an organisation to monitor the people, processes and technology that underpin key business services, and then to react when key tolerance levels are breached.
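In code terms, the core of that definition is simple: each monitored service has metrics with agreed tolerance levels, and a breach should trigger a reaction. The sketch below is a minimal, hypothetical illustration of that pattern; the metric names, values and thresholds are invented for the example and are not drawn from any regulation or product.

```python
from dataclasses import dataclass

# Hypothetical example: tolerance-breach checking for a monitored
# business service. Metric names and thresholds are illustrative only.

@dataclass
class ServiceMetric:
    name: str
    value: float       # current observed value
    tolerance: float   # breach when value exceeds this level

def breached(metrics):
    """Return the metrics whose values exceed their tolerance levels."""
    return [m for m in metrics if m.value > m.tolerance]

# Invented telemetry for a 'payments' business service
payments = [
    ServiceMetric("login_error_rate_pct", 0.4, 1.0),
    ServiceMetric("payment_latency_ms", 950.0, 500.0),
]

for m in breached(payments):
    print(f"ALERT: {m.name} = {m.value} (tolerance {m.tolerance})")
```

In practice the ‘react’ step would feed an alerting or incident-management workflow rather than a print statement, but the shape is the same: continuous telemetry, explicit tolerances, and an automatic trigger when one is crossed.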
The EU has published its Digital Operational Resilience Act (DORA) in draft form, which is focused on ensuring that firms can maintain resilient operations through any period of severe disruption. Deloitte estimates that, following a period of reflection for industry feedback, the final version of this legislation will be published in 2022.
In the UK, the FCA (Financial Conduct Authority) has recently published policy statement PS21/3, which requires all firms to identify their key business services and then document the people, processes and technology that support them. UK firms have until March 2022 to define their key business services and assess their impact tolerances, defined as the ‘first point at which disruption to an important business service would cause intolerable levels of harm to consumers or market integrity’.
But how should companies monitor Operational Resilience, so that they can react quickly to the next disruptive event (and yes, there will be one)? In our ‘Operational Resilience in Financial Services’ event we asked four experts to discuss the role that data will play in maintaining operational resilience.
Duncan Ash kicked the event off by discussing the need to break down organisational silos in order to ensure that all relevant data are available for analysis. He likened the ideal operational resilience monitoring environment to NASA mission control (which is interestingly sometimes called an operations centre) where a team of experts use telemetry from each piece of equipment to support decision making throughout the duration of a space flight. A similar approach is required to monitor key business services.
Will Cappelli recognised that the digital business of a financial services company is now effectively the business itself. He then noted that digital services are being rapidly migrated to the cloud and while the customer still sees a single application this is actually made up of multiple pieces that could be anywhere in the world or indeed solar system! He suggests that ‘to ensure the operational resilience of these digital stacks, we must be able to introduce new components extremely quickly. And should an incident occur, we need to be able to limit its impact.’ To do this, he advocates an Observability approach that includes the capture of all available data to facilitate AI which directs incident remediation and proactive management.
Simon Johnson then discussed the importance of using a risk-based approach to identify and prioritise response to security threats. Integral to this is the ability to orchestrate repetitive tasks, freeing up time for analysts to focus on higher value threats. Like Will, Simon highlighted the important role that cloud migration could play and he suggested that only by running your SOC (Security Operations Center) in the cloud can you be truly operationally resilient.
Finally, Haider Al-Seaidy discussed the growth of Financial Crime which has resulted from the migration of customers to digital channels. He introduced a framework that uses all relevant data to identify accounts or individuals who are engaging in Financial Crime. Like Simon, he also advocated a risk-based approach for identifying the most suspicious activities. To facilitate the most accurate detection, he argued that it is no longer appropriate to manage Security, Fraud and Financial Crime teams in silos because Financial Crime often cuts across these silos.
In conclusion, while it is true that Operational Resilience is partly a data problem, it is much more than this. Regulation will require financial services firms to use data in such a way that they can prove they have managed the risk associated with the delivery of key business services without any detrimental impact on consumers or the industry itself. To do this, they will need a holistic view of all risks associated with these services. And to get this holistic view, they will need all relevant data in a single platform, so that they can be alerted to any unexpected events and react quickly and appropriately. This will enable them to apply multiple lenses to drive analytics, including those that cover IT Operations, DevOps, Cyber Security and Financial Crime.
Ultimately, I suspect that the ability to build a control centre delivering the same decision-making capabilities as NASA’s will be the critical driver of success.
We know that many financial services firms will be in the early stages of defining their operational resilience frameworks and we would relish the opportunity to partner with them as they strive to become more resilient.