With APRA’s CPS 230 approaching, financial services must enhance operational resilience.
Learn from Apromore Senior Advisor Nigel Adams how companies can draw on recent incidents to identify the essential strategies and questions needed to ensure compliance.
With less than twelve months until APRA’s new Prudential Standard, CPS 230, comes into force, it is essential to ensure that your organization is resilient to operational risks and disruptions. Given the pressing timeline, it’s an opportune moment to review progress and address some critical questions.
Recent headlines about significant security breaches underscore the urgency of robust operational risk management. From conversations with industry practitioners, there's certainly a lot of activity underway: making sure the risk, obligations, and controls register is up to scratch; testing controls; mapping processes; figuring out Business Continuity Plan (BCP) approaches and setting tolerance levels; and identifying material third- and fourth-party service providers.
While the Standard requires operational risk management to be integrated within the overall risk management framework, in my view, a good starting point would be to integrate operational risk management with day-to-day operations.
The scope of the Standard is broad. Complying with it requires not only coordination across many functions, e.g., product, risk, compliance, operations, and technology, but also a range of interoperable systems and tools. This will ensure that the output from the work is more than a series of screenshots and snapshots stitched together in a PowerPoint presentation. Unfortunately, most of the traditional systems and tools used to tackle these tasks do not support an integrated approach and are no longer "fit for purpose" in a digitally transformed world.
For example, thousands of hours are being spent manually mapping processes, yet we know that the output, i.e., the process maps, will almost certainly be inaccurate and out of date within a few weeks. What's more, they will sit in a process map repository, at best connected to the risk, obligations, and controls register with a hyperlink, there to show the regulator but not used by those responsible for executing the process. We know that control testing is frequently sample-based and manual, yet, with financial services processes being so fragmented, the probability of finding the "needle in a haystack" with a sample-based approach is low for many services. We also know that the same types of controls are widely deployed across an organization, e.g., four-eye checks and segregation of duties, but the actual implementation of each control varies by process and business unit. That may be sufficient on a process-by-process basis, but the lack of consistency makes evaluating an enterprise-wide outage particularly difficult, given how entangled and interdependent so many financial services processes are.
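To make the contrast with sample-based testing concrete, here is a minimal sketch of how a segregation-of-duties control could be checked across every case in an event log rather than a sample. The file name, column names, and activity labels are hypothetical assumptions for illustration only; this is not Apromore's implementation or a prescribed regulatory schema.

```python
# Minimal sketch: exhaustive segregation-of-duties check over an event log.
# Assumes a hypothetical CSV export with case_id, activity, and resource columns.
import pandas as pd

# Load the event log (one row per executed activity).
log = pd.read_csv("payments_event_log.csv")  # hypothetical file name

# Hypothetical control: the person who creates a payment must not approve it.
create = log[log["activity"] == "Create Payment"][["case_id", "resource"]]
approve = log[log["activity"] == "Approve Payment"][["case_id", "resource"]]

# Pair the creator and approver for each case.
pairs = create.merge(approve, on="case_id", suffixes=("_creator", "_approver"))

# Every case is tested, not a sample: flag cases where one person did both steps.
violations = pairs[pairs["resource_creator"] == pairs["resource_approver"]]

print(f"Cases checked: {pairs['case_id'].nunique()}")
print(f"Segregation-of-duties violations: {violations['case_id'].nunique()}")
```

Because the check runs against the full event log, the same logic can be re-applied whenever the log is refreshed, rather than relying on a periodic manual sample.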
Here are some questions to help you check progress:
If answering these questions left you feeling somewhat unprepared, you may need to consider a compliance-oriented process mining approach.
Learn more about Apromore process mining for operational resilience in banking and financial services here.