CPS 230 and CrowdStrike – A timely reminder

With APRA’s CPS 230 approaching, financial services firms must strengthen their operational resilience.

Apromore Senior Advisor Nigel Adams draws on recent incidents to outline the essential strategies and questions companies can use to ensure compliance.

With less than twelve months until APRA’s new Prudential Standard, CPS 230, comes into force, it is essential to ensure that your organization is resilient to operational risks and disruptions. Given the pressing timeline, it’s an opportune moment to review progress and address some critical questions.

Recent headlines about major outages and security incidents, such as the CrowdStrike disruption, underscore the urgency of robust operational risk management. From conversations with industry practitioners, there is certainly a lot of activity underway: making sure the risk, obligations and controls register is up to scratch, testing controls, mapping processes, figuring out Business Continuity Plan (BCP) approaches and setting tolerance levels, as well as identifying material third- and fourth-party service providers.

While the Standard requires operational risk management to be integrated within the overall risk management framework, in my view, a good starting point would be to integrate operational risk management with day-to-day operations.

Broad Scope of CPS 230

The scope of the Standard is broad. Complying with the obligation requires not only coordination across many, many functions, e.g., product, risk, compliance, operations, and technology, but also a range of interoperable systems and tools. This will ensure that the output from the work is more than just a series of screenshots and snapshots stitched together in a PowerPoint presentation. Unfortunately, most of the traditional systems and tools used to tackle these tasks do not support an integrated approach and are no longer “fit for purpose” in a digitally transformed world.

Challenges with Current Systems

For example, thousands of hours are being spent manually mapping processes, yet we know that the output, i.e., the process maps, will almost certainly be inaccurate and out of date within a few weeks. What’s more, they will sit in a process map repository, at best connected to the risk, obligations and controls register by a hyperlink, there to show the regulator but not used by those responsible for executing the process.

We know that control testing is frequently sample-based and manual, yet, with financial services processes being so fragmented, the probability of finding the “needle in a haystack” with a sample-based approach is low for many services.

We also know that the same types of controls are widely deployed across an organization, e.g., four-eye checks, segregation of duties, etc., but the actual implementation of each control varies by process and business unit. This may be sufficient on a process-by-process basis, but the lack of consistency makes evaluating an enterprise-wide outage particularly difficult, given how entangled and interdependent so many financial services processes are.
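
Process mining addresses this by testing controls against the full event log rather than a sample. As a minimal sketch, assuming an event log with case_id, activity, and user columns (the schema and control definition here are illustrative, not a prescribed format or Apromore’s API), a four-eye check can be tested on every transaction:

```python
import pandas as pd

# Illustrative event log: one row per activity instance in a payment process.
# The schema (case_id, activity, user) is an assumption for this sketch.
log = pd.DataFrame(
    [
        ("P-001", "Create Payment", "alice"),
        ("P-001", "Approve Payment", "bob"),
        ("P-002", "Create Payment", "carol"),
        ("P-002", "Approve Payment", "carol"),  # creator approves own payment
    ],
    columns=["case_id", "activity", "user"],
)

# Four-eye check: the user who creates a payment must not also approve it.
# Testing the whole log removes the "needle in a haystack" sampling problem.
creators = log[log["activity"] == "Create Payment"].set_index("case_id")["user"]
approvers = log[log["activity"] == "Approve Payment"].set_index("case_id")["user"]
violations = creators[creators == approvers]

print(violations.index.tolist())  # ['P-002']
```

The same pattern scales to millions of events, and the output is a live list of violating cases rather than a quarterly sample.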

Key Questions to Evaluate Progress

Here are some questions to help you check progress:

  • Will your operational risk management framework integrate directly with your operational processes?
  • Will you get real-time updates on violations and potential violations?
  • How long does it take to identify an incident?
  • How long does it take to get to the root cause?
  • Will your control testing cover both design and operating effectiveness, simulated with actual transactional data in real time?
  • Do your approach, its underlying assumptions, and the supporting data focus on the “happy path”, or on the 85% of transactions that don’t follow it?
  • How long does it take to notify the relevant Financial Accountability Regime (FAR) executives after an incident has occurred?
  • Is your BCP approach tested at a granular, transaction-event level based on actual data generated by the process, or on high-level averages and aggregates?
  • How confident are you in the tolerance levels you have set? Are you statistically confident or “gut feel” confident? (See the sketch after this list.)
  • Do your third-party contracts provide for granular, event-level data access from both providers and their critically dependent providers?
  • How robust is your scenario planning? Can you quantify the impact of a fourth-party service provider outage, or is it based on the “best guess” of a subject matter expert?
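
On the tolerance-level question in particular, event data makes “statistically confident” concrete. Below is a minimal sketch, with synthetic durations standing in for real event-log data, of setting a tolerance level at a high percentile and bootstrapping a confidence interval around it (the percentile choice and the data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for end-to-end processing times (hours); real values
# would be computed from event-log timestamps, not high-level averages.
durations = rng.lognormal(mean=1.0, sigma=0.6, size=5_000)

# Candidate tolerance level: the 99th percentile of observed durations.
tolerance = np.percentile(durations, 99)

# Bootstrap a 95% confidence interval around that percentile, turning a
# "gut feel" number into a statistically grounded one.
boot = [
    np.percentile(rng.choice(durations, size=durations.size, replace=True), 99)
    for _ in range(1_000)
]
low, high = np.percentile(boot, [2.5, 97.5])

print(f"Tolerance (p99): {tolerance:.2f}h, 95% CI: [{low:.2f}h, {high:.2f}h]")
```

A wide interval is itself a finding: it means the process does not yet generate enough stable data to defend the tolerance level you have set.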

If answering these questions left you feeling somewhat unprepared, you may need to consider a compliance-oriented process mining approach.

 

Learn more about Apromore process mining for operational resilience in banking and financial services here.

 

# # #

Nigel Adams, Senior Advisor at Apromore, is a thought leader in service operations excellence, with deep experience in the banking sector. He has nearly 25 years of experience focused on creating enterprise value from operational improvement, risk management and performance optimization. Nigel is known for driving performance and transformational change at pace while leading large, multi-award-winning teams in complex delivery networks. In addition to a consulting career at KPMG, he has brought his skills to bear for leading banks, including NAB and ANZ, focusing on global payments and cash operations, financial crime, and business performance.

 
