Finesse the Plan with Process Mining

The annual budgeting or planning cycle can seem like a never-ending process. No sooner has the current year’s budget been locked in and cascaded down through the organizational hierarchy, than the timetable announcing the kick-off for next year’s planning process is launched. Not only is it a long process, but in mature financial services organizations it can also be extremely challenging. The finance function typically has the unenviable task of reconciling the “bottom up” ambit bids from across the various business units, departments, and teams with the Board’s/Senior Executives’ “top down” expectations. As the end-of-year deadline approaches, the horse-trading accelerates. Let the games begin!

An Organizational Tower of Babel

Company politics aside, one of the underlying issues is that budgets tend to be prepared functionally, and the relationships between the activities that various functional teams undertake to deliver an end-to-end service are not always taken into consideration. In financial services, this is made worse by a lack of operational data underpinning the relationship between revenue and cost. I have lost count of how many times I have had a conversation that goes along the following lines: “Volumes are growing by X%, our investment program will deliver productivity savings of Y%, leaving a resource shortfall of Z% to maintain service and quality levels, but we are tasked with reducing headcount by XX%. You’ll just have to find a way to make up the gap.” “Find a way” usually depends on the success of continuous improvement (CI) programs, but frequently the gap is far wider than a typical CI program can deliver in any one year. All too often the point seems to fall on deaf ears. Imagine the same situation in manufacturing: “Guys, we need to produce 500 cars per day. Unfortunately, we only have budget to purchase 475 engines, but do your best, I’m sure you’ll figure it out.”

Given this lack of transparency, along with organizational pressure to come up with a number that is acceptable but that your people believe is achievable, it is no surprise that the process quickly assumes Tower of Babel-like characteristics. Leaders of the various teams involved in marketing, selling, and delivering the relevant services squabble noisily over how to get the best outcome.

Spreadsheet Limitations

Trying to articulate the impact of changes in service mix, growth and so on upon resource requirements is not easy, especially when the underlying drivers of demand – the revenue “ask” in the budget – can change quickly as the planning process grinds to a conclusion. Even when the process metrics do exist, it is rare to see them gain visibility beyond the Operations function, and rarer still to see them integrated into financial driver trees. Moreover, their value is diminished by the role spreadsheets continue to play in the budgeting process in many organizations. Without specific add-ins or a significant commitment to coding and customization, spreadsheets simply cannot capture the nuance of production flow with volatile demand and unpredictable resourcing. Cells are populated with point data such as average productivity and average demand levels, but the inherent variability and the related distribution curves for both dramatically affect the outcome, and can give very misleading results in terms of the resources required on any particular day.
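To make the point concrete, here is a minimal sketch of the “flaw of averages” – not tied to any particular tool, and with every figure assumed purely for illustration. Capacity is sized from point averages, and the simulation then checks how often that capacity falls short once daily demand and handling time are allowed to vary:

```python
import random

# Hypothetical figures, for illustration only: size capacity from point
# averages, then check how often that capacity is breached once daily
# demand and handling time are allowed to vary.

random.seed(42)

AVG_DEMAND = 1000          # average service requests per day (assumed)
AVG_MINUTES_PER_CASE = 12  # average handling time per request (assumed)
MINUTES_PER_FTE_DAY = 450  # productive minutes per person per day (assumed)

# Headcount sized on averages alone
fte_from_averages = AVG_DEMAND * AVG_MINUTES_PER_CASE / MINUTES_PER_FTE_DAY

DAYS = 10_000
short_days = 0
for _ in range(DAYS):
    # Let demand and handling time vary around their averages
    demand = max(random.gauss(AVG_DEMAND, 150), 0)
    minutes_per_case = max(random.gauss(AVG_MINUTES_PER_CASE, 3), 0)
    workload_fte = demand * minutes_per_case / MINUTES_PER_FTE_DAY
    if workload_fte > fte_from_averages:
        short_days += 1

print(f"FTE sized on averages: {fte_from_averages:.1f}")
print(f"Days short of resource: {100 * short_days / DAYS:.0f}%")
```

With symmetric variation around the averages, roughly half of all days end up under-resourced – exactly the kind of result a single-cell average in a spreadsheet hides.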

Process Simulation – The Budgeting Rosetta Stone

What is needed is an organizational Rosetta Stone that allows the various leaders to translate operational and process drivers into financial outcomes, communicating clearly and effectively the impact of various scenarios on organizational goals without falling into the sub-optimal trap of responding to political agendas. This is where process mining comes in. If you can predict the resource requirements for a given level of demand, and you understand the demand profile, then you are well on your way to determining the cost base required. Better still, if you can visualize this – showing how service requests arrive throughout the day, week, or month, how and where bottlenecks build, where the path forks, which activities consume the most resources, and the impact of the variability – there is a much better chance of getting everyone on the same page and, hopefully, an easier and more constructive budget conversation.

This is precisely what you can do with process simulation. Not only are you modeling the exact process as it operates today, but the historical data also provides you with all the inputs you need to model the variability and its impact on process performance, including resource requirements. You can then answer “What If” type questions: if you increase revenue by X%, what are the implications for service levels if resources are held constant, or for resourcing if service levels are to be maintained?
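As a rough illustration of the kind of “What If” question a simulation answers – a generic, self-contained sketch, not Apromore’s simulation engine, with every rate assumed for illustration – the following replays a simple arrival-and-service model at today’s demand and at 20% growth, for two staffing levels:

```python
import random

def simulate(arrivals_per_hour, staff, mean_service_minutes,
             n_cases=20_000, seed=1):
    """Average waiting time for a simple FIFO queue with `staff` resources."""
    rng = random.Random(seed)
    free_at = [0.0] * staff          # minute at which each resource is next free
    clock, total_wait = 0.0, 0.0
    for _ in range(n_cases):
        clock += rng.expovariate(arrivals_per_hour / 60)   # next arrival (minutes)
        i = min(range(staff), key=lambda s: free_at[s])    # earliest-free resource
        start = max(clock, free_at[i])
        total_wait += start - clock
        free_at[i] = start + rng.expovariate(1 / mean_service_minutes)
    return total_wait / n_cases

# "What If" comparison: today's demand vs. +20%, with and without one extra FTE
for demand in (40, 48):              # requests per hour (assumed)
    for staff in (10, 11):
        wait = simulate(demand, staff, mean_service_minutes=12)
        print(f"demand={demand}/h, staff={staff}: average wait {wait:.1f} min")
```

Even in this toy model, a 20% lift in demand pushes the existing team close to full utilization and waiting times balloon, while one additional resource restores service – the trade-off the budget conversation needs to surface.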

Even better, you can demonstrate the likely impact of the investment and the CI program as each project goes live by adapting the simulation scenario: removing an activity, changing a gateway, adjusting resource costs, and so on. This also helps manage expectations when there is an assumption that the time to benefit is immediate. In reality, certainly in the projects I have been involved with, there is an initial drop in productivity as users get used to the new system, or bugs are still being dealt with during the warranty period, or the old and new systems are running in parallel and causing confusion. Without tools such as process mining, delayed benefit realization is a very difficult conversation to have with program sponsors keen to demonstrate a favorable return as soon as possible.
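A toy calculation, with all numbers assumed for illustration, shows how much a post-go-live ramp-up can erode year-one benefits compared with the common budget assumption that the full saving lands from month one:

```python
# Toy numbers, assumed for illustration: a productivity dip after go-live,
# ramping to the full saving over six months, versus the budget assumption
# that the full saving applies from month one.

monthly_baseline_cost = 100_000      # run cost before the change (assumed)
full_saving = 0.15                   # steady-state saving once embedded (assumed)
ramp = [-0.10, -0.05, 0.00, 0.05, 0.10, full_saving] + [full_saving] * 6

assumed_year_one = 12 * monthly_baseline_cost * full_saving
realized_year_one = sum(monthly_baseline_cost * s for s in ramp)

print(f"Saving assumed in the budget: {assumed_year_one:,.0f}")
print(f"Saving after the ramp-up:     {realized_year_one:,.0f}")
```

Updating the simulation scenario month by month makes that gap visible and defensible, rather than leaving it as a surprise at the first benefits review.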

Where to Start

As with most new approaches, the suggestion is to start small. Bring together the partners involved in a single end-to-end process. Introduce them to the process discovered through process mining and help them understand what lies behind the complexity of the spaghetti on the page. Then show the process in simulation mode, drawing on recent examples of when backlogs have appeared, or when a system incident has slowed or stopped production, and the impact this has had on output. Finally, introduce some changes, particularly to the demand profile, so that everyone can see the impact of higher demand and just how many additional resources are required, before and after the productivity savings have been banked!

 

Nigel Adams, Senior Advisor at Apromore, is a thought leader in service operations excellence, with deep experience in the banking sector. He has nearly 25 years of experience focused on creating enterprise value from operational improvement, risk management and performance optimization. Nigel is known for driving performance and transformational change at pace while leading large, multi-award-winning teams in complex delivery networks. In addition to a consulting career at KPMG, he has brought his skills to bear for leading banks, including NAB and ANZ, focusing on global payments and cash operations, financial crime, and business performance.