Our ‘North Star’: CBA devs on track to deliver customer-ready features ‘in minutes’

Helen Lau, Banking Summit

The Commonwealth Bank of Australia’s (CBA’s) software engineering team is well on its way to delivering on a nearly two-year program to fast-track feature development – “from developers and machines to customers” – in just minutes, says chief engineer Helen Lau.

Speaking at FST’s Banking Summit 2023, Lau said the bank had already “significantly reduced” its delivery timeframe – or “lead time to change” – since she joined the bank 12 months ago.

The veteran software engineer was recruited by CBA with the objective of delivering a “revolutionary” transformation of the bank’s software development cycle, from “months to minutes”, she said.

With a professional background spanning the technology, resources, retail, and telco sectors, Lau said she quickly recognised, upon joining a financial services firm for the first time, the immovable hurdle that is regulatory compliance within a bank’s development cycle.

“[In banking] we have a lot of checks and balances. One of the big learning curves for me was that it’s not actually the developers that are slow on shipping features into production. It’s the entire regulatory compliance [process], those highly regulated steps – those line one, line two, line three risks – and in checking off all those processes.”

However, CBA identified areas in the dev process still ripe for change. Following an internal survey of its engineers – numbering some 8,000 staff – CBA’s tech team pinpointed multiple capabilities they felt were missing but necessary to fast-track the feature development process.

The engineers’ three most pressing demands were: 1) local admin rights (that is, the ability to run code on local machines with full privileges); 2) the ability to easily share code and collaborate with external vendors and internal developers; and 3) unrestricted internet access.

“When I came into the bank, I realised that full internet access is not a thing that everyone can have when using a bank’s laptop,” Lau said.

“This is due… to the mandated compliance checks we need to have on our laptops, because that same laptop can connect into production to see customers’ data.”

While the engineers’ requests were reasonable – if challenging – to implement, they did require additional protections to secure sensitive datasets. This began, first and foremost, with the adoption of a Zero Trust-backed identity framework.

“We carved them out within the infrastructure, so they’re not touching the bank’s production system. There’s zero chance of data leakage, compromise or breach that could happen after granting [our engineers] this [extended] access.”

In the past, where data centres dominated enterprise operations, a firewall was likely sufficient to secure digital assets, Lau said.

However, one small opening in the firewall can expose the entire network to a potential breach. “Once you open the [IP and port] door, it’s open forever”, Lau said. “You can’t shut it down, and you can’t validate.”

Today, with applications, development tools, and data overwhelmingly spread across multiple public cloud hosts, Zero Trust (“trust no one – only trust this application if you have the right authentication token coming in with this trusted identity provider or certificate issuer”) is fast becoming the prevailing security model.

“Think of public cloud as diving into the open ocean – you shouldn’t trust anyone that comes into your system,” Lau said. “At the front of your mind, you need to think, ‘If I’m in public cloud, the chance of being hacked is anytime, so how can I secure my application by ring-fencing?’.”
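To make that model concrete, the following minimal sketch shows what a Zero Trust check – “only trust a request that carries a valid token from a trusted identity provider” – could look like in practice. It assumes the open-source PyJWT library; the issuer URL and audience are hypothetical, not details of CBA’s implementation.

```python
# Illustrative Zero Trust gate: reject any request unless it carries a token
# signed by the trusted identity provider. The issuer URL, audience and use
# of PyJWT are assumptions for this sketch, not CBA specifics.
import jwt                      # PyJWT
from jwt import PyJWKClient

TRUSTED_ISSUER = "https://idp.example.com"   # hypothetical identity provider
EXPECTED_AUDIENCE = "payments-api"           # hypothetical application audience

jwks_client = PyJWKClient(f"{TRUSTED_ISSUER}/.well-known/jwks.json")

def authorise(request_token: str) -> dict:
    """Return the verified claims, or raise if the token is untrusted."""
    signing_key = jwks_client.get_signing_key_from_jwt(request_token)
    return jwt.decode(
        request_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=TRUSTED_ISSUER,   # anything not issued here is rejected
    )
```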

Additionally, on the security side, provisioned engineers’ laptops were set up on a network entirely segregated from the rest of the organisation (i.e. ‘network segmentation’). All laptops, Lau confirmed, are also SOC 2 compliant.

Making fast-tracked development cost-effective

The modern DevOps process increasingly relies on the adoption of public cloud infrastructure.

As such, ‘cloud-first’ thinking (that is, rearchitecting software for a cloud operating environment) has rapidly become the prevailing orthodoxy in enterprise software development. For Lau, simply “lifting and shifting everything out of your data centre into a public cloud” not only needlessly carries legacy baggage into the new environment, but also serves as an enormous and costly drain on resources.

“If you do that blindly, I can guarantee your operation costs on your hosting will triple.”

“We’re saying, when moving to the cloud, you have to rethink how you architect your applications. That’s part of the tooling [and] thinking of migration.”

Across a three-to-five-year investment horizon, Lau stressed, the cost of provisioning hardware for a data centre is in fact lower than an equivalent pay-as-you-go public cloud bill. The real benefit – and, indeed, cost benefit – of cloud for organisations is realised through its ability to rapidly scale up and down as and when needed.

“What people always forget with an on-prem data centre [is that] you always have to buy your hardware catered for the max workload. When you move it to the cloud, however, you have auto-scaling, so the cost can go up and down as you go.”
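A back-of-envelope sketch illustrates the point. The figures below are purely hypothetical – they are not CBA’s numbers – but they show why a fleet sized for peak load around the clock costs far more than a bill that tracks actual demand.

```python
# Illustrative cost comparison: a fleet sized for peak load 24/7 versus an
# auto-scaled fleet that tracks average demand. All figures are assumptions.
PEAK_SERVERS = 100               # on-prem must be sized for the busiest hour
AVERAGE_SERVERS = 30             # what the workload needs most of the time
HOURLY_COST_PER_SERVER = 0.50    # assumed rate per server-hour
HOURS_PER_YEAR = 24 * 365

# Sized for peak: you pay for the full fleet around the clock.
sized_for_peak = PEAK_SERVERS * HOURLY_COST_PER_SERVER * HOURS_PER_YEAR

# Auto-scaled: the bill tracks the average fleet, not the peak.
auto_scaled = AVERAGE_SERVERS * HOURLY_COST_PER_SERVER * HOURS_PER_YEAR

print(f"Sized for peak: ${sized_for_peak:,.0f} per year")
print(f"Auto-scaled:    ${auto_scaled:,.0f} per year")
```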

External software and SaaS licensing costs can also be considerable – and are often poorly understood by decision-makers on the business end.

“From a tech point of view, I can tell you, you can license an application in many ways. And you need to understand the tech and how your users use your application to get the best benefit and only pay for what you use.”

“You need to understand usage, and you need to understand whether it’s a monthly active user licence or a named user [licence], so on and so forth.

“Those things really need a tech person to be on the table to make this decision together with the business.”
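As a rough illustration of why the licensing model matters, the sketch below compares a named-user licence with an active-user licence under the same usage profile. The prices and user counts are invented for the example, not drawn from any vendor or from CBA.

```python
# Illustrative licensing comparison. All prices and user counts are made up.
TOTAL_NAMED_USERS = 5_000        # everyone who *could* log in
MONTHLY_ACTIVE_USERS = 1_200     # people who actually use the tool each month

NAMED_USER_PRICE = 15            # assumed cost per named user per month
ACTIVE_USER_PRICE = 30           # assumed cost per active user per month

named_user_bill = TOTAL_NAMED_USERS * NAMED_USER_PRICE
active_user_bill = MONTHLY_ACTIVE_USERS * ACTIVE_USER_PRICE

# Even at double the per-seat price, paying only for active users is cheaper
# when most licensed staff rarely open the application.
print(f"Named-user model:  ${named_user_bill:,} per month")
print(f"Active-user model: ${active_user_bill:,} per month")
```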

Automating the software dev process

The backbone of the DevOps development cycle is the CI/CD (Continuous Integration/Continuous Delivery) pipeline – a software development workflow that effectively automates large sections of the software build, test and deployment processes through the use of a shared code repository.

However, Lau said, scaling and maintaining consistency in the CI/CD process can prove a challenge for many dev teams.

“How we’re doing that is, we start small with application X and then we build the pipeline. And then, as we go, we keep on iterating,” Lau said.

“We try to build the pipeline as code, meaning that, when the next application comes on, they don’t need to do any of that heavy lifting – they can reuse all the patterns built for application X, and reuse them over time.”

“Off the top of my head, CBA [has] around 4,000 applications that we deal with. Think of how much pipeline needs to go underneath that. That’s what I mean by building as code and building reusable patterns.”
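In spirit, “pipeline as code” means the stages are defined once and then reused by every new application. The sketch below is an illustrative pattern only – the stage names and steps are assumptions, not CBA’s actual pipeline.

```python
# Illustrative "pipeline as code" pattern: define the stages once, reuse them
# for every application. Stage contents are placeholders, not CBA's pipeline.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pipeline:
    app_name: str
    stages: List[Callable[[str], None]]

    def run(self) -> None:
        for stage in self.stages:
            stage(self.app_name)

def build(app: str) -> None:
    print(f"[{app}] compiling and packaging")

def test(app: str) -> None:
    print(f"[{app}] running unit and integration tests")

def deploy(app: str) -> None:
    print(f"[{app}] deploying to the target environment")

# The reusable pattern: every new application gets the same pipeline for free.
STANDARD_STAGES = [build, test, deploy]

Pipeline("application-x", STANDARD_STAGES).run()
Pipeline("application-y", STANDARD_STAGES).run()   # no heavy lifting repeated
```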

Through this process, Lau said, CBA’s dev team has reduced the manual code checks required by around 40 per cent.

She added that governance checks – part of regulated entities’ regulatory (APRA) and cyber requirements, such as vulnerability scanning – must be “baked in” to the developer pipeline whilst devs are “cutting the code, and before it’s shipped into production”.

“That’s very important,” Lau said.

“It’s really about the whole ‘shift left’ mindset. And what I mean by ‘shift left’ is the closer to the developers’ keyboard you go, the lower the cost for the business and operations to fix issues down the line. That’s the whole idea behind building in these checks.”
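As a simple illustration of a “shift left” gate, the sketch below fails the build if a dependency vulnerability scan finds an issue, so the problem is caught at the developer’s keyboard or in CI rather than in production. The pip-audit tool and the requirements.txt path are assumptions for the example, not CBA’s actual tooling.

```python
# Illustrative "shift left" gate: a governance check that runs before deploy.
# pip-audit and the requirements.txt path are assumptions, not CBA tooling.
import subprocess
import sys

def vulnerability_gate(requirements_file: str = "requirements.txt") -> None:
    """Fail fast if any declared dependency has a known vulnerability."""
    result = subprocess.run(
        ["pip-audit", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Caught while the code is still in the developer's hands, long
        # before it is shipped into production.
        print("Vulnerability scan failed:\n" + result.stdout)
        sys.exit(1)
    print("Vulnerability scan passed - safe to proceed towards deployment")

if __name__ == "__main__":
    vulnerability_gate()
```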