Although we have become accustomed to calling every major technology change a transformation and, more likely than not, adding the term ‘Digital’, we are far beyond the point where organisations are introducing digital technology for the first time. What is typically happening these days is modernisation.
This process is ongoing and will always occur as technology evolves. I also prefer to call it modernisation because the term implies incremental change rather than a big-bang 'transformation'.
Many organisations approach modernisation as a replacement effort, i.e. replacing one outdated all-in-one (monolithic) system with a modern one in a big-bang approach. This can take up to 36 months, and some 80% of such efforts either fall short of their goals or fail outright. I have witnessed this many times throughout my 20+ years of experience building software for banking.
One of the major reasons these efforts fail is the very long feedback loop and the huge accumulation of changes, along with the risks those changes carry. This is the contrast we simplistically call 'Agile' vs. 'Waterfall'. The benefit of implementing modernisation incrementally and iteratively is that it allows us to fail fast and small, learn, and adapt as we go.
The irony of our industry is that we believe that if we prepare longer and better, we will manage to do it right. That is not how it works, especially not with technology, but that is a topic for a different article. There is enough evidence in the industry that such efforts often end up producing a new legacy that is more complex, cumbersome, and difficult to maintain than the original. There is even a name for this phenomenon: the second-system effect.
The software industry continually strives to improve processes and architectural methods to reduce risk and improve efficiency. On the process side, we have moved to an iterative and incremental approach to software development that emerged in the late 90s with methodologies such as Extreme Programming (XP), and informally even earlier: what we call Agile today. The same evolution has occurred on the architecture side, with the development of modular architectures and autonomous components, and later fully decoupled microservices and serverless architectures at the application level.
The direction is to make everything smaller and more manageable: smaller development cycles, smaller releases, smaller application components, and smaller applications themselves.
This approach significantly reduces risk and improves efficiency by giving smaller teams more autonomy and, more importantly, enabling the independent evolution of different parts of the architecture to follow the business’s ever-evolving needs. Small release cycles allow for a faster feedback loop on technical decisions and customer experience.
While this has been the trend at the application development level for more than two decades, enterprise architecture and 'transformation' program management are only beginning to see similar improvements.
There is an exciting trend to elevate what has proven to work for application architecture to the level of enterprise architecture and beyond. We've seen this with the Composable Enterprise approach introduced a few years ago. Composable Enterprise (or Composable Architecture) is an attempt to elevate the decoupled (microservices) architecture style to the enterprise level, defining the enterprise architecture as a set of self-contained, packaged business capabilities (e.g., Account Management) rather than much larger, monolithic domain areas (e.g., Core Banking).
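To make the idea concrete, here is a minimal sketch of how a packaged business capability might be described. The TypeScript shape and field names are my own illustration, not a formal Composable Enterprise standard:

```typescript
// A sketch of a packaged business capability (PBC): a self-contained unit
// that owns its data and exposes well-defined APIs and business events.
interface PackagedBusinessCapability {
  name: string;       // e.g. "Account Management"
  apis: string[];     // synchronous interfaces the capability exposes
  events: string[];   // business events it publishes to the rest of the enterprise
  ownsData: string[]; // entities it is the system of record for
}

const accountManagement: PackagedBusinessCapability = {
  name: "Account Management",
  apis: ["accounts/v1"],
  events: ["AccountOpened", "AccountClosed"],
  ownsData: ["Account", "AccountHolder"],
};

const supplyChainFinancing: PackagedBusinessCapability = {
  name: "Supply Chain Financing",
  apis: ["scf/v1"],
  events: ["InvoiceFinanced"],
  ownsData: ["FinancingAgreement", "Invoice"],
};
```

The point of this framing is that each capability can be bought, built, replaced, or evolved on its own, which is much harder to do with a domain the size of "Core Banking".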
The Composable Architecture approach provides great flexibility to independently evolve individual parts of the enterprise, building for an ever-evolving future. This approach is excellent, but how do we get there? Continue reading.
Moving from one architectural approach to another is always challenging and carries many risks, including failure, overspending, and missed opportunities. This concern applies to individual software architecture as well as overall enterprise architecture. The topic of re-platforming has been debated for many years. Rich Mironov’s excellent article highlights some important aspects.
Most importantly, your current platform is generating your revenue. It is impossible to halt all development and improvements on your existing platform for any meaningful amount of time. Evidence suggests that the time it takes to re-platform is typically three times longer than the original estimate.
The old divide-and-conquer method comes to the rescue. Breaking things down into smaller units helps with software architecture, enterprise architecture, and program management.
Progressive Modernisation is an approach to replacing your existing legacy systems with modern ones step by step.
It starts with identifying the parts of your system that are highly volatile, require a lot of flexibility, and provide the most competitive advantage. It then structures your longer-term program to modernise those parts first.
Consider a bank running all its products on a single monolithic core banking system that sees a great opportunity in Supply Chain Financing. Its core banking system doesn't support that product, even though it serves the bank's other products, such as Current and Savings Accounts and other daily banking products, well.
The answer is to combine the Composable Architecture approach with progressive modernisation: introduce an additional, lightweight Product Engine that enables a fast, zero- or light-customisation launch of a new product in weeks, and use that engine specifically for Supply Chain Financing.
The approach looks great on paper but comes with some significant challenges. One challenge is providing a consistent and seamless customer experience across one or more product engines and other “core” banking infrastructure. The goal is to ensure the end-user does not perceive any boundaries between different engines or, in a broader sense, different Packaged Business Capabilities.
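One common way to hide those boundaries is a thin experience layer (often called a backend-for-frontend) that aggregates data from all engines into a single customer view. Below is a minimal TypeScript sketch; the interfaces and names are hypothetical stand-ins for the real core banking and Product Engine APIs:

```typescript
// A unified, engine-agnostic view of a product holding for the UI.
interface ProductPosition {
  productType: string; // e.g. "CurrentAccount", "SupplyChainFinancing"
  displayName: string;
  balance: number;
  currency: string;
}

// Every engine (the legacy core, the new Product Engine, ...) is wrapped
// behind the same contract.
interface ProductEngine {
  positionsFor(customerId: string): Promise<ProductPosition[]>;
}

// The experience layer fans out to every engine and merges the results,
// so the channel (web/mobile) sees one portfolio, not two systems.
class CustomerExperienceService {
  constructor(private readonly engines: ProductEngine[]) {}

  async portfolio(customerId: string): Promise<ProductPosition[]> {
    const perEngine = await Promise.all(
      this.engines.map((engine) => engine.positionsFor(customerId)),
    );
    return perEngine.flat();
  }
}
```

Because the channel applications only ever talk to the experience layer, a product can move from the legacy core to a new engine without the customer noticing.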
The strangler approach is named after the huge strangler fig. These figs seed in the upper branches of a tree and gradually work their way down until they root in the soil. Over many years, they grow into fantastic and beautiful shapes, all the while strangling and killing the tree that was their host.
The approach is well known in the application architecture domain and is often used to replace or modernise existing systems. Instead of replacing the old system in one go, one builds around its edges, gradually letting the new components take over its capabilities until they fully replace the old system, if that is the goal.
As discussed earlier, the main reason for doing anything in small units is to reduce risk. The strangler fig is the approach that does exactly that. It enables steady, incremental outcomes, and frequent, iterative releases allow you to monitor progress and evaluate whether you are on the right track and whether the features and capabilities initially scoped out are still relevant, given the ever-changing context of the business.
Most often, existing legacy systems do not expose any interfaces that can be directly consumed by the rest of the modernised architecture. A key part of the Strangler Fig is a facade component that intercepts calls to the legacy system and translates them into whatever protocol or mechanism the legacy system supports. There is always a way to intercept the call, even when no documented interfaces are available, for example via Change Data Capture (CDC), post-commit database triggers, or file-based exchange.
Sometimes interfaces are available, but they are outdated and inconsistent with how the rest of the architecture communicates: XML-RPC, SOAP, raw TCP/IP sockets, and the like. The role of the facade is to adapt these to the standard used across the 'new' architecture.
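As a sketch of that translation (the legacy operation name, XML tags, and client interface below are invented for illustration), the facade exposes a plain, modern contract while speaking the legacy dialect underneath:

```typescript
// The stable, modern-facing contract the rest of the architecture consumes.
interface AccountBalance {
  accountId: string;
  amount: number;
  currency: string;
}

// Hypothetical low-level client for the legacy system's SOAP-style interface.
interface LegacySoapClient {
  call(operation: string, xmlBody: string): Promise<string>;
}

class LegacyCoreFacade {
  constructor(private readonly legacy: LegacySoapClient) {}

  async getBalance(accountId: string): Promise<AccountBalance> {
    // Translate the modern call into the legacy request format...
    const request = `<GetBalanceRequest><AcctNo>${accountId}</AcctNo></GetBalanceRequest>`;
    const response = await this.legacy.call("GetBalance", request);

    // ...and translate the legacy response back into the new contract.
    // (Naive regex parsing for brevity; a real facade would use an XML parser.)
    const amount = Number(/<Bal>([^<]+)<\/Bal>/.exec(response)?.[1] ?? 0);
    const currency = /<Ccy>([^<]+)<\/Ccy>/.exec(response)?.[1] ?? "EUR";
    return { accountId, amount, currency };
  }
}
```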
(Diagram source: SOA Design Patterns)
The more important role, however, is to ensure that the external interface of that facade, used by the rest of the architecture, is logically decoupled from the underlying legacy system so that any changes to the legacy, including its complete replacement, have little to no impact on the rest of the architecture. This enables the incremental and controlled strangulation of the legacy systems over time, reducing risk and achieving meaningful outcomes much sooner.
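Because the facade's outward contract stays stable, the actual strangulation can be driven by simple per-capability routing. A minimal sketch, assuming hypothetical legacy and modern implementations of the same interface:

```typescript
// Both the legacy facade and the new service implement the same contract,
// so callers cannot tell which one serves them.
interface AccountService {
  getBalance(accountId: string): Promise<number>;
}

// Per-capability migration switches; in practice these would live in
// configuration or a feature-flag service, not in code.
const migrated: Record<string, boolean> = {
  SupplyChainFinancing: true, // already moved to the new engine
  CurrentAccounts: false,     // still served by the legacy core
};

class StranglerRouter implements AccountService {
  constructor(
    private readonly legacy: AccountService,
    private readonly modern: AccountService,
    private readonly capabilityOf: (accountId: string) => string,
  ) {}

  getBalance(accountId: string): Promise<number> {
    // Route each call to whichever system currently owns the capability.
    const target = migrated[this.capabilityOf(accountId)]
      ? this.modern
      : this.legacy;
    return target.getBalance(accountId);
  }
}
```

Each switch flip migrates one capability at a time, and flipping it back is an equally small change, which is exactly what keeps the risk contained.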
The success of modernisation or digital transformation heavily depends on the approach. Dividing the process and the changes into smaller units always helps reduce risk and allows adjustments to the ever-changing reality of the business context.
Applying practices that have proven themselves at the level of individual software architecture to program management and enterprise architecture can bring significant value and increase the chances of success.
Decoupling your Customer Experience technology from the underlying Systems of Record ensures longer-term agility and flexibility, allowing you to gradually improve or modernise your underlying architecture without negatively impacting your customers’ experience.
Last but not least, I recommend reading an interesting case study that shares the experience of using a similar approach combined with radical agility to achieve meaningful results without replacing the legacy architecture in traditional banking.