Transcript of "Enhancing developer productivity in legacy codebases"

Jack: Good afternoon, and welcome to our fourth Made Tech talks webinar, as a part of the Leeds Digital Festival 2020. Today’s subject is “Enhancing developer productivity in legacy codebases”. Our speaker today is our very own Senior Engineer, Scott Edwards. His Twitter handle is @Scotteza – as displayed on the screen there.

Before I hand over to Scott, I’m just gonna run through how today’s gonna go. It’s gonna be a 45 minute presentation from Scott, followed by a 15-minute Q&A at the end of the session, so please make use of the Q&A function found at the bottom of your screen, and we’ll do our very best to answer as many of your questions at the end of the presentation. Once the webinar’s concluded, we’ll be sending all of our attendees feedback forms. They take about a minute to fill out and they really help us to improve our future events. We’ll also be sharing a little bit of information on our next Made Tech Talks webinar, coming out later on in the month, as well as some information on our ebook: “Modernising legacy applications in the public sector” – and how you can get your copy if you don’t have one already.

I should also mention that this session will be being recorded, and that live subtitles are available throughout the entirety of the session. Information on how you can acquire those will be posted in the webinar’s chat function a few times throughout the presentation, just in case you missed it coming in. And with that, I am going to be handing over to Scott. Scott, if you want to take it away from here.

Scott: Yeah cool, thanks Jack. Hey everyone. Thanks for being here today, and thank you Jack for your intro. Today’s topic is “Enhancing developer productivity in legacy codebases”. We’ll be looking at ways to enhance the productivity of your development teams in existing codebases, and we’ll be looking at codebases here which are providing essential services, difficult to work with and regularly require updates to be made or new features to be added.

We’ll start by talking about why modernisation efforts are probably your best approach for enhancing productivity in these circumstances; then we’ll briefly describe three modernisation approaches you could take; and then most importantly we’ll give you some actionable steps for getting started with this process.

Today we’ll be focusing on an approach called Iterative Modernisation, and we’ll talk about how it ties in with enhancing developer productivity. We’re also going to be focused on the strategic approach to modernisation here. So we’ll keep the technical side relatively light touch. If you’re interested in the technical execution of the strategy at a code level, I’ve got a couple of recommended resources right at the end that you can use to learn more about the in-depth technical processes behind this approach.

So before we begin, let’s drill into a little more detail on what we mean when we talk about a ‘legacy codebase’. There are several definitions out there, but we see common threads amongst them. So GOV.UK guidance describes technology as legacy technology when one or more of the following points hold true: it’s considered an end of life product; it’s out of support from the supplier; it’s impossible to update; it’s no longer cost effective; or it’s considered to be above an acceptable risk threshold.

Now developers will sometimes refer to legacy code simply as ‘code without tests’ – and today we’ll be focusing on enhancing developer productivity in legacy codebases – so we’ll use a definition based on that, and focused on the day-to-day activities and challenges of being a developer in a legacy codebase. So we’ll be defining legacy code as code that’s considered unsafe to work with, and code that teams do not have the confidence to modify. The common scenario here is that seemingly minor changes in these fragile code ecosystems lead to large system-wide instability, both immediately and down the line. And so we’ll be looking at methods to resolve these sorts of issues, which also take into account that initial developer-level definition of legacy code, as code without tests.

Today we’ll be using an example organisation to talk about this process. As always, we have to keep it quite high-level when we talk about them, to protect identities of those involved. In this case, our example organisation was pretty deep into the legacy trap, and they had several large applications which development teams were pretty terrified to touch. One of these, which we’ll be focusing on today, was their financial reporting application. This application had evolved from a useful tool for the Finance team, into the entire backbone that ran the financial side of the organisation. It was brittle, it was hard to maintain, it was a mess of spaghetti code and ancient dependencies. It had no automated tests and it was being manually deployed by developers by logging onto the server and overwriting files when necessary. So, quite a scary place to be for this organisation, I’m sure you’ll agree!

Getting developers working safely and effectively on this codebase was considered a vital strategic outcome for our example organisation. No names mentioned. And we believed that a modernisation effort was a strong contender for enabling this outcome, but we had to convince stakeholders at all levels of this, and so we had to make sure that we were clear on the answer to the question – why modernise?

So this is an extremely important question. We need to figure out how modernisation ties into today’s theme of developer productivity, and we need to remember that the modernisation process comes with its own set of challenges. So this question is all about deciding whether the investment in modernisation is worth it, and it’s very important to both understand and be able to explain why you are choosing to take this approach – if, for example, you intend to convince senior stakeholders to allocate time and budget to it.

So, our first point here is that legacy code affects everyone. It affects your senior stakeholders – if you’ve got that slow delivery cadence and risk aversion that legacy applications create, it makes it extremely difficult for senior stakeholders to empower their teams to achieve the organisation’s strategic goals. It affects our software delivery teams – so once again we’ve got that risk aversion with brittle software, which turns the normally fun, creative process of coding into a bit of a nightmare actually, especially when you’re under some form of pressure to deliver and are not empowered to spend time cleaning up the codebase that you work in every day. And it affects our users, our end-users. They suffer the most from the slow release cadence, the buggy software, the software they can’t rely on – and this, remember, affects both our internal customers – so people who work in our organisations – as well as the citizens that use public-facing services.

It can be quite tempting to keep kicking the can down the road when it comes to legacy software. What I mean here, I’m talking about ignoring the drawbacks of legacy code and carrying on tweaking it and building on top of it. While this approach may work in the short-term, in the long-term you will be both slowing down and frustrating your development teams, and thus lowering your feature delivery cadence. To use a common metaphor here, you are putting yourself at risk of collapse when you attempt to add layers to a skyscraper that’s built on shaky foundations.

If you’re in the public sector, there’s also the Government Digital Service, aka GDS, mandate to talk about. So this was something that came out in July 2019. The House of Commons Select Committee for Science and Technology stated that legacy applications are a significant barrier to effective government transformation and digitisation, and they went on to request that GDS conduct an audit of all legacy applications across government, to be completed by no later than December 2020 – which is right around the corner!

So if you are in the public sector, modernisation is coming, and it will become progressively harder for you to keep kicking that can down the road. But I would say, most importantly for today’s particular topic, you’re also modernising to gain the benefit of increased developer productivity in your codebases. And you may of course be asking – how do you actually achieve this? Don’t worry about that, we’ll be getting to it when we talk about the iterative modernisation process in a little while.

But first, let’s move on and look at some of the risks. Now naturally, in any big undertaking, there is risk involved. So let’s run through some of those risks, and then we’ll talk about mitigation steps next. First up, unsurprisingly, is cost. This is often the first thing that will be raised. This process will expend some CapEx as an initial upfront cost, to negate longer-term OpEx. These costs may include beefing up your team with consultants or contractors in the short term. If you do that, you should be relying on those consultants and contractors to train and empower your teams to carry this process forward. You may also be allocating your developers’ time away from other activities, such as day-to-day feature delivery.

Remember I mentioned modernisation is not an overnight process, and you will have this constant need to keep your teams aligned. So you have to avoid a situation here where your development teams are given a set of instructions and requirements, and then given some pizza and sent off to a basement somewhere to bash out code for six months, without coming back to realign with overall goals.

Similarly, you have to keep an eye on your user base and all of their needs, and keep re-evaluating whether you’re on the right path to meet them. Importantly, you’ll need to generate buy-in at all levels. So your senior stakeholders need to see the value gained from the modernisation process. Your development teams may actually resist you here – they may see this as extra unnecessary work. And to add to that, some of the technical techniques and steps within the legacy transformation process are seriously difficult, especially if you haven’t done them before and don’t have that experience.

And then your users – it’s one of those strange quirks – your users may have gotten used to the issues in your applications. They will probably have found workarounds for those issues which are now hardwired into their way of working, and so some users may actually resist fixes and improvements because they see it as change in their process, their defined process.

And finally, you may find technical blockers in your path. So these are the sorts of things that would probably map out to the same blockers which your development teams face day-to-day, and will be exacerbated by the complexity added to your application over time, especially if it’s a particularly old application. Remember that skyscraper metaphor from earlier.

So we’ve spoken about risks. You’re probably wondering how we work to mitigate these risks. So we have a few suggestions on these points, but before I run through them I want to be clear about what we mean when we say ‘mitigation’. Anyone who’s dealt with risk probably understands this already, so risk mitigation is not about eliminating risk, because that’s often not possible. It’s more about developing a plan to manage risk, and to explore various actions that could be taken in different scenarios. So it’s about lessening the seriousness or extent of these risks, should they materialise.

So running through the risks we defined before: to minimise CapEx, you need to focus on targeting the highest value applications in your modernisation efforts. We’ll talk about how to do this a little bit later on. In terms of time allocation, you may need to temporarily beef up your team with consultants or contractors. If you do this, we would absolutely suggest that you set the upskilling and empowerment of your internal teams as one of the outcomes that these consultants and contractors are measured on.

In terms of keeping people aligned, the most common complaint I personally tend to hear in any business I work in is communication. So that should be your primary focus in terms of alignment: you should be feeding a consistent message to all of your teams, including constant contact with those end-users we mentioned, to understand if you’re still on the right path. Following an Agile way of working can help you here. Agile is built on communication, and this also helps to build inter-team trust, as well as to catch assumptions and issues early on.

For buy-in, we would suggest performing an organisational mapping exercise. So in this exercise you would be showing the various applications across your organisation, and how they communicate and map to business processes, and also show how they are blocking progress in your organisation’s day-to-day activity flow. This allows you to show your senior colleagues the value of removing these flow constraints. It shows your development teams how this exercise fits into making their day-to-day deliveries easier, and it shows your end users how you’re going to focus on delivering their requested features and enhancements more quickly going forward.

When it comes to tech blockers, we suggest an audit step of your existing applications. This is also something we’ll chat about a bit more later on. And there’s something really important to note here, and that’s that risk mitigation is an ongoing process. So you need to remain vigilant. You need to keep an eye on all the above risks, the ones I just mentioned, and any other risks which you may find are relevant in your organisation. It’s unlikely as I mentioned that you can mitigate them once or consider them resolved – you can never really fully remove risk, you can only plan for it and react to it according to your plan.

So, we’ve spoken about all these risks. Let’s talk about some of the rewards. We should of course ensure that the rewards in the mid to long term outweigh the risks we just mentioned. And one of the primary financial benefits of modernisation is that, if done correctly, the initial short-term CapEx expenditure reduces some of your long-term OpEx. This is probably your biggest selling point to senior stakeholders. So as an example, as part of your modernisation, you will have increased your software delivery cadence, as you are no longer held back by legacy software when implementing those important strategic features that you need to implement going forward; and it also means you will reduce the operational expenditure that’s usually expended on hiring and maintaining technical teams who can support your services. So just to be clear on that, what I’m talking about is those support teams who are available 24/7, when something goes wrong and a user needs to call somebody. This process can also help you to de-risk deployments, meaning that your teams can confidently ship new features to meet your goals. And some of the techniques that we’ll discuss later are specifically designed to reduce defect levels and increase application stability.

One thing that I see evidence of, time and time again, is that removing the stress of dealing with fragile legacy applications leads to stronger, happier teams – and happier teams do tend to deliver higher quality work, more efficiently. I think the rewards are summed up quite nicely in this quote from our CTO, Luke Morton, which is: “Trade your fragility for agility”.

So hopefully at this point I’ve convinced you that modernisation sounds like a good approach to enhancing developer productivity in your legacy codebases, but the next question would naturally be – how do we actually go about doing this? Well, we have three different approaches that we think you should consider. Which one we would recommend would always depend on your particular circumstances. So let’s briefly look at these three approaches.

The first one is iterative modernisation. This is where we modernise our existing application, component by component. So we modernise the existing components until the entire application is modernised. Next up we have iterative replacement. This is about replacing the components of our application over time with off-the-shelf components, rather than things we modify ourselves. And finally we’ve got big-bang transformations – and this is where an entire application and all of its components are replaced in a single go. There’s a quote I like here when it comes to big-bang transformations, and that’s that explosions are dangerous. And what this alludes to is that you cannot negate the risks of the big-bang approach, but you can mitigate them to an extent, by utilising smaller, more controlled explosions.

Now, since we mentioned trading fragility for agility – we should probably be agile in our approach here, and I think this is an important point to make, so don’t forget that the iterative approach allows you to re-evaluate and pivot as you progress and learn. This means that if you were to choose one of these three approaches and find over time that it’s not working well for you, you shouldn’t be afraid to look into pivoting to one of the other two, or even looking to draw elements from each approach, according to your circumstances.

At this point it would be remiss of me not to mention our newest book at Made Tech, which is “Modernising Legacy Applications in the Public Sector”. We talk about these three approaches in quite a bit more detail in this book. And if you’re interested in those particular details of all three approaches, I would recommend picking up your free copy. I believe Jack will be sharing the relevant links at the end of this webinar.

So today we’ll be focusing on the first of these three approaches, which is iterative modernisation. Why is this? Let’s move on and find out. So what is it – iterative modernisation? I outlined it very briefly earlier. The strategy is all about reviving an existing application so that it’s maintainable, sustainable and able to evolve into the future. So you may be asking – why would I choose this approach? Well, the quick answer is that it tends to be both the quickest and the most cost-effective modernisation approach. The quickest because it builds on top of applications and technologies that your technical teams should already be used to working with, as these form part of your existing codebase, and that’s probably part of their day-to-day work. And the most cost-effective because if you build and maintain your own applications and services, you probably already have most of the tools and talent that you need to implement changes to them in your organisation; and because this approach allows your organisation to build towards technical self-sufficiency and reduce reliance on external parties – so those contractors and consultants we mentioned earlier.

It should definitely be noted here that this approach is really only feasible if your application is bespoke and you have access to the source code. Because we are specifically talking about enhancing developer productivity today, we’re going to work under the assumption that you do have access to the source code of your applications. To choose this strategy you really need to be able to see a future path that involves continued investment in, and evolution of, your technology beyond this initial modernisation stage. With this approach, your legacy applications and services should become easier to upgrade and enhance over time, and we’re looking to transform those legacy applications from shaky tech into a strong foundation that you can use to plot your path to future investment and growth.

There are some prerequisites of course. We’ve mentioned these a couple of times. So obviously, you have to have the right capabilities available in your organisation, or the ability to hire or outsource for them as needed. You should have access to source code for your applications and services; and you need to be able to give your team the time, support and budget that they need to achieve any form of modernisation.

Now you may remember that I mentioned our example organisation earlier, and they’d built that fragile tool themselves, which had evolved to run the entire financial side of their organisation. In their case, they agreed to take an iterative modernisation approach with this application. They built it in-house, so they had access to the source code, they had a few developers around who understood it, even though they were scared to touch it, and they needed it fixed quickly. It was causing issues and extra work for the Finance team on a pretty consistent basis. In fact, this poor finance team were having to manually double-check all of the figures that the application spat out, leading to a massive workload increase, and so the usage of this supposedly helpful application had actually become a time-sink for them. So for them, this was the right approach, and as I’m sure you can imagine, they were at the beginning of a pretty important journey.

So let’s move on to the most exciting part of today’s talk, and talk about a structured set of steps for implementing this iterative modernisation approach. The first step of six is all about setting your goals. So you need to give a lot of thought to the first application you choose to modernise. We would recommend spending time exploring the various applications across your organisation, and mapping those onto a chart with two axes, those being ‘business value’ and ‘technical effort’, as you can see on the image on the bottom right there. Your ideal target application should exhibit high business value and low technical effort to modernise. So what I mean here when I say ‘high business value’ is that the modernisation of this particular application should have a strong effect on your business goals and your bottom line. For example, is the application constantly in need of updates and improvements, but running on a legacy codebase, so that it’s really difficult to build and release the strategic enhancements that you require? And by ‘low technical effort’ I mean that you should aim for target applications that you can realistically enhance and update with the time and resources available to you. I would refer to the two of these together as low-hanging fruit. One of the benefits here is the cost/value ratio. So we mentioned CapEx risk a bit earlier, and this step can help to mitigate or minimise that a little bit. With that low-hanging fruit, the applications that you end up modernising should hopefully give you big benefits at the lowest possible cost.

So, once you’ve chosen this application, you should initially be focusing on stabilising it, with a view to making it maintainable – to allow business as usual to proceed – and supportable, to enable a small support crew to effectively look after it and respond to any issues.

And once you’ve achieved that first goal, you should be looking to make it enhanceable – and what I mean here is that you should make it so that modifying your application, as necessary to allow future changes and enhancements, is a relatively simple, painless process. A quick note: we will be talking about some techniques for stabilisation and enhancement in a minute.

So, importantly here, you have a team that needs to implement all these changes for you, and you need to make sure that they’re on board at this point. As I mentioned before, open communication is key, and you shouldn’t assume that people understand why you are taking this approach. You should explicitly make this clear to your teams. I find that if you explain why to smart people – why they are doing things – it gives them direction and fires them up to get things done. And it also ties into our earlier point on continuous alignment, so these conversations will help to keep everyone involved in this process on the same page.

I should also mention that you need to maintain a level of empathy here, especially for your development teams working in legacy codebases – when those codebases are heavily used, with constant change demands, it’s really, really hard. They probably have existing ongoing commitments for supporting and updating the chosen application, and they may even be doing other work and support tasks over and above their day-to-day delivery. One thing we find is that, as developers build a relationship with their users, the users tend to start coming directly to the developers for support. It’s not ideal, but it happens. So what I’m saying here is that your development teams may have been playing other non-official roles in the codebase, and they may be supporting other teams that use this application. In other words, their workload may be heavier than you appreciate.

I mentioned this a little bit earlier, but I want to mention it again – It’s important to note that if you’re going to give your development teams ongoing time to focus on modernisation as opposed to carrying on building on those shaky foundations, you need to protect them and help to defend their time. So you need to help them to push back on all but the most urgent requests and explain to your other stakeholders in your organisation that this short-term drop in productivity will lead to a long-term increase in their delivery cadence.

So we should speak about our example organisation here. Their main goal was fairly simple to define: they needed to get that patchy financial application stabilised, and to get it matching up to the manual double-checking work the Finance team were having to do. This would allow the Finance team to eventually stop having to double-check everything and to finally rely on the application with confidence. The example organisation also wanted their in-house development teams to be empowered to take this thing on themselves – you can’t rely on outsourcing to consultants for the rest of time. And so their internal development teams joined us on every step of this journey.

So let’s move on to the second step in the iterative modernisation process, which is starting with an audit. This step is about auditing your chosen application in its current state, to establish a technical baseline with as much clarity as possible. And to be clear, I’m not talking about an external audit here, I’m talking about an internal discovery process to understand the technical elements of the application you’re planning to modernise. While this step is focused on a technical audit, not all of your data will come from just looking at your code. You should consider getting hold of any documentation that you can, even if it’s out of date. You should also try to interview as many existing and past team members as you can – and I don’t just mean technical team members there – I’m also talking about domain experts, business analysts and so on – anyone who has been involved with this application over time. You never know what nuggets of information may emerge if you do this, and I would say in this case it’s better to have more information than less.

From a technical perspective, there are a couple of things you need to understand here. So first up is architecture. This is all about how the application is glued together. Is there a defined architecture, or is everything a bit ad hoc, or are different areas of the application architected in different styles? Then you need to look at the code complexity – so are you looking at hundreds of lines of code or hundreds of thousands? Are you looking at a mess of spaghetti code, or is there some sort of structure evident in the codebase?

Very importantly, you need to look at your data – so your application probably lives on top of some sort of data. You need to find out where this data lives and how it’s accessed, and to figure out whether there are quirks or inconsistencies in the data model that you need to account for. And very importantly, you need to understand if there’s secure data involved – so things like GDPR and Data Protection Impact Assessments are looming on the horizon there.

And finally for technical stuff, you need to look at the integration points. So, is this application integrating with other applications to send and receive data? Does it expose integration points itself that other applications call into, and who are those applications that rely on these endpoints? This audit step will help you to assess a risk we mentioned earlier, which is possible technical blockers to your modernisation, and will give you the baseline data to plan a path to your desired state.

In our example organisation, one of the main things that we found was that the same piece of code was repeated all over the place in the application. This piece of code was used to fetch the base monthly finance data, with minor tweaks here and there for edge cases, and sadly there was no simple way to throw test data at this code, as it was so tightly coupled to all the reports and everything that were calling out to that database. It was pretty poorly written as well, and filled with those sort of hard to detect heisenbugs that appear to the users, but when you try to simulate them you simply can’t. Luckily in this case the data source was fairly predictable, but it was held back by this fragmented application design calling out to it with that repeated bug-ridden data access code everywhere.
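To give a flavour of what that kind of coupling can look like, here’s a rough, hypothetical Java sketch (not the organisation’s actual code): a report class that builds its own database connection and query inline, leaving no seam for feeding it test data.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hypothetical illustration of the kind of tightly coupled code described above:
    // every report talks to the database directly, so nothing can be tested without one.
    public class MonthlyExpenditureReport {

        public double totalForMonth(int year, int month) throws Exception {
            // Connection details and SQL are baked into the report itself, and
            // near-identical copies of this block live in every other report.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://finance-db;databaseName=Finance", "app", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT amount FROM transactions WHERE year = " + year
                         + " AND month = " + month)) {
                double total = 0;
                while (rs.next()) {
                    total += rs.getDouble("amount");
                }
                return total;
            }
        }
    }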

So at this point we had our audit step complete. We understood all of the above – the architecture, the code complexity, the data and the integration points. Most important in this case was the data, and we had enough baseline information to understand how the application tied together, and some of its quirks – which allowed us to move on to the next step, which is the implementation of a test harness.

Now, this is where we get a little bit technical, and also where I have to try my best not to get over excited because this is my favourite part. In a perfect world, the application we’re modernising would have been built using test-driven development aka TDD. If so, we would hope that these tests would both describe the behaviour of the application, as well as alert us to when new changes we make have broken this behaviour. But sadly this is unlikely in most cases. We don’t live in a perfect world, and it’s rare to come into a legacy application that has automated tests included.

So we need to find a way to describe the existing behaviour of the application and test our assumptions about it. We do this by getting the application into a test harness. And what I mean here is that we start working to add a testing framework and some broad characterisation tests to our application. So, simply put, characterisation tests characterise the existing behaviour of the application. They set the existing expectations for the current behaviour. This enables developers to create a bit of a safety net for themselves – to give them – or I should say to give us – confidence that we aren’t breaking existing behaviour when we make changes. And it also gives us a code level way to describe this behaviour to future developers who may join the project. It also allows us to keep describing this code, this behaviour as code, as we change the application in the future. So it’s important to note that we should be extending this test coverage as we go along.
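To make that concrete, here’s a minimal, hypothetical sketch of a characterisation test – assuming a Java codebase with JUnit, and reusing the hypothetical report class sketched a moment ago. The expected value isn’t what we think the answer should be; it’s simply whatever the existing code produced the first time we ran it against a stable, cloned copy of the data.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical characterisation test: it records what the code does today,
    // not what we believe it should do, so refactoring can't silently change behaviour.
    public class MonthlyExpenditureReportCharacterisationTest {

        @Test
        public void totalForMarch2020MatchesCurrentBehaviour() throws Exception {
            MonthlyExpenditureReport report = new MonthlyExpenditureReport();

            // The expected figure below is whatever the existing code returned on its
            // first run against the cloned, unchanging test data – warts and all.
            assertEquals(184_503.27, report.totalForMonth(2020, 3), 0.01);
        }
    }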

This process is definitely easier said than done, and later on I’ll show you a good resource to help your development teams get cracking here. It’s something we’re pretty passionate about at Made Tech, so please don’t be shy to reach out to us for some advice and guidance here if you need it. I will personally talk to you about this until I’m blue in the face, if you give me half a chance!

We should talk about our example organisation here. So in their case, this step involved wrapping each financial transaction component in a high-level automated characterisation test. To start with, these tests pointed at a cloned copy of their testing database that we knew wouldn’t change and that had fairly realistic data in it, which meant that we could at least get predictable data to use in our tests. This also meant that we could find examples of where data either wasn’t processed correctly in the code, or examples of where certain types of data caused the app to crash or throw out funny results, owing to logic errors and bugs in the codebase. You may remember earlier I mentioned that the data itself was fairly stable, but the way the code used it was a bit of a mess.

So we got this test harness implemented, and at that point we looked at an optional step which is available to you in this process, which is the re-platforming of the application. So I mentioned this is optional. You may initially just want to focus on the modernisation of your codebase and not necessarily on moving it to a new platform. But if a cloud migration is a key driver for modernising your application, or is something you’ve been considering anyway, then you could consider including it as part of this modernisation effort. Moving to a cloud platform gives you several key benefits, including but not limited to – because there are a lot – large potential cost savings when compared to your traditional on-premise hosting; outsourcing a fair whack of your infrastructure maintenance to a big provider like Amazon, Microsoft, Google, via that shared responsibility model; you do get built-in redundancy and backup offered by these services, and they tend to have pretty good monitoring tools included as well. And typically you will be able to – if you use these services – script and automate your deployments of new changes. This list goes on and on and on.

So if you are doing this, there are two things in that big list that we’d recommend focusing on, and those are automated deployments and monitoring practices, as part of your migration. These two techniques are both helpful in enhancing developer productivity – so automated deployments are about helping developers to manage the complexity and cadence of releases by doing a bunch of automation work upfront. What I’m talking about here is them doing a big batch of work once to automate the deployments, and then leveraging this work going forward, hopefully for the rest of time, with just some minor tweaks here and there. And then I mentioned automated monitoring – so this helps technical teams to detect and resolve issues early. And what it also does is it provides some guidance on areas of your application that currently experience issues, which can help to direct and focus your application improvement efforts. I’m not saying these things aren’t possible in a non-cloud platform – I’ve just in my experience found that they’re easier on the cloud platforms, out of the box.

So let’s look at our example organisation here. While they were considering re-platforming, it was a future plan and not scoped at that point. They were still in the phase of understanding cloud platform providers and the implications of re-platforming, and so they decided they just wanted to modernise their finance app rather than shift it to the cloud. This is a subject for an entire talk and several books all on its own, so I’m going to keep it pretty light-touch and move on to step five of six, which is refactoring your application.

So this is another technical step, but we’ll talk about it pretty high-level here. So what is refactoring? The Agile Alliance defines it as “the process of improving the internal structure of an existing program source code while preserving its external behaviour”. I absolutely could not put it better myself. This is the second to last step in this iterative modernisation process. So this step allows your development teams to do the work they need to do to enhance the structure of your chosen application, so that its codebase now supports instead of blocks future development efforts. At this point, based on the previous steps, you should have sufficient test coverage to give your teams the confidence that their changes aren’t breaking existing behaviour.
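As a tiny, hypothetical illustration of what ‘improving the internal structure while preserving external behaviour’ can look like at the code level, here’s the sort of small extract-method refactoring a team might apply over and over again (a Java sketch, not anyone’s actual code):

    import java.util.List;

    // Before: one method mixing filtering, totalling and formatting.
    class ReportSummaryBefore {
        public String summary(List<Double> amounts) {
            double total = 0;
            for (double amount : amounts) {
                if (amount > 0) {
                    total += amount;
                }
            }
            return String.format("Total: %.2f", total);
        }
    }

    // After: exactly the same external behaviour, but the calculation now has a
    // name of its own and can be read, reused and tested independently.
    class ReportSummaryAfter {
        public String summary(List<Double> amounts) {
            return String.format("Total: %.2f", totalOfPositiveAmounts(amounts));
        }

        private double totalOfPositiveAmounts(List<Double> amounts) {
            double total = 0;
            for (double amount : amounts) {
                if (amount > 0) {
                    total += amount;
                }
            }
            return total;
        }
    }

The existing tests should pass identically before and after a change like this – that’s the ‘preserving external behaviour’ part.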

So now they can focus on cleaning up that spaghetti code that drives them crazy and slows them down every day. This allows them to start following common coding conventions and patterns, and to choose a common set of structures and metaphors to bring consistency to their codebase. They can also update all dependencies at this point, and make use of the new features in these updated dependencies, and generally make their codebase more maintainable and easier to enhance over time.

A little piece of advice here: if you’re inexperienced at this, it’s very tempting to go big-bang on refactorings. As a developer, it’s really hard to keep all the details of a massive refactoring in your head. All it takes is for someone to tap you on the shoulder and you lose your entire train of thought, and sometimes have to start over, with several hours of work or thinking lost. Experience tells us that small, structured refactorings add up to big enhancements over time. As your development team progresses in this refactoring, they may find that there are additional areas of your application that need to be brought under test – things they may have inevitably missed when implementing the test harness. You should absolutely let them do this – it only serves to enhance the quality and robustness of your codebase, which is kind of the point of this whole step.

In our example organisation, this was awesome. So one of the strongest refactoring tools we had in this case was a technical pattern called the repository pattern, which we used to extract and centralise that repeated piece of code, the component that was everywhere in the app. We would then pass this into the code that needed it, rather than expecting all the components to individually understand how to talk to the database. This also allowed us to run our automated characterisation tests that I mentioned earlier without having to connect to a real database. So we could simulate any data scenario that we wanted to, without having to go find or manually create examples in the database first. At this point we were able to fix various bugs, safe in the knowledge that the shiny new tests we’d created would probably catch any breaking changes. And the end result of this process was that the Finance team’s extra manual calculations finally matched what the application was telling them, and so we could remove that burden of manual work from the team. And as I’m sure you can imagine, this was a massive win for both the organisation and its development teams.
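For anyone curious about the shape of that, here’s a rough, hypothetical Java sketch of the repository pattern applied to the report class sketched earlier: the data access moves behind an interface that gets passed in, so the characterisation tests can substitute an in-memory fake for the real database.

    import java.util.List;

    // The data access that used to be copy-pasted into every report now lives
    // behind a single interface (the production implementation would use JDBC).
    interface TransactionRepository {
        List<Double> amountsForMonth(int year, int month);
    }

    // The report no longer knows anything about databases; it just asks the
    // repository it was given.
    class MonthlyExpenditureReport {
        private final TransactionRepository transactions;

        MonthlyExpenditureReport(TransactionRepository transactions) {
            this.transactions = transactions;
        }

        double totalForMonth(int year, int month) {
            return transactions.amountsForMonth(year, month).stream()
                    .mapToDouble(Double::doubleValue)
                    .sum();
        }
    }

    // In a test we can hand the report any data scenario we like, with no database.
    class InMemoryTransactionRepository implements TransactionRepository {
        @Override
        public List<Double> amountsForMonth(int year, int month) {
            return List.of(100.00, 250.50, -30.25); // a simulated edge case
        }
    }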

Once this step is complete, and your development teams are happy with their now tested and refactored codebase, it’s time to get back to adding new value. So at this stage, your team is empowered to add new functionality to your application with confidence. They will have a structured, well-understood codebase to work in, and they will have the safety net of those tests that they added, to keep an eye on any defects that they may introduce.

They can go back to the sort of things at this point that development teams should be focusing on – things like adding new features, enhancing existing features, fixing the bugs that inevitably crop up from time to time. I think it’s incredibly important to note here that they should continue working with those new behaviours that they learned in the previous steps of this process. So they should keep adding automated tests when necessary, to ensure the robustness and quality of their codebase, using a test-driven development approach if possible. Once again, don’t get me started or I’ll talk to you about it forever! And they should be able, at this point, to keep refactoring with confidence, to make the codebase a better entity to work with over time.

In our example organisation, as I mentioned, getting to this step was a massive win. So they were finally able to start adding more financial reports to their application in a simple, predictable manner. The team had started to learn to use test-driven development practices in their codebase, to add automated tests upfront, and they were able to keep refactoring with confidence, owing to all of their shiny new characterisation tests. At this point that in-house team that had come on the journey with us were able to fully take over the codebase, and quickly add new high quality features themselves. In other words, they’d been empowered to deliver with confidence again. And from a developer’s point of view, the main goal had been achieved. So it was a pleasure to work in the codebase, it was easy to add high quality new reports and features, in other words developer productivity levels had massively increased. That was a happy moment in everybody’s life!

Now, I mentioned earlier that I’d recommend some additional materials for your development teams, should they be interested, or should you be interested in getting involved in this process. So let’s quickly look at some of those additional reading resources:

First up is Michael Feathers’ book, “Working effectively with legacy code” – This is, in my opinion, a tome of great wisdom, and many proponents of TDD will recognise this book straight away. I mentioned the quote earlier – “legacy code is simply code without tests” and this book is the source of that quote. So if you’re looking to learn code level techniques for bringing legacy applications under test, this book is a great first port of call.

Next up, we have “Refactoring” by Martin Fowler, along with Kent Beck. Martin Fowler and Kent Beck are kind of two of the superheroes, I would say, in the developer world. Martin Fowler starts out with a pretty cool anecdote from when he first met Kent Beck. Mr Fowler was working on a heavy old legacy application when Mr Beck joined the team, and Kent insisted on continuous cleaning up of the code, using refactoring as a baseline process. Martin Fowler describes this as being one of the major reasons why they were able to bring this codebase they were working on under control, and if you remember, this was step five of six of our transformation steps. The majority of the content in this book is focused on specific refactoring techniques that developers can apply in their codebase, which are also inherently techniques for avoiding the legacy trap in the first place.

And finally, we have “Test-Driven Development By Example” by Kent Beck. I really like this quote from the book, that “Test-Driven Development is a way of managing fear during programming.” So this quote is referring to how the TDD approach helps you to build up a justified level of confidence when you are focusing on implementing difficult domain concepts in a codebase. Kent Beck is considered the modern father of test-driven development, and this book is a really good starting point for those of you who may be new to the technique.

So between those three books, I believe that a wealth of knowledge and techniques is available to you and your development teams if you’re taking this process on. If you didn’t catch any of the names, please feel free to drop me a line on Twitter, I’ll happily send them on to you.

Finally, let’s just look at the key takeaways from today. So first up, modernisation is a great way to enhance developer productivity in your legacy codebases. Iterative modernisation, that approach of the three, works well in existing codebases, and should be your first port of call if you have access to your source code. You should always remember that risks exist and that mitigation is important, but the rewards from this process make it well worth taking those risks. And it’s important for any big process to follow a well-structured approach. Once again, if you’d like to read about those steps in more detail, I recommend picking up a copy of Made Tech’s new book. And finally, there are some great technical reading resources out there on the subject that can really help your teams get working effectively with legacy codebases.

And that’s it from my side. I hope I was able to give you some useful information today, maybe inspire you to start considering a legacy transformation in your own organisation.

Handing back over to you, Jack.

Jack: Brilliant. Thank you very much for that, Scott. If you’re ready, I think we’ll go straight into our Q&A. First question: “How do you prioritise which tests to build first? It feels like we’d never be done with this!”

Scott: That’s a good question. I would say you probably need to find your most business-critical features in your application, and focus on those first. Start broadly, so don’t focus on diving into the intense technical details – what we would call a unit test. Look at broader characterisation tests. Because you are – if you follow this process – refactoring and changing your application, component by component, you need to find a balance where the components that you are pulling out and replacing (and thus are having to add tests for) are small enough to make this a realistic process.

Jack: Awesome. Next question: “Any successful techniques initiating app modernisation when company culture is the biggest roadblock?”

Scott: That’s a difficult one. What I tend to do – which is very naughty – I tend to start quietly refactoring and adding tests in the background. One of the hard things about being a developer is that it’s a technical role, and you’re not really expected to have sales skills, but I would recommend building a bit of a case, and being able to present that to your senior owners in a structured manner. So don’t be afraid of things like slide decks and measuring where there are problems as you go along, and being able to track that data. You can’t just go to senior owners and say – hey I want to do this – You need to come with some seriously strong justification. You need to be a bit salesy about it sometimes, and I would definitely focus on money as the first port of call. Unfortunate to have to say that. If you can save the company money by following this process, you’ve got a good case already.

Jack: Excellent. Next question we have: “What kind of criteria could you apply to decide when not to migrate a legacy application? Expertise is often defined as knowing not only when an approach is good, but also the limits of its application.”

Scott: That’s a good point. So I would say that you should always be considering those two axes we mentioned earlier in the planning step, which are technical effort and business value. So if there’s a massive amount of technical effort required to modernise a legacy application, and that application is happily running day-to-day with few change requests coming in, maybe it’s best to just leave it in place and focus on that low-hanging fruit. That being said, once again, if you’re in the public sector you should bear that GDS mandate on legacy migration in mind, because you may not have a choice going forward about some of your migrations.

Jack: Okay. “What about the risk introduced by the complexity of not only having a source application and a target application, but also the complexity of the transitional systems between the two? This also involves migrating the business process that the application is embedded in.”

Scott: That’s another good question. So your audit step should hopefully pick this up for you. If you dive deeply enough, you should be picking up these integrations and these processes that are involved with your legacy application as early as possible, so that you can plan for them. What I would recommend here is that test harness step. You should make sure that that step is covering the integration and the process that your application – your intermediary application, let’s call it – is involved in. There’s an architectural style here – this is maybe getting too technical, but very briefly – called ports and adapters, sometimes called hexagonal architecture, that can be used to effectively separate and test the various applications as they integrate across the line. You might want to consider refactoring towards that, to reduce that intermediary application risk.
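As a very rough, hypothetical Java sketch of that style: the application core owns a ‘port’ – an interface describing what it needs from the outside world – and each external system gets an ‘adapter’ that implements it, which also gives you a natural seam for test doubles at the integration boundary.

    // Port: an interface owned by the application core, describing what it needs
    // from the outside world, in the core's own terms.
    interface PaymentGateway {
        boolean submitPayment(String accountId, double amount);
    }

    // Adapter: one implementation per external system. Swapping the integration
    // (or substituting a test double) doesn't touch the core logic at all.
    class LegacyMainframePaymentGateway implements PaymentGateway {
        @Override
        public boolean submitPayment(String accountId, double amount) {
            // In reality this would call out to the legacy system; here it's a stub.
            return true;
        }
    }

    // The core depends only on the port, never on a concrete integration.
    class PaymentService {
        private final PaymentGateway gateway;

        PaymentService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        boolean pay(String accountId, double amount) {
            return amount > 0 && gateway.submitPayment(accountId, amount);
        }
    }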

Jack: Okay. “Do you have any advice on methods to persuade management to authorise writing tests, when they can’t see any tangible benefits to the business?”

Scott: Yeah, I would say your first step would be to… next time a bug comes in and something goes really wrong, I would focus on wrapping that in a test, then making the code changes, and then being able to run the test and say “look, this code now works”, because the test is passing. You can show that to your management and say “look, this is never going to happen again”. So you’re never going to have that phone call at 3am – the system’s down for this particular issue – because we’ve now guaranteed it by running this test. You have mitigated that risk as much as possible. That, I think, would be a good first step. You’ve got to be careful here – I know you have to be a bit sneaky sometimes to get this stuff in. I don’t like the word sneaky, but I can’t think of a better one. Don’t put your career at risk based on something you heard on a webinar, please!
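In code terms, that might look something like this hypothetical sketch (Java with JUnit, names invented purely for illustration): reproduce the reported bug as a failing test, make the fix, and the now-passing test becomes the evidence you show to management – and the guard against the bug ever quietly returning.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical example: the bug report said zero-rated items produced a
    // negative VAT amount, so the correct behaviour gets pinned down in a test.
    class VatCalculator {
        double vatFor(double netAmount, boolean zeroRated) {
            if (zeroRated) {
                return 0.00; // the fix; before it, this branch was missing
            }
            return netAmount * 0.20;
        }
    }

    public class VatCalculatorRegressionTest {

        @Test
        public void zeroRatedItemsProduceZeroVat() {
            VatCalculator calculator = new VatCalculator();

            // Fails before the fix, passes after it, and fails loudly again
            // if anyone ever reintroduces the bug.
            assertEquals(0.00, calculator.vatFor(100.00, true), 0.001);
        }
    }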

Jack: “You used the phrase ‘legacy trap’ – what do you mean by this?”

Scott: Right, so when I talk about a legacy trap, I’m referring to that sort of series of decisions from both a business and a technical point of view, that have eventually led your applications to become legacy applications. So, that may well be things like – oh we don’t need automated tests, that’s a waste of time. One of our Tech Leads, Tito, did a great webinar on avoiding the legacy trap a while ago. He’s got some pretty great tools and techniques in there, so I’d recommend checking that out.

There’s just one thing I thought of, Jack, from that previous question about convincing management. There’s a graph – I’m going to try to describe it with my hands, forgive me if I do a bad job! – that talks about the delivery cadence of features when it comes to test-driven development. And what it shows is that with an application without tests, delivery is initially really, really fast, but that levels out over time and starts dropping, because as you add more complexity, other parts of the application start breaking. Now with TDD, what you have is a slower initial delivery speed, but over time you enable faster and faster delivery, because you can add more sections or more functionality later without breaking previous functionality, because it’s covered by tests. I think it might be on Martin Fowler’s blog, I’m not 100% sure, but a quick Google search should find it for you.

Jack: Awesome. “Do I need domain experts from my organisation to do a legacy transformation, or is this a technical task?”

Scott: You absolutely need domain experts. So this could mean senior stakeholders, it could mean your end users, it could mean business analysts, anyone – the list goes on, right. So in the audit step, you should be reaching out to as many areas of your organisation as possible, to get the deepest understanding of your applications and their impact. This gives you that strong baseline to work from, from both a technical and a domain perspective. Even if you’re the owner of that application in the organisation, or even the owner of the organisation itself, there may be bits in there that you’re not aware of – ways that users are using your application that you don’t know about. You need to get as much information as possible upfront. You may find that some of it’s not useful to you, that’s a risk. But you may find nuggets in there that completely change the course of your transformation process.

Jack: “When is it not appropriate to use test-driven development? Do you think refactoring should be separate work from value delivery?”

Scott: So I hear two questions there. I’ll address them separately. The first was “when is it not appropriate to use TDD?” – So I would say you should use it whenever possible, but I am a bit crazy about it! I would also try not to be a zealot about it wherever possible – I need to be more self-aware about that myself sometimes. TDD is about testing code behaviour, and so if you’re not able to add tests for behaviour specifically in any particular area, for any reason, you could consider not adding tests there. You don’t want to be adding tests for the sake of tests, and you shouldn’t focus too much, if you can avoid it, on metrics like code coverage – they don’t always tell the whole story. You should only really add tests when they actually add value to your application. I would advise using caution in those cases though: think carefully about whether you could add value with the test, and if so, add it.

And then the second part was about refactoring. I think the question was “is refactoring a separate step from value delivery?” – so in the process we recommend, refactoring is described as a discrete, standalone step initially, so yes, in that case it is something separate from value delivery. Even though you’re adding value by modernising, it’s not something that’s part of your day-to-day feature delivery. But going forward from that, you should definitely be refactoring as you go. And if you’re in management of any type, you should be allowing your teams to do this. Remember that you’ve got those tests to give you confidence that you’re not breaking existing behaviour – those characterisation tests you should be adding in the transformation process, and any tests you add going forward.

And now you have this ability to refactor, and if you follow this process, hopefully your teams have developed those skills – that’s enhancing the quality of your codebase and allowing your teams to execute more efficiently. And so it should be encouraged. You can almost see it as a part of the process of dragging yourself out of that legacy trap.

Jack: Brilliant. I think we have time for just one last question. “What if we are using super legacy apps such as those written in COBOL – how do we then approach modernisation?”

Scott: I would say with great care! Also, I’m stealing that phrase – was it super legacy apps? – I’m adding that to my repertoire. I worked for a bank once, back in the day. As I’m sure you can hear from my accent, I’m South African, so this is when I lived back in South Africa. They were running on a fully COBOL-based mainframe, and they were approaching their modernisation – even though they didn’t verbalise it using the language we’re using today – using the iterative replacement strategy. So they were replacing entire sections of the mainframe at a time with Java-based components. So you could approach it that way. If you are feeling extremely brave, then you could consider a big-bang approach as well. In either case I would suggest a strong focus on automated behaviour testing from the edges of the applications. What I mean is not diving into the internals, but looking at the overall behaviour that the application exhibits to users or APIs or whatever, and making sure you add tests for that. And then, when it comes to those very old systems, you need to be very careful with your data sources, because what does tend to happen over time, as those data sources evolve, is that random columns are added, some aren’t used anymore after certain dates – there can be all sorts of gotchas there. So things can get a bit patchy, and you need to make sure that you fully understand that whole side of things. That can come from interviewing people who work on COBOL – if we’re using COBOL as an example – and from actually sitting and diving through that code.

Jack: Brilliant. Well I think that’s all we have time for in the way of questions, so I’m just going to wrap up and say a massive thank you, Scott, for taking the time to speak to us today.

Scott: Thanks Jack.

Jack: And also a huge thank you to all of our attendees this afternoon. As I mentioned at the very start of the webinar, we will be sending out feedback forms. They take a minute to fill out and they go a really long way to helping us improve our future events. Our next webinar is coming out on the 30th of September, so next week, and will be on delivering a user-centred NHS virtual visit service during COVID. Our speakers will be Adam Chrimes, Senior Developer at NHS Digital; Jessica Nichols, our very own Senior Delivery Manager here at Made Tech; and Ian Roddis, Deputy CDIO at Kettering General Hospital NHS Foundation Trust.

As I also mentioned, we have our free ebook that’s available on our website at the moment: “Modernising Legacy Applications in the Public Sector” – you can get that on our website if you don’t already have your copy. And then also, if you want to stay in touch with Scott, or you didn’t get your questions answered this afternoon, Scott’s Twitter handle is up on the screen: @Scotteza. And if you want to stay in touch with Made Tech, or just keep up to date with everything that we’re up to, we have the details of our socials up on the screen now.

And with that, I wish you all a wonderful afternoon. Take care.

[recording ends]