Transcript of "Avoiding the legacy trap: How to ensure your legacy decisions aren’t holding back your modernisation"

Jack: First off, good afternoon and welcome to our second Made Tech talks webinar: “Avoiding the Legacy Trap: How to ensure your legacy decisions aren’t holding back your modernisation.”

Our speaker today is our very own Lead Engineer, Tito Sarrionandia.

Now before I hand over to Tito, I just want to run through how today’s going to go: Tito is going to begin with a 45-minute presentation, followed by a 15-minute Q&A. So please make use of the Q&A function found at the bottom of your screen, and we’ll endeavour to answer as many of your questions at the end as possible. After the webinar, we will be sending out feedback forms to all of our attendees. They take about one minute to fill out, and they go a massively long way to helping us improve our future events.

We’ll also be sharing some information on our next Made Tech talks webinar, as well as a little bit of info on our latest book launch, coming out tomorrow, “Modernising Legacy Applications in the Public Sector”.

Just a quick reminder this session is being recorded and does come with subtitles. Instructions on how to get those are going to be found in our chat function.

And without further ado, I’m going to stop sharing my screen and hand over to Tito. Tito, if you want to take it away.

Tito: Thanks very much Jack, and thanks everybody for turning out on this sweltering August day to indulge me. Let me just share my screen.

So, legacy. We spend a lot of time thinking about legacy at the moment, especially if you’re involved somewhere in the Public Sector. It shapes a lot of the thinking that we can do about where our services are going, how we spend money and the kind of tasks that we get to do as digital teams. I’ve seen some successful and some unsuccessful attempts at trying to tackle that problem, and so what I want to do today is to share a bit about – how do you really fix legacy? – by which I mean replace it with something that actually solves the problems you had in the first place, and hopefully get across to you that the answer to that question can only be partly technical. So it’s also a people problem.

So before we talk about any of that, let’s quickly dive into – what even is legacy? I started out by trying to find out what legacy is. I looked it up on Google Images. That told me absolutely nothing. I got back that image there. So, what else did I learn? Engineers often use this definition: legacy code is code without automated tests – that comes from Michael Feathers. I like this, but we think about more than code here, right? We’re talking about systems, you know. Maybe you own the code, or maybe they’re just procured, off-the-shelf systems. And maybe we can talk about something like networks. It’s something with an unsupported tech stack, so ‘unsupported’ – maybe that literally means the manufacturer doesn’t support it anymore – they’re not releasing security updates – or maybe it means it doesn’t fit in with your digital strategy, so you don’t want to be supporting technology like this.

I like this one so it’s – Old, in a bad way – right, so there’s some tech that’s old but it’s fine. Like the web. But this is old in a bad way. I think this is pretty important, so it’s got to be something that’s still meeting a critical business need. So it’s something that – even though it’s got all these problems, that we’re saying that it’s old in a bad way – it is still used, right, and we can’t just pull the plug and expect everything to be fine.

I like this definition as well which is that you know when you see it. It’s like art – it’s hard to come up with an overall definition of like ‘what is this thing?’ but if you go to any organisation, you say “where’s your legacy?” – people can typically point at it.

But ultimately it’s probably a political tool. What do I mean by that? I mean that it’s a label that we use to get a particular result rather than to describe something in a strict scientific taxonomy, and this idea is usually familiar to educators. So before I worked in software, I was a schoolteacher, and labelling children is something you have to do as a schoolteacher. So for example, dyslexia. So dyslexia is obviously a diagnosable medical thing, but it’s also a signal to unlock the right funding, the right support, the right environment, the right staff time, for a student to do their best. Schools often also maintain a register of gifted and talented students. Clearly, that’s not a medical term, but it’s still a useful label to use.

So, ‘legacy’ then is a powerful signal to the organisation. You slap the legacy label on something, and that activates like an organisational immune response. It says – we want to deal with this bit of technology differently to how we deal with the rest of our technology. And, you know, I think often that response is flawed. We’ll talk about that a little bit later in this presentation as well, but ultimately you’re saying treat this differently, right? We want to deal with this somehow.

So why do we use the label? There’s a bunch of reasons. Often we use this label because something’s just too expensive to maintain, so maybe that means you’re paying over the odds on license fees for something you can’t change, or maybe it means it just takes up too much of your valuable time, or maybe it means you have to temporarily import some super-niche skills into your organisation.

We use the label when technology is too hard to change. You know, when we think things would be better if we could change this in that way, but it’s too hard to do, and so we’re scared of doing it or we delay doing it. We use it when the risks are out of control, when something keeps breaking and stopping people from doing their jobs, or landing you in the newspapers. We use it when no one wants to work on it. Again, a pretty common problem in a larger organisation that’s got a mixture of legacy and strategic technology is that it’s really hard to persuade people that this is going to be a fun challenge for them. Maybe it’s not what they signed up for.

We use it when it doesn’t reflect the way that we want our teams to look. So in the government digital services language, we often want to organise our teams to solve the whole problem for our users. That’s really hard to do if this technology was built in a different paradigm – maybe this technology was built to sit between two other pieces of technology, or maybe it was built using a strict front-end/back-end separation methodology.

And there’s also this last reason, and this one’s not coloured in like the other ones because I think it’s not a good reason, but nonetheless it is a reason that I often see that people want to use this legacy label – and that’s that it doesn’t look good on their CV! So people are often really keen to call something ‘legacy’ if – for example they know that having Kafka on their CV is going to look better than having some 20 year old IBM service bus. And of course, fixing legacy is really expensive, so we know that it often takes a long time. We know that often means you need, temporarily, even more specialist skill-sets to come into your organisation. We know that transitions can go wrong, and so you end up having to do a lot of contingency planning around those transitions.

So, given how expensive it is to fix ‘legacy’ – it better solve a lot of these problems, right? Well, actually, no. I think that we can reliably solve these two problems – that nobody wants to work on it, and it doesn’t look good on our CV – but very few organisations can confidently say every piece of modernisation that we do is going to result in something that’s low-cost, easy to change, that the risks are all managed and mitigated, and that it reflects how we want our organisation to operate. These problems are harder to solve, and that’s the motivation for sharing some of these things with you today.

So when you’re transforming your legacy, then what are some of the levers you can pull? So by levers I mean inputs to your delivery process. So there’s quite a few:

So there’s obviously the tools: So, is this going to be in Java or is it going to be in Ruby? Is this going to be an off-the-shelf product or something we build ourselves? Are we going to use an Oracle database or an open-source database? There’s also the problems that you use as your inputs to your team, right? So are you feeding your teams problem statements like – you need to build an API with these particular inputs and outputs; or are you letting your teams solve problems in a cross-functional way, by saying things like we need the service to do better for citizens over the age of 40?

There are the skills: So, who do you have in your organisation? Can you hire or can you train extra skills into your organisation?

There’s governance: So, how do you check that things are going okay?

And there’s leadership: So, who are your leaders? What are they doing? Are they more ‘command/control’ leaders or ‘servant’ leaders?

These are all the inputs to our problem, and I think this is the thing that I’m trying to persuade you to avoid. This is the unambitious modernisation where we just talk about the tech and we don’t look at the rest of it. And, when we do that, these boxes that are still in grey, the things that we haven’t touched, that’s what I’m calling our legacy decision-making, so decisions that are made in a legacy way.

I’m going to work through an example. An example is: How do you target support as a service? So, what do I mean by that? Pretty common scenario in local and central government is that you’re tasked with providing some support to citizens. So a couple of examples that I’m familiar with – The Student Loans Company. They support students through giving them loans and grants. And there’s council housing, so that supports people to have an affordable, decent place to live. And if your organisation is doing that kind of support somewhere, it’s probably got a long history – or you might say legacy – of trying to answer that question: How do you best target your efforts? How do you target the support that you’re giving?

There’s a brilliant document published in Scotland by the Scottish Legal Aid Service, where they’re reviewing the Legal Aid system up in Scotland. What they say is that in moving forward there is a careful balance to be struck between simplicity, flexibility and fairness. And I’m not gonna read the whole quote out, but they’ve got a really great sentence there, which is they say that “Complexity has been driven by fairness.” And I want to dig into – what does that mean? I think it’s a pretty common thing that we see across governments. It’s a really nice encapsulation of what we can mean by a legacy decision-making process. So after reading that report, this is what I think is happening with that particular service.

So the Scottish Legal Aid Service has a goal, and that is to increase access to justice in Scotland. And, naively, you might think – well I guess to do that, you could just take everybody going through the legal system in Scotland and just give them some money, divide it by the number of people that need it. It would be super easy to administer, of course, but in reality the problem space is way too complex to do something like that. So, for example, what if a legal action isn’t in the public interest, or what if some people need more help than others? Clearly, you know both those things are true, and so you need to get better and better at targeting the support you offer, and you end up with a process that looks like this:

There’s some theory – or report or review – about how you might retarget resources to increase access to justice further. That goes through some process of ideation – where people think about, well, how might that interact with our current policies and how might we give citizens access to something like that. Then a decision is made. Again, in the Public Sector, that decision might take the form of policy. And then an edge-case is produced. So previously this was the flow, and now we’ve got this little sub-case here, where we’re going to treat it a little bit differently, and you end up with a process that looks like that.

But let’s keep drawing on this diagram, and think about: What does that mean for systems? So, firstly where are your digital teams? Chances are they are here at the edge-case, so the input to the team is a policy or a decision, and everything else is upstream of them. These digital teams might be Agile teams – as in they are designing, implementing, learning, iterating – but what they’re learning is: how do you best implement the decisions to the left that are upstream?

A lot of this is about left versus right. So, all of the ethical aspects of this sit to the left of the digital teams – often for good reason – there might be a democratic decision in there, like an election manifesto pledge or something. But there are absolutely operational implications snuck in on the left. So – who’s doing the theorising, the ideation and the policy, and can those people accurately predict and comprehend the totality of how complexity in your service changes when they throw you yet another edge-case? And could a simpler decision now mean that the next decision is implemented faster? Put simply, the people with the power to achieve that balance of simplicity, fairness and flexibility that the review into the Scottish Legal Aid system suggested are all on the left; but the people with the operational knowledge to make that happen are on the right. And this is what I’d call a legacy decision-making process, and this is the result.

So, I think that – reading that report – it sounds like there’s an over-focus on fairness at the expense of the other two factors they mentioned, namely simplicity and flexibility, which leads to complication, cost and rigidity. I strongly recommend checking out this document – “The Independent Review of Legal Aid in Scotland” from 2018.

The pipeline guarantees this imbalance, and this is the result of that. This is what happens when the input to your digital teams is just edge-cases – just policy edge-cases. I don’t know if this looks familiar to anybody, but you try and draw out a problem space and you end up with a lot of systems, a lot of arrows going between them, and then you start thinking about – where do we start? What’s the bit that we can change first? – and you point somewhere, and then somebody pipes up and they say – Oh no, you can’t change that because there’s this policy, back in 2008, that says this. And then you point somewhere else and somebody else says – Well, you know, I don’t know what that does, so I don’t want to touch it. And then you try somewhere else, and somebody says – Well, another team asked for that, I don’t know what it does, but it seemed important to them!

This is what happens when your input is edge-cases. So, if you’re in a scenario like that, would this be an acceptable digital response to that review? Well, clearly not, because the decision-making process is untouched. What’s the new system likely to look like? If you just touch the tools, it’s highly likely to have many of the problems of the old system. You know you’ll probably fix some of the problems, but you certainly won’t fix them all, and if you’re spending that much money, that’s wasted opportunity, money and effort.

So I want to talk about some techniques – I’m going to go through four – techniques for introducing your teams to the idea that they can deeply challenge this. So it’s not all bad news, right? Maybe hearing that process now, especially if you’re working in digital, makes it feel a bit hopeless. You think – well, so much of this is outside of my control that even if I’m in a very senior position, what could I possibly do to change it? But actually that’s not true; there is stuff that we can do. It’s about making sure that we’ve got a balance of techniques in that modernisation – not just techniques for thinking about our actual tech, like programming languages, databases, hosting – but also making sure that we’re introducing techniques that tackle our decision-making head on. Here’s a few, as an illustrative example, and I’m gonna go through four of them.

So the first one is hypothesis workshops: This is a technique to figure out – what’s the actual point in the stuff that we’re building? And I’ve worked through an example here that we did at the Ministry of Housing, Communities and Local Government. I see some people from there on this call – hello! We ran this session slightly differently, I’ve refined the format a bit, but I’ve used our outputs to illustrate it.

So, what do you do? So you take a big wall and a bunch of sticky notes, and you start by putting your goals on the right. Our goal here is to reduce the CO2 emissions from the UK housing stock. If you’ve got multiple goals, like more than three, I’d probably consider running this session multiple times with a smaller set of goals each time. So you’ll have a small set of goals on the right there, actual goals. So I think a ‘goal’ is the impact you want to have on the world, as opposed to – we want a new database.

And then on the left put down the things that you know that the legacy system supports. You might call these ‘features’. Maybe that’s too low level, and maybe you want to do something like ‘journeys’ or ‘services’. I think you’ll probably have a hunch about what works for you. And often when you start a legacy project, you’re told – here’s all the stuff that it needs to do, this is what it’s done before – and actually none of that stuff has been validated in perhaps 10, 15, 20 years. And what this lets you do is revalidate that, and talk about how you use the outputs from this as well.

So what we’re going to do then is draw the series of events between the feature on the left and the goal on the right that have to happen for the thing on the left to achieve the result on the right. And we’re going to be really specific here. So the system in question was the Energy Performance of Buildings Register. In a nutshell, it’s a large register of buildings and how energy efficient those buildings are, and the overall goal is to use that data to get a grip on the net zero carbon emissions targets in the UK. So there’s a whole bunch of things that it currently supported, and we’re going to walk through a couple of them. Broadly, you can use it to find somebody to come to your house to do one of these assessments; you can use it to see what kind of assessments you’ve got already on your house or your commercial property. It does quite a lot of stuff, and we were sceptical about the value of some of it, so we used this to validate it.

So, how do you play this workshop? We’ve got these events, and the events are pretty specific. So let’s look at the first feature and walk through it. The first feature is that you can use it to find an assessor to get an inspection done. So you can go online and you can put in your postcode and say, I want somebody to come to my house to assess how energy efficient it is. And we thought about, well what’s the actual series of events that have to happen for us to achieve a goal based on that feature? And this is the most credible set that we came up with – and that is that I go to sell my house – that’s important because in this example that’s the point, we’re obliged to get one of these certificates. I know I need a certificate, I can’t find any local businesses that can do my certificate, I use the service to find an assessor and get it done. Because I got it done, the data is now more complete, and that leads to better policy.

And then we think about – well, how likely is that series of events? So going left to right, assuming everything to the left is true and has happened, how likely is the next event? And let’s just rate them out of five. So first up, I go to sell my house: How likely is that? Well, quite likely – people sell their houses all the time. Let’s give that a five. I know I need a certificate: There are probably a lot of people that don’t know that, but it’s easily googleable, and you probably do some research before selling your house, take some advice. So I’ll give that a three out of five. I can’t find any local businesses that can do my certificate: Now this is the point where we thought, that’s actually not that likely, because you could just use Google or look through the Yellow Pages. The idea that you wouldn’t find an assessor some other way is less likely. After that, you use the service to find an assessor and you get the assessment done: Well, that feels quite likely. If you’ve got the contact details of an energy assessor, you call them, they show up and they get it done. Not too complicated. And the data’s more complete, leading to better policy.

So yeah, I think we’ve got a strong hypothesis that open data around energy performance means that, for example, if you are a council trying to figure out what investments you should be making in the housing stock, you’ve got more data to do that. And we gave them all those ratings and ended up with this. So while some of the things are likely, there’s this key event in the middle that has to happen, which is pretty unlikely, and that tells us something about the value of this particular piece of our work.

So then moving down to see suggested improvements to my house, the bottom row there. This time round it was looking good right until the end. So this time around we had I go to sell my house again – pretty likely; an estate agent tells me I need a certificate – again, yeah, pretty likely; I get the certificate done – yeah, if the estate agent says you legally have to do this, you’re probably gonna do it. I see I can save on energy bills if I get better insulation – so I’ve got this certificate now, just paid for it, and it says if you get better insulation you can save on your energy bills. Again, it feels pretty likely: you just paid for it, you might well read it. But then, are you going to pay for that insulation? Well, we thought that seemed less likely, because they’re about to sell their house, right? They’re not going to go tearing up the walls or replacing the windows after they’ve already listed with an estate agent. So this last hypothesis isn’t looking the most healthy, though it’s actually one of the simpler things that we have on our backlog.

The last one went: I go to sell my house; the advice says I might need an inspection; I use a service that helps me understand if I need one; I arrange and get the inspection done; and the data is now more complete, leading to better policy. These are all likely events, and so this really simple feature we had in the legacy system – just finding out whether or not you need one – we learned is actually one of the most important things we can tell our users.

And so, what do we do with that, and what’s that got to do with legacy? Well, the key result is that firstly you’ve got this striking output, right, and you can do this exercise with your most senior stakeholders in the room – you need them there anyway because they’re often the ones that can tell you about the probability of some of these events – they might have spent a long time thinking about this particular service, this particular area of policy. It gives them a clearer view of what will actually happen if we neglect one of these edge-cases.

Think about that edge-case pipeline. Before you do this, the edge-cases are all kind of equally weighted, and obviously if everything’s important then nothing’s important. This gives you a much better way to think about risk as you’re doing your migration, doing your modernisation. You can be thinking about – well, what are the things that we truly cannot risk? And what are some of the things where you think – well, that can be a learning exercise, or that could be something that we solve later, if we need it at all. And without that kind of way of discussing with a more senior stakeholder what really is risky, you’re starting off with that same 20 years of edge-case input to your new legacy system, and you’re going to have that same problem, that under-focus on simplicity.
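The left-to-right scoring in this workshop can even be sketched in a few lines of code. This is a minimal illustration only – the event names and 1-to-5 ratings below are made up for the example, not the actual outputs of the MHCLG session:

```python
# Score one hypothesis chain from the workshop. Each rating answers:
# "assuming everything to the left has happened, how likely is this event?"
# Names and ratings here are purely illustrative.
chain = [
    ("I go to sell my house", 5),
    ("I know I need a certificate", 3),
    ("I can't find a local assessor any other way", 1),
    ("I use the service to find an assessor and get it done", 4),
    ("The data is more complete, leading to better policy", 4),
]

# The chain is only as credible as its weakest link, so surface that event.
weakest_event, weakest_rating = min(chain, key=lambda pair: pair[1])
print(f"Weakest link ({weakest_rating}/5): {weakest_event}")
```

The point isn’t the code, of course – it’s that once every chain has explicit ratings, the key unlikely event in the middle jumps out, even across a wall of ten or twenty hypotheses.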

So, what’s our second technique? Second technique is to think about platforms and make them into products. So if we’re in the same room, I’d say – who has a user journey that looks like this? – and everyone would put their hands up. But who has a user journey that looks like this, right, you’ve got a user on the left there and they’re trying to do something, and to solve their problem, each of these squares is a system, to solve their problem this is sort of what the interaction looks like. So they use one system and then there’s a giant web of handoffs between a bunch of other systems nobody really understands, and eventually they get their result back, and of course it’s actually usually worse than that, because there might be a landscape of teams on top of these systems as well. And, you know, some things are obvious here, so there’s clearly too many handovers happening here. But it’s worth having this bigger picture, because zooming out and modernising any one of these systems is going to be expensive regardless of what you do. But this structure itself is going to poison your modernisation efforts.

So think about team two there in the bottom right. They’ve got no idea who is using their stuff and why, and not only do they not have access to their user, they don’t have access to the team who has access to the user, so the chance of them being able to carry out a successful modernisation in that context is pretty slim. So what can you do? Clearly, join up teams where you can, but when you can’t do that, if you can’t let someone own the entire slice of solving a user’s problem, there needs to be some interface. Maybe the services in the bottom corner are used by another team or another user too for example. Then impose a product identity on the thing that they’re building. Call it a platform, and make it clear to this platform team that they’ve got a product and that the customers of that product are engineering teams, digital teams, and that there’s no guarantee that those teams will continue to use their product. So don’t let them talk like a service team, don’t let them talk like a cost-centre; Only let them interface with the organisation as if they were a product team, it just happens to be an internal product.

Some good ways to do this: So, the product-in-a-box exercise is a really useful way of helping people get their heads around this stuff. So, literally, hand out some cardboard boxes and get people to draw on them, and say, well, if this platform was a product, how would you describe it? So start writing on the box: What does it do? What doesn’t it do? In what circumstances might you use it? Why is it valuable? It’s worth doing this even if the product is really, really bad to start with. So even if they’re going to start out with the worst-reviewed product in your whole organisation, it’s still worth doing. It’s worth doing because without that thinking in terms of – who is using this, what value are they getting from it, and can we invest in that value? – then this kind of integration piece in the corner of your organisation is going to continue poisoning the modernisation efforts around it. So let that platform team exist, give them a strong product owner and let them start to deeply understand their customers – to deeply understand, well, why this engineering team? Why haven’t you moved away from us? Why haven’t you gone and, I don’t know, bought something from a commodity cloud platform, or a piece of third-party software, or rolled your own thing? Really start to deeply understand that customer value.

And don’t compel teams to use these kinds of platforms that you spin out, either. So if they really decide it’s going to be easier to implement something by moving to a competitor, like a commodity cloud platform, or doing something themselves, and they do that, then obviously they’ve actually solved your problem for you! So that’s not an issue; but if they don’t, you should be able to see that product improving through understanding what their needs are and why they stay. And I want to call out GOV.UK Platform as a Service here – check out their published materials, or if you’re working in government, check them out on the Cross-Gov Slack. I think they are, by some margin, the Public Sector’s shiny example of a team whose customer is engineering teams, but who are really nailing this idea of being a platform and having a product proposition that shines through from all their documentation. If you’re in a similar situation I recommend checking that out.

Thirdly, how do you get there? So let’s talk about Conway’s Law for a second.

So Conway’s Law is that the systems in an organisation will tend to follow the internal communication structure of that organisation. So you often end up with systems like the ones on the left. Your systems are communicating in this way, and then if you map out your teams and how they communicate, you end up with a very similar or the same diagram. It’s worth trying to do this in your own organisation – it’s often surprising how well these things track each other. And you can see how it happens, right? One department procures a system to help it do its job, then automation becomes more and more important, and so now suddenly they need to communicate with the system that another department bought. And you end up with this emergent pattern of your architecture and your computer network following your organisational network.

And this is something that I hear a lot when talking to people who have decided to do some legacy modernisation and are a bit worried about how to start. They say: well, you know, we’d love to have teams that solve a whole problem for our users, but our systems just aren’t set up like that. And it implies this kind of plan, right: first you’re going to fix your systems, upskilling your teams and your people along the way, and then you’re going to reorganise the teams to fit those new systems. But the problem is that step one is just never going to happen, because you’re asking your teams to fight against Conway’s Law. I think it’s a bit like going on a really radical diet – maybe you can keep it up for a bit, but it takes so much energy, focus and determination that it’s going to be hard to maintain for any significant length of time.

So here’s what you can do instead. Switch it around. This is the Inverse Conway Manoeuvre – so invest in the skills you need, maybe hire if you have to do that, but focus your reorganisational effort centrally, on shifting the teams around before it feels like they’re quite ready. And if this works right, then by Conway’s Law, what those teams are doing is building stuff that matches the structure that you wanted anyway. And the thing that you’re aiming for, which is solving a whole problem for your users, becomes an emergent property of your architecture, rather than something you have to be designing out from the middle. This works really well.

So I’ve seen this done in government a couple of times now, and I’d say, don’t have blind faith in it. I’ve definitely seen some Agile coaches talking about it as if it’s gonna work like magic, and clearly that’s not the case. It’s definitely worth thinking about – are there any small changes, that low-hanging fruit, you can make in advance, just to make everyone’s lives easier? Obviously it’s worth thinking about, do you have the leadership in place on your teams to do this? And it’s worth thinking about, how do we manage the expectations of stakeholders? You know, we’re going to cause a little bit of chaos temporarily, and that’s going to slow down velocity for a bit, but this is why it’s worth it overall. But, you know, with those caveats this does work – just allowing that to become an emergent property of your organisation.

And the last thing I want to talk about is: think big, start small. In government, I think we're quite used to seeing everything as needing a massive programme to do things right, and that comes, I think, from aspects of how we fund technology. For example, a great way to get something funded is to be seen as the solution to a problem – and then suddenly you go from having no funding for your solution to lots and lots of funding, more than you can even stand. It really isn't the case that we're able to perfectly line up when we want to do something with the effort we want to put into it. So we find ourselves in these programmes – and if you do find yourself in that situation, definitely keep the ambitions massive. You really do want to see something like 95% of your team effort going into value-add over routine maintenance, and you really do want to see outstanding feedback from your users and citizens. But you really, really want to challenge how you're going to measure that early on.

So, for example, let's say your overall goal is to reduce manual caseload in a system by 75% year on year – the number of times a human has to come and intervene in a case, versus cases being handled automatically. Instead of starting there, can we ask: how fast can we decrease this by five percent? Or by one percent? Or even by a single case – could we do that in a day, or two days? And the reason for this is that your plan is wrong. The plan you came up with at the start of this large programme is not going to be the right plan to get you where you want to be, and you want to learn how it's wrong as early as possible.

But on top of that, something most people who've worked in digital for a while notice is that small positive effects aren't linear – they compound. I was talking to Dave Rogers – everyone knows him – and he used this analogy of a freezer: if you're defrosting the freezer, you start off by chipping off little bits of ice. There's a bit coming off here and there, but then eventually the whole block of ice just falls away, as it starts melting and you're chipping as you go. It's a similar thing in tech. If you've worked on a digital team, you've probably seen this: your team is really struggling to get one thing over the line, but as soon as it does, all of the similar things just seem to solve themselves. And that's what I mean by those effects compounding.
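To make that arithmetic concrete, here's a rough back-of-the-envelope sketch. It's a hypothetical model, not anything from the talk: assume each small win reliably cuts the remaining manual caseload by 1%, and ask how many wins it takes to reach the 75% reduction mentioned earlier. Because each cut applies to what's left, the reductions compound multiplicatively.

```python
# Back-of-the-envelope: how many small, repeatable wins add up to a big goal.
# Assumes each win cuts the *remaining* manual caseload by 1% - a deliberate
# simplification; real progress is lumpier, and often speeds up once early
# blockers fall, as in the freezer analogy.

target_remaining = 0.25   # a 75% reduction leaves 25% of the caseload
per_step_cut = 0.01       # each small win removes 1% of what's left

remaining = 1.0
steps = 0
while remaining > target_remaining:
    remaining *= (1 - per_step_cut)  # cuts compound multiplicatively
    steps += 1

print(steps)      # number of 1% wins needed to reach the 75% goal
print(remaining)  # fraction of caseload left at that point
```

Roughly 138 one-percent wins reach the goal – far fewer than the 300 you might naively expect from adding 1% reductions linearly, which is the compounding effect in miniature.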

It's also really important not to underestimate the effect of your teams getting to experience a victory – a win. Even in the best of scenarios, working on legacy can be demoralising for your teams, so you need leaders who can tell the team: amongst all the chaos we're causing, what are the important things that are going to remain constant? What's our mission, and how do we connect to those important things? And if you find yourself having to be one of those leaders, one of the most powerful things you can do is let your team experience a win. To say: yes, we're working towards this two-year goal or whatever, but this is what we did today. This is our evidence that we're on the right track.

So what's the takeaway here? What am I saying? Am I saying that if you do these four things – run these four workshops, use these four techniques and only these four things – then every time you modernise you'll do a great job? No! If it was that easy, I would be out of a job. These techniques are a small, illustrative subset of what your team should be doing. At the same time as they're thinking about the technology, your teams have to be deeply examining, challenging and overturning those legacy decisions – in all of the spaces on the left, all of the levers to modernisation that we have – and talking about them from the start. If you want to build tech that's fundamentally different from the thing you're currently stuck with, that isn't an optional step of the process. And by making sure you split your team's attention across those things, you've got a much better chance of really getting to grips with fixing your legacy decision-making process, not just swapping out some tools.

So thank you. I’m going to hand over back to Jack to start the Q & A.

Jack: Thank you very much, Tito. That’s perfect timing for our Q & A. We can start with our first question: “Where can we find more information on other useful workshops that we can run?”

Tito: Yeah, that's a really good question. So my go-to is Twitter. If you're following people like Simon Wardley and just watching the interactions he's having... you're probably not going to get a pre-packaged "here are the steps to run this workshop", but what you will get is a way to look at your problem and categorise it – and you'll have to do a bit of designing your workshop around that. And you should probably do that anyway, because nobody knows your team and your context better than you do. But Twitter's definitely my go-to. Start with Simon Wardley, look at the people tweeting at him and his replies; that's a good start.

Jack: Lovely stuff. Next question: “Who should we invite to our hypothesis workshop? Would it be everyone in the department team up to senior stakeholders? Do technical teams need to be in these sessions?”

Tito: Yes, absolutely all of those people need to be involved – this is about the hypothesis workshop specifically – for a couple of reasons. Firstly, because they know some of the probabilities. You could go left to right and build a hypothesis that, if each of those things happened, it would meet your goal – but it won't happen. Let's say we replaced one of those with... in our worked example: I go to sell my house, I use the new AI system, I just tell it what I want and it does everything for me, and then the assessment gets done. Well, yeah, that would solve your problem, but it's not going to happen! So you need digital people in the room to give you that kind of realism, and you also need non-digital people to think about: who are the people we're talking about, who currently uses this service, who complains, what are the big issues I've had to deal with? All those things give them a feeling for the probabilities.

And the last reason you want more senior people there too is that they're the ones who need to accept this idea. You're going to have these tiers of risk across the things you're tackling, and they're who you want to build a shared risk profile with – to say, well, is it so bad if we risk this thing versus that thing? Having them there, experiencing the impact of putting that together collaboratively, goes a long way to making that chat about risk more realistic.

Jack: Our next question is: "Any successful strategies for demonstrating this approach to leaders who tend to lean on technology and tool choices to fix problems that need to be addressed via team structure and strategy?"

Tito: That's a really good question. So I suppose the one thing you can't control is the quality of executive decision-making in your organisation, right? It might well be the case that people just disagree with you, and I don't know what to tell you in that case – there's not a huge amount you can do. But if you wanted to persuade them – to say, hey, this is what you're doing, these are the things that are missing – I think talking about how things have gone wrong in your current system is usually quite powerful. There's a reason why people are willing to spend money on this thing. There's a reason why suddenly we're saying, how do we modernise this? Why are they even talking about it? And if you can find a way to link the effect you're trying to counter to some aspect of the system that's not just tech, that's quite a powerful argument. So let's say a system went down and it got in the press, and people couldn't do their jobs for a few days, or something like that. Can you point to other organisations who – yes, they're using this weird 20-year-old technology, but their systems don't break every two months? Or can you point to the direct cause of the thing? Like, why didn't we catch that error in advance? Is that literally because of the technology, or is it because no one person, no one team, has access to all the bits of data?

So I'd really try and work through those past examples – and you might even learn that you don't need to fix the thing at all, that maybe it's actually good-enough legacy, and that's a nice thing to learn too.

Jack: Next question: "Do you have any recommendations for reading/resources on changing the mindset around legacy things? Changing the 'I know it's difficult, but I've learned how to use it, so why change?' attitude?"

Tito: Yeah, so my recommendation here is a lot more abstract and it's not really about tech at all, so sorry if it's not what you're looking for! My recommendation is a book called "Birth of the Chaordic Age" by Dee Hock, the founder of Visa. He talks, in a more abstract sense, about what you need to do to set up your organisation to exist in that weird balance of chaos and order. A lot of the way I started thinking about organisations came from that book. But it doesn't talk about tech specifically.

Jack: “What excites you most about avoiding the legacy trap? How have you seen this resonate on an emotional level with decision-makers and Agile teams?”

Tito: I think that most people want to solve problems for their users, and I think that's why they stick around, especially in the Public Sector. And being able to say, not "what we're doing at the moment is swapping out this kind of table for this kind of table", but being able to provide the leadership to say, "no, what we're doing is giving ourselves control over this process, so that over the next two years we can do this and this for our users that we couldn't do before" – I think that's a pretty strong motivation for people, especially those of us who've chosen to work in public service.

Jack: Next question: "I liked the definitions of legacy systems. Can you expand on the one about any system without automated testing solutions?"

Tito: Yeah, so this comes from the book "Working Effectively with Legacy Code" by Michael Feathers. He opens the book by saying that's his definition of legacy, and he acknowledges it's probably a controversial one. But what he's trying to get across is that that's where your legacy is, in terms of code, because that's the thing that isn't under control – it's not literal age we care about when we're thinking about code. I could write a function today and it would be terrible, and people would probably say "oh, that's legacy" even though it's brand new. Or there could be something written by someone better than me last year which is still perfectly good. What I'm trying to get across is that tests are the big factor, because with them you can change things, you can iterate, you can do it safely.

So yeah, I started with that – I've heard a lot of people from the extreme programming community talk about it – but I really wanted to zoom out and think about systems, not just code.
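The idea behind Feathers' definition can be sketched in a few lines: before changing untested legacy code, you write a "characterisation test" – his term for a test that pins down what the code currently does, so you can then change it safely. The function below is purely hypothetical, a stand-in for inherited code:

```python
# A characterisation test pins down what untested code *currently* does,
# so you can refactor or replace it with a safety net in place.

def legacy_reference_format(raw: str) -> str:
    # Hypothetical stand-in for inherited code nobody fully understands.
    return raw.strip().replace(" ", "-").upper()

def test_characterises_current_behaviour():
    # Assert what the code does today, not what we wish it did.
    assert legacy_reference_format(" case 42 ") == "CASE-42"
    assert legacy_reference_format("ok") == "OK"

test_characterises_current_behaviour()
```

Once behaviour is locked down like this, the code stops being "legacy" in Feathers' sense: you can now iterate on it and know immediately if you've broken something.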

Jack: “How do we escape the trap of vendors who have locked us into legacy services so that we can focus on taking the journey on our own terms?”

Tito: That's a really good question. There are two sides to it. The first side: if you're talking about procuring a system, I think it's all about open data. The thing you're really procuring is the data the system generates – is that in some open, standard format, or is it some weird thing that only this system understands? And the same with the interfaces: is it communicating with the rest of your estate in some open, repeatable way that other things could pick up on in the future, or is it doing something wild that only it knows? So if you're procuring a system, I'd focus on those things.

If you're talking about getting a vendor in to help you build software, then I think it's reasonable to ask them to build this into the outcomes they intend to sell you. So don't just specify – you know, we've got a Ruby system, or a Postgres database, or something like that. Talk to them about whether the outcome can be: yes, we've got this modern system, and it is run by civil servants, or by contractors, or by employees of the local authority or the local health board – whatever your run model is. Build that right in, and say to them: this is what we want, so make your tech choices based on us being able to hire, and this is how we hire. I think most good vendors of custom software would be happy with that.

Jack: Lovely. I think we’ve got time for one last question: “How do we get people across our organisation agreeing on what we mean by the label ‘legacy’?”

Tito: Okay, so – do you need them to agree on what we mean by the label, or on what is legacy, where the legacy is? I think it's probably a bit of both. So if you've got some people in leadership positions, some in operational positions, some in digital positions, can you get them all to agree? I wouldn't focus so much on looking across the whole estate and asking of each thing, "is it legacy or is it not?" – maybe flip it a bit and think about: what's our pressing problem? As long as we call that legacy, we don't really mind whether the rest of the stuff gets the label or not, right? So I'd try and think about it pragmatically in that way. And if you end up in a conversation with somebody about "is this legacy or is this not legacy?", I think at that point the label has outlived its usefulness. Instead, just switch the conversation. Say: okay, never mind, it might be legacy, it might not. Can we talk about whether it's easy to change, whether it's supported, whether it's expensive, whether it looks good on our CVs? All of those factors instead – that's probably a more productive conversation.

Jack: Lovely stuff. Alright, well I think that's about enough time, so let's wrap up. Firstly, a huge thank you, Tito, for making time to speak to us all today – and I'm just going to share my screen very quickly. And secondly, a huge thank you to all of you for making some time in your day to attend. We really appreciate it.

We're going to be sending everyone who attended today a feedback form. It takes about one minute to fill out and goes a really long way to helping us improve our topics and future events. We'd also like to announce our next webinar, coming on the 26th of August, with our very own Senior Engineer in Bristol, Scott Edwards, who's going to be talking about how to upskill your technical teams effectively.

I'd also like to mention our new ebook, launching tomorrow on our website. We'll be sending our attendees information on how to get their own copy.

If you want to learn a little bit more about what we've been talking about today, in terms of modernising legacy applications in the Public Sector, or if you want to reach out to Tito about anything we've spoken about and how you can apply it to your own organisation, our details are up on the screen.

Our socials are up on the screen too, if you just want to stay in touch and keep in the loop on all things Made Tech. Please feel free to reach out to Tito – he's very active on Twitter and will be keen to answer any questions that didn't get answered during the Q & A.

And without further ado, have a lovely afternoon and take care! Thank you very much, and thank you, Tito.

[recording ends]
