Transcript of "Discovery is done… When?"

ROBIN KNOWLES: Hi everybody, and welcome to our session, Discovery is done…When? I really like the title. How can teams ensure they are on track to deliver a discovery? I am Robin Knowles from Digital Leaders. I am chairing today’s session, so I am just going to make some introductory remarks and then I am going to get out of the way and let our speaker give you the content you have come for.

Running a discovery stage is a vital step in building a good service in the public sector, we know that. It helps ensure the team understand the problems faced, know who the users are and understand the context. This in turn helps make sure you are coming up with hypotheses and ideas to take into the alpha which solve the right problems.

However, despite the importance of a good discovery, lots of teams struggle with knowing what good looks like. Without a defined backlog and list of user stories, delivery and product managers often struggle to know whether a discovery is on track to deliver the right answers at the right time.

Our speaker today will talk through this process, with particular reference to some of the templates Made Tech have now adopted, and also demonstrate the impact this has had with some recent discoveries.

So, some housekeeping. Cameras and microphones are going to be muted, questions at the end and in the Q & A. You can put them in the chat if you must, but there is a Q & A button there to use. If you would like to put your questions in there, we will keep them until the end, until after our speaker has finished. And we are recording today’s presentation.

Our speaker today is Laura Burnett, Made Tech’s Head of Delivery, and a people-focused product specialist with a wealth of experience in building high quality products, managing global teams and driving positive organisational changes. Laura says she has a true passion and drive for delivering the highest level of value to the public sector. Delivering by inspiring high performing teams and guiding them to overcome complex challenges, all whilst being the best version of themselves.

Laura, I will jump out of the way, and I am looking forward to your presentation.

LAURA BURNETT: Thanks so much, Robin. Just a bit of my own housekeeping. One of my colleagues is sharing their screen on my behalf because we had a slight technical challenge, so forgive me if I say, “Next slide,” at some point.

Yes, as Robin said, I am Laura, I am Head of Delivery at Made Tech. I support our delivery community here to enable our teams working on deliveries across public sector organisations.

Previously, I spent many years working as a Delivery Consultant at Valtech, and I worked on large digital transformation projects for companies such as National Rail Enquiries, EasyJet and Just Eat. Next slide please.

My details are there, if anyone wants to connect with me, and will also be at the end.

Made Tech are public sector technology delivery experts. We only work with the public sector, so we have a wealth of experience all the way through the GDS service standards and phases. We work with many organisations across the public sector, including projects in health, local government, central government, and defence.

Today I will cover five key areas. Firstly, an explanation about what a GDS discovery is. Secondly, some of the challenges I have observed when running discoveries. Thirdly, how an effective kick-off can set up a discovery for success. Then, how we have used ‘Discovery is done…when?’ objectives to run a discovery. Finally, the roles and responsibilities that you might see in a discovery team.

First, what is meant by a GDS discovery? The Government Digital Service, or GDS, was started after a lady called Martha Lane Fox, the founder of lastminute.com, wrote a letter to the then Minister for the Cabinet Office, Francis Maude. In it, she called for ‘revolution not evolution,’ following her review of direct.gov, where she identified that the government needed to consider digital as an enabler for public services. This letter set things in motion and resulted in the creation of GDS.

The methodology that came about from this change took a lot of influence from the lean start-up, introduced by Eric Ries in 2008. It also took influence from the double diamond, released in 2004, which captured a process for defining and then working on problems.

The GDS founders took the lean start-up and the double diamond, combined them, and left us with these four stages. These posters by Elliott Hill explain it really nicely but today, I will just be focusing on the first stage, discoveries, or exploring the problem space.

There is a bit of a potential over-simplification on the next slide, but discoveries are all about asking ourselves what we should do first. Too often, I see teams frame discoveries as, ‘let’s explore the problem space’ or ‘let’s find out more about user needs.’ Definitely, that is part of it, but the other part is really about understanding what the priorities are, and what we should do first. If we don’t uncover this, we will come up against a lot of problems.

Then alphas. Too many teams focus on whether they can make the service usable but, really, this is what we should be doing in beta. In an alpha, as quickly and as cheaply as possible, we want to work out whether we can solve the problem in a viable and impactful way. Betas are more about making the service usable by everyone and ensuring that it is accessible in lots of ways.

Finally, the live phase. Is the service scalable? Can the whole population use it?

This whole staged process is all about reducing risk, and in real terms, saving taxpayers money whilst building services that meet the needs of citizens.

So, why focus on discoveries? Well, in my experience, they tend to be the type of project that goes wrong the most frequently. Some of the reasons for this include the short, six-to-eight-week timescale that most discoveries run in. It leaves little room to fix issues if they arise. For example, if user research doesn’t start for three weeks due to recruitment issues, you risk being halfway through the project before you can use research findings to inform your next focus and the overall problem statements.

Being told to explore the problem space can be pretty daunting. It often feels like a really big task, and it can be a challenge to know what to prioritise first. Sometimes, this can lead to delays or a slow ramp-up, further exacerbating the risk of the short time frame.

Unlike a digital service, where it’s often easy to see when an item is completed, for example a new form on a web service, the outputs of a discovery are often a lot more intangible. Deliverables such as service blueprints and personas might be created, but these outputs are only valuable if they communicate information in a way that supports decisions.

This intangibility, coupled with the generative nature of a discovery where we prioritise what to do next based on what we have already learned, can make it hard to assess whether the team is on track. I’ve seen discoveries where the team, including the service owner and the product owner, have been really confident about their progress throughout the project. That is, they are confident up until the final week of the discovery, where they suddenly realise they haven’t answered crucial questions that they need answers to in order to move forward successfully.

An opaqueness on progress and a lack of clarity over expectations can both lead to stakeholders being disappointed with the outputs, particularly where the product manager or service owner was unable to be fully dedicated to the discovery.

With all of these potential pitfalls, what can we do to maximise the chance of success?

The most important thing we can do is to set and agree an intention. As this diagram from Liz and Molly highlights well, success comes from taking action towards defined goals. Although some of the intention is likely to be defined in the business case procurement listing or statement of work, this often leaves room for different interpretations of what good looks like. Therefore, an effective kick-off is imperative for building alignment in the team, to help make sure everyone is clear on the expected outcomes.

Will Myddelton wrote a series of blog posts about discoveries that are very insightful, and well worth a read. In his article, ‘Setting up a Discovery to Succeed with a Small Team,’ he talks about the importance of choosing the right questions. We have taken a lot of Will’s insights and recommendations and built upon them to create a playbook for discoveries, which has proven successful in discoveries for the NHS, Camden and DLUHC.

When designing a discovery kick-off, we prefer dedicating a whole day to it, and where possible recommend that all team members and key stakeholders attend, clearing their diary as much as they can. If feasible, in-person sessions can be particularly effective, but Google Meet or Teams and a virtual whiteboard such as Miro or Mural can work almost as well.

We try to keep to four sessions of sixty to ninety minutes each, to maximise the amount of downtime people have between meetings, in order to absorb information, spend time getting to know each other, drink coffee, read emails and Slack, so that people can remain focused within the workshops.

During these sessions, we run workshops for team building, agree ways of working and review existing research and background. One of the most important sessions for a discovery is the vision and purpose setting workshop.

During this workshop we ask ourselves a series of questions based on those defined by Will Myddelton, and work through these as a team, to help build a unified picture of what good looks like.

First, goals. What are the things we want to achieve as a team? Second, assumptions. What assumptions have we already made? What do we believe we know already? Third, uncovers. What are the things that we wish we knew, but we don’t? Fourth, rabbit holes. What things should we avoid focusing on, because they are likely to be a waste of time or lead us down a dark alley?

Fifth, outputs. What outputs are we expecting to deliver? Do our stakeholders have any firm expectations about the format that they will receive findings in?

We use the answers to the previous questions and group them together to identify themes. These themes are then converted into a set of ‘Discovery is done…when?’ objectives: a list of questions that, once we have the answers to them, tell us we have completed the discovery.

For example, in the DLUHC discovery, one of the agreed outcomes was, ‘Discovery is done when we know how to define and measure the benefits.’ A tip: the service owner, or the product owner if there is no service owner, should have the deciding vote, but all team members should be able to contribute to the discussion.

I have added some links here to the Mural and Miro templates that we used for this workshop, so that you can try them yourselves. I assume that they will be circulated afterwards.

So, once this set of ‘Discovery is done…when?’ objectives has been defined during the kick-off, we use it to form the backbone of the whole discovery. The objectives can be used throughout the discovery, including in planning, to monitor progress and to structure the findings at the end.

They help form a steel thread on which to hang activity planning throughout the discovery.

Discovery is a process. Teams move together through understanding the problem, to identifying the opportunities, to proposing a delivery approach. They open up their thinking before focusing in on an approach. For this reason, discovery is sometimes pictured as a diamond shape. In the first half of the diamond, in the learning phase, we ask questions such as who for, and why?

In the second half, we are focused on understanding the opportunity. Here we ask questions about what, and when? And we focus on answering what we do next. Considering this diamond can help sequence the ‘Discovery is done…when?’ objectives, by overlaying the three questions onto the three phases of the diamond.

These sequenced questions enable us to put together a high-level discovery plan aligning questions to iterations. Generally, for a discovery, we recommend keeping iterations short, and we will often use one-week sprints. This allows us to iterate our planning and priorities regularly based on what we’ve learned, whilst maximising the opportunity to share findings and adapt our approaches in reviews and retrospectives.

This approach helps to mitigate the risks that we discussed earlier, associated with the short time frames of discoveries. By producing a high-level roadmap like this, we can align our objectives to the iterations and ensure that our scope and expectations are realistic, and that we have a visual representation of roughly where we think we should be during the discovery.
This roadmap and these questions can also support sprint planning. By making our sprint goal one or more of the agreed objectives, the whole team can self-organise their work to meet this goal.
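As a minimal sketch of this alignment step (not the Made Tech template itself; the objective wording and sprint structure here are illustrative assumptions), a roadmap mapping one-week sprints to objectives can be checked to make sure no ‘Discovery is done…when?’ objective is left unscheduled:

```python
# Sketch: align "Discovery is done...when?" objectives to one-week sprints
# and check that nothing has been left out of the plan.
# The objective names below are invented for illustration.

objectives = {
    "Who are the users and what problems do they face?",
    "What is the wider context for users?",
    "How do we define and measure the benefits?",
    "What should we do next?",
}

roadmap = {
    "Sprint 1": {"Who are the users and what problems do they face?"},
    "Sprint 2": {"What is the wider context for users?",
                 "How do we define and measure the benefits?"},
    "Sprint 3": {"What should we do next?"},
}

# Every objective should appear in at least one sprint's goal.
scheduled = set().union(*roadmap.values())
unscheduled = objectives - scheduled
assert not unscheduled, f"Objectives with no sprint: {unscheduled}"
print("All objectives are covered by the roadmap.")
```

The value is not the code but the check it encodes: if an objective has no iteration attached to it, the scope or the timeframe is unrealistic and should be renegotiated at planning.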

Having a plan and alignment on our expected outcomes gives the team better visibility of their progress, and allows them to inspect whether they are on track to answer the agreed ‘Discovery is done…when?’ objectives within the expected timeframe.

Our team introduced a weekly check-in, where all team members anonymously assigned a confidence level to each discovery objective. This confidence level ranged from, ‘we don’t think we will be able to answer this,’ right through to ‘we’ve got enough information for the final report.’ Once the whole team had voted, we reviewed scores and embarked on a discussion about any outliers or differences.

This data often fed into our retrospective and sprint planning. When we reached level 4 or level 5, we would then agree to start documenting the findings to ensure that the write up and communication of our learnings wasn’t left until the very end of the project.

We plotted the total scores for each question over time, which gave us a visual representation of our progress towards our objectives. This helped to visualise any risks or issues, for example, slow progress on a question, and provided an opportunity to have conversations about support that was needed to unblock the team.
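As an illustrative sketch of that check-in (the scoring scale matches the one described above, but the function names and thresholds are assumptions, not a Made Tech tool), the weekly votes can be totalled per objective and stalled objectives flagged for discussion:

```python
# Sketch of the weekly confidence check-in described above.
# Assumption: each team member scores every objective 1-5, where
# 1 = "we don't think we will be able to answer this" and
# 5 = "we've got enough information for the final report".

def weekly_totals(votes):
    """votes: {objective: [score, ...]} from one week's anonymous check-in.
    Returns {objective: total} so progress can be plotted over time."""
    return {objective: sum(scores) for objective, scores in votes.items()}

def flag_slow_progress(history):
    """history: list of weekly total dicts, oldest first.
    Flags objectives whose total has not risen since the previous week,
    prompting a conversation about unblocking the team."""
    if len(history) < 2:
        return []
    previous, latest = history[-2], history[-1]
    return [obj for obj in latest if latest[obj] <= previous.get(obj, 0)]

week1 = weekly_totals({"benefits": [2, 3, 2, 2], "user types": [3, 3, 4, 3]})
week2 = weekly_totals({"benefits": [2, 2, 3, 2], "user types": [4, 4, 4, 3]})
print(flag_slow_progress([week1, week2]))  # prints ['benefits']
```

Plotting the weekly totals from `weekly_totals` gives the progress-over-time view mentioned above; the flagged list is a prompt for conversation, not an automatic judgement.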

It also helped to facilitate conversations about reprioritisation if required.

I’ve also included links to the templates used for these, both in Miro and Mural.

Answering these questions is not really of much use, if you don’t effectively communicate the findings to stakeholders and decision makers. We have found that using these ‘Discovery is done…when?’ objectives provide a common language for the whole team and all stakeholders to keep referring back to.

For example, show and tells can be structured around the objectives that you have been prioritising. It is also helpful to share the progress-over-time scores, to help stakeholders understand how the team is progressing so that they can facilitate unblocking the team if needed.

In the final report, recommendations can similarly be structured around these objectives, helping set out clearly the answers to the questions we set out to discover in the beginning.

Finally, I’m going to cover the roles and responsibilities that make up a discovery team.

We recommend there should be no more than seven people in a discovery, and sometimes a smaller team of three to five can be more efficient in building a shared understanding. The more people that are in a team, the more time there will be spent in meetings and learning playbacks, to build the shared context that is required to make recommendations and next steps. Therefore, a conscious decision should be made over balancing the need to have the right expertise in the team, with the need to keep the team lean for agility and context.

Within a discovery, we coach team members to step outside their normal capability and area of expertise, working as a team to ensure the discovery questions are answered. The role of the capability specialist is to lead certain responsibilities and ensure that best practices are adhered to. It doesn’t mean that they carry out all of the activities in their specialism, alone.
For example, we say that user research is a team sport. Normally, a user research specialist will lead research sessions, and ensure appropriate safeguarding and research best practices are adhered to. Any team member can support with notetaking, research recruitment coordination and scheduling, and research analysis workshops.

So, how can each capability support the discovery?

Delivery Managers’ primary responsibility is to ensure that stakeholders are actively engaged in the discovery process, as without this involvement, teams often face blockers, or risk having their final findings rejected by key decision makers.

Therefore, we advocate for sharing findings and progress regularly, ideally weekly, with all key decision makers, to help build a shared understanding of the findings that went into the final report.

Delivery Managers are responsible for setting up the team for success: facilitating the initial kick-off, helping the team manage progress against the ‘Discovery is done…when?’ questions, and ensuring risks and issues are identified and mitigated quickly, to enable the team to deliver at pace. And finally, keeping one foot in alpha. For example, considering a high-level project plan, identifying any milestones or deadlines, and understanding budget constraints.

User researchers are key members of the discovery team. They are responsible for helping the team develop an understanding of the users and stakeholders involved in the problem space. They may take ownership of ‘Discovery is done…when?’ questions or themes such as, ‘We understand the types of users, including end users and service teams, and know about the way they currently interact with the system and the problems they face.’ Or, ‘We understand the wider context for users, and how this service supports them.’

Designers in discovery are responsible for building a picture of the current landscape. For example, helping visualise the process and pain points by mapping user journeys and clarifying the system-wide context using system maps. They can support the team with framing the problem based on evidence, and uncovering opportunities to solve these problems.

For example, facilitating ‘how might we…?’ workshops, opportunity mapping workshops and identifying how other departments have solved similar problems in the past.

Software Engineers are responsible for understanding the technical constraints, both of the existing problem space and of the team’s potential hypotheses. They can help answer questions about system interactions, data flows and security considerations. Their involvement is particularly important where legacy technology already exists, for example if part of the discovery is looking to answer whether a system should be replaced and, if so, how.

The product person, who could be a product manager, a service owner, or sometimes a delivery manager, sets the direction for the team, and has the final say on the ‘Discovery is done…when?’ questions, and has influence over where the team should focus on a weekly basis.

They are responsible for ensuring the team have agreed hypotheses for how the problems identified can be solved, and for considering how they will test these hypotheses before moving into alpha.

Subject matter experts are people with a deep understanding of the problem space and the people in it. They can help guide a team in where to prioritise and focus efforts. For example, recommending people to speak to or suggesting existing systems to explore. It is useful to keep a subject matter expert closely embedded into the team, but you also need to be mindful of not letting pre-existing biases influence the team.

For a subject matter expert, it is important to keep an open mind and consider all options. A great way to cultivate this mindset is to take part in user research sessions and team workshops.

Thank you all for your time today. Key takeaways from the session.

A well-planned and effective kick-off is crucial for team alignment. Agreeing a set of ‘Discovery is done…when?’ objectives provides a team with a common language and vision for what good looks like.

These objectives provide a steel thread for project and sprint planning. Having a team self-assess confidence in answering these objectives provides a tool to quantify progress. This enables earlier identification of potential problems and blockers.
These objectives can provide a useful way to structure findings and playbacks, and also support you in deciding who you need in the team, and what skillsets you need.

Thanks. That is everything from me, so, Robin, back over to you.

ROBIN KNOWLES: Brilliant, thanks Laura, great presentation, thank you very much. We’ve got some questions. Going back to your fabulous slide about what I can only assume was the Made Tech office dog: hints and tips on not falling down a rabbit hole. The question is, ‘What are your favourite hints and tips for avoiding rabbit holes?’

LAURA BURNETT: I think that is a difficult question, and a very good one. It is sometimes hard to know what a rabbit hole is. I find one of the most interesting parts of the conversations you can have during that workshop is if a subject matter expert, for example, says “Don’t go and investigate this thing, that is a rabbit hole and we don’t want you to look at that thing.”, you can really start to interrogate them and have a conversation about why. Because if we reflect back on that slide where I talked about pre-existing biases, it might actually be that that is exactly where you should be focusing. So, having that conversation up front is useful.

You have to have that discussion as a team, and make that agreement together. An example of a rabbit hole that we have fallen into in the past is spending a lot of time looking in great detail at an existing system, when the user research is telling us it’s so broken that almost starting again from scratch, or blue-sky thinking about what the user journey should look like, would be a better approach than understanding the existing system. That can be an interesting one to make a decision over.

Do we want to understand what it is already doing and factor that into our feedback or actually, is it perceived to be so broken that if you started again from first principles and built something, how similar or how dissimilar would it look? That can help prioritise and order things.

ROBIN KNOWLES: Brilliant, thank you. “What are the benefits of writing a report as the outcome of the discovery, instead of a set of different documents, process maps, requirements, benefits, risks etc? If you do create a report, who is the main audience it is written for?”

LAURA BURNETT: It’s worth saying that because I work at Made Tech as a third party that provides services to the public sector, a final report or deliverable is often one of the things that is expected from us. I do appreciate that it might be different if you are running something internally.

That being said, it would definitely link off to things such as service blueprints or stakeholder maps, or any of those other things that are determined useful for answering some of the questions. I think what it does help is to pull out some of the key information. If you just hand over a service blueprint or a system map or something like that, they can be quite complicated. If you are not initiated in what these things mean, quite difficult and challenging to interpret.

So, actually having something that pulls out what the key recommendations and findings are and then links off to the evidence, helps tell a more compelling story.

ROBIN KNOWLES: Brilliant. Picking up on the roles you talked through, you didn’t mention the BA role. “How do you see the BA role fitting in to the team?”

LAURA BURNETT: That was just a flavour. I uhmmed and ahhed about including Business Analyst and Content Designer. There are loads of different roles that could be needed. I had an interesting conversation with one of our user research practitioners today. They were saying that they feel sometimes we overuse user researchers for things that BAs are better suited to. And that BAs can really help with a lot of the user needs.

Similarly, they could look at some of the requirements that an engineer might be looking at. So, it depends. The other thing that a BA can be really helpful or useful with looking at, is how we measure success. What is already there? How do we convert those pain points and user needs into what a measurement framework could look like, for example, or a performance framework. So that when we move into alpha, we know if a hypothesis is successful or not.

So yes, I think definitely a BA can be useful. I think it’s just a case of looking holistically at the team, and working out what skills you need, and whether people have them within the team. Rather than assuming you need one role per skillset.

ROBIN KNOWLES: Brilliant. I’ve got a question about – I know you have worked in the private sector in the past. I just wonder what the main differences are between the private sector and the public sector that you’ve found in terms of how they treat discovery. How do they value discovery, any insights on that?

LAURA BURNETT: Yes. They are completely different or at least, in my experience they are completely different. Most of the time when a private sector company asked Valtech for a discovery, it was, “We want a cost, we are moving into a fixed price engagement.” Or “We want a much better understanding of what the total cost will be.” It’s often more about moving slightly towards a waterfall process of defining the requirements up front, coming out with a large list of user stories, having all of your epics broken down. Whereas that is not what you would be looking for in a discovery in the public sector.

Similarly, most of the discoveries I worked on in the private sector had an MVP within them as well. So, in a private sector discovery you would perhaps do some of the things that you would do in an alpha in the public sector. It’s often not a three-stage process; it is either one stage or two stages in the private sector.

ROBIN KNOWLES: You mentioned Martha Lane Fox coming in, in – gosh, my history isn’t brilliant, I’m going to say 2010, please correct me in the chat. This must have been going on for a while now, in the public sector. I wonder whether you are coming across doing discoveries on things that had their own discovery originally, or even two discoveries. Is that relevant, or do you just ignore the fact that somebody went through a discovery when they designed it the first time, and just say no, discovery is about where we are now?

LAURA BURNETT: Yes, I think definitely. The other example we see of that is in local government, where a council could be doing a discovery that another council just down the road has already done. One of the things that we would be looking at is can we share information; can we pull out things that have already been learned? Either through previous discoveries, in the example you just gave, or sharing learnings across other public sector organisations. Then you can really home in on let’s just test whether these things are still valid, rather than starting again from scratch.

I think that can really help keep things as lean as possible. Focus on the biggest and hairiest problems rather than starting again from scratch.

ROBIN KNOWLES: Yes. By the way, Laura, do have a look in the chat because people are saying nice things, and they are thanking you for some of the answers you are giving, so don’t miss that as we go through.

I suppose the last question is, there are always going to be people who maybe don’t get digital transformation in its purest sense, who are going to say this feels like an expensive first step, rather than spending money on this discovery project. Time, people…we’ve got a pretty good idea of what we want, let’s just get on with it. Do you think they are being held back by GDS guidance and things like that? Or do you think there is buy-in now, that people get this?

LAURA BURNETT: I think there are almost a couple of questions in there. Sometimes it might be appropriate to get on and start something and consider more of the lean start-up approach of build, test, learn. By looking at hypotheses as you go, doing small increments and treating it as an MVP that you are building on.

An example of that was the Homes for Ukraine service that we worked on with DLUHC. We had to turn that around so quickly, there was no opportunity to do a discovery. Now, there are flaws and things we wish we had done differently, but you obviously have to balance the need for speed with that upfront work. It was very much about building something, testing something, retrofitting the user research and then adapting as we went.

However, I think the overall rationale for discovery is very much about saving money in the long-term. Because you are trying to avoid solving the wrong problem or building a service that nobody uses or doesn’t need.

In the long-term, that’s obviously the theory behind it. I’ve definitely seen that in the private sector, where a highly paid person says, “Build this thing, I’ve decided it’s a good idea,” but then you build it, nobody uses it, and it was a massive waste of money. If you had done just a little bit of testing upfront, perhaps you would have saved a lot.

ROBIN KNOWLES: Just picking up on that example of Homes for Ukraine, do you think there might be a discovery-lite out there, that may win favour when you are up against it?

LAURA BURNETT: Yes. We had to turn that service around in four days. So, it was a case of: there are three really risky assumptions here, three really big things that are a challenge. We were basically working in one-day sprints. Do some user research about this thing; go and speak to two or three people; uncover what the challenges are; and adapt as we go. So, that probably was your discovery-lite in that sense.

Now we are actually just starting a discovery for that service and thinking about the longer term. If you take it up a level, beyond a Homes for Ukraine service, and instead think about how the government responds to humanitarian crises, you could build something that can be redeployed in the future, rather than having to react in a four-day window.

ROBIN KNOWLES: Brilliant. Our time together is up, Laura. It’s been really good. I really enjoyed the presentation; it was really comprehensive. Thank you to our audience for all of their questions and the positive comments about Laura’s presentation in the chat. It’s always good to see that you are enjoying it.

We will bring it to a close. Thank you, Laura, that was fantastic. The recording and the slides will be available as part of Innovation Week, so do follow that. If you have registered today, which obviously you have if you are part of today’s meeting, you will receive a copy of that as well.
Finally, Laura, thank you so much.

LAURA BURNETT: Thank you, it was good fun.

~ End of Recording ~
