Made Tech Blog

Enough with the chatbots already

We’ve all been there. Stuck with a “helpful” online chatbot when all you want to do is speak to a real person. It’s the modern equivalent of Clippy, the 90s Microsoft Office paperclip…

But while this seems to be an almost universal experience, chatbots are now popping up everywhere. From customer service and healthcare to banking and security – there’s a bot for that. How did we end up with such a disconnect between business expectation and user reality?

A chatbot, for chatbot’s sake 

The galloping excitement around AI means organisations are falling over themselves to add a chatbot as fast as possible. The only question that gets asked is “how long till it’s set up?”

After all, most organisations are battling with how to get customers the information they need at the right time and in the right way. A chatbot feels like a magic bullet, an easy fix. An AI assistant that never sleeps and has all the information at its (virtual) fingertips – hooray! In all the rush, the focus ends up on the platform itself instead of on our users.

Chatbots and AI

Chatbots have long suffered from being set up with little thought. The old scripted versions had a bad reputation for simply regurgitating corporate FAQs. People ran the chatbot gauntlet only in the (often vain) hope that a human was on the other side.

Modern chatbots don’t follow a predetermined script. Instead, they’re based on a type of AI known as a Large Language Model (LLM). This allows them to process and predict what was previously an exclusively human-to-human interface: language.

As such they are a type of “generative” AI: AI that can generate its own original content. Most are “retrieval augmented”, which simply means they retrieve their answers from a defined content source – typically your organisation’s own documents, pages and FAQs.
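
If it helps to see the shape of that pattern, here’s a rough sketch in Python. Everything in it is invented for illustration – a toy keyword retriever over a handful of made-up content lines, and a stubbed-out model call standing in for whichever LLM a real chatbot would use.

```python
import re

# A toy sketch of the "retrieval augmented" pattern: nothing here is a real
# product or API. The content, retriever and model call are all invented.

CONTENT = [
    "Refund policy: customers can request a refund within 30 days of purchase.",
    "Opening hours: the support desk is open 9am to 5pm, Monday to Friday.",
    "Delivery: standard delivery takes 3 to 5 working days.",
]

def words(text):
    """Lower-case a string and split it into simple word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, content, top_n=1):
    """Pick the content items sharing the most words with the question (a very crude retriever)."""
    ranked = sorted(content, key=lambda item: len(words(question) & words(item)), reverse=True)
    return ranked[:top_n]

def call_model(prompt):
    # Placeholder: a real chatbot would send this prompt to a hosted language model.
    return f"[model response, generated from: {prompt[:80]}...]"

def answer(question):
    context = retrieve(question, CONTENT)
    # Whatever the retriever pulls out of your content is all the model has to work with.
    prompt = "Answer using only this content:\n" + "\n".join(context) + "\n\nQuestion: " + question
    return call_model(prompt)

print(answer("How long do I have to request a refund?"))
```

The line that builds the prompt is the one to notice: whatever the retriever finds in your content is all the model has to work with.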

The addition of AI theoretically makes these modern bots much more powerful. But when they are poorly thought through and poorly implemented, it mostly makes them more powerfully irritating.

The impact of bad chatbots

We do not need to imagine the impact of poor chatbot implementation. Right now the media is full of gleeful stories of people getting the better of bots.

One bot offered a great deal on a car. Another composed a haiku about how bad it was, and we’re even starting to see court cases. But that’s just the tip of the proverbial iceberg.

In the face of so many bad experiences, users are increasingly exasperated and disappointed by bots. The impact of this is corrosive. When chatting to the team about chatbots, I found a common thread in our research insights. Users increasingly:

  • avoid chatbots where they can or deal with them only reluctantly
  • start with the expectation of a bad experience – and get irritated all the faster when it inevitably is one
  • are not certain of the accuracy of responses, so want to verify with a human anyway
  • worry about data security and privacy

Good design takes time

An AI chatbot is not separate from your content. It’s a part of your content. It cannot fix your content problems because it pulls what it “knows” from your content.

Think of a chatbot as a glorified search engine attached to a very fancy version of the autocomplete function you have on your phone. It does not understand the words it generates. It only understands how words relate to each other statistically. And that means that when you ask a question, your bot does not know the right answer. Rather it predicts the most likely answer. From your content.
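
To make the “fancy autocomplete” point concrete, here’s a toy sketch of next-word prediction using simple bigram counts. It’s a drastic simplification of what an LLM does, and the “content” is invented, but the principle is the same: the model surfaces whatever pattern is most common in the material it was given, whether or not that pattern is right.

```python
from collections import Counter, defaultdict

# Deliberately inconsistent "content": refunds usually take 30 days here, but not always.
corpus = (
    "refunds take 30 days to process "
    "refunds take 60 days to process "
    "refunds take 30 days to process"
).split()

# Count which word most often follows each word: a toy stand-in for the
# statistical relationships a language model learns.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, with no idea whether it's true."""
    return following[word].most_common(1)[0][0]

# Generate from "refunds": the output is simply the most common pattern in the content.
word, generated = "refunds", ["refunds"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # "refunds take 30 days to process"
```

Feed it inconsistent content – refunds sometimes take 30 days here, sometimes 60 – and it confidently echoes whichever version appears most often. A real chatbot is vastly more sophisticated, but the underlying dynamic is the same: it reflects your content back at you.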

I think you can see where I am going with this. If your content is a mess, the chances are that your chatbot won’t help. It may even make things worse. I’m not saying that chatbots are bad, or are never the right solution. But I am saying let’s be duly wary of “one size fits all” answers and quick fixes. Let’s take the time to properly map and fix our content problems because being irritating is bound to be bad for business.

As Dr Ralf Speth, former CEO of Jaguar Land Rover, succinctly put it, “If you think good design is expensive, you should look at the cost of bad design”.

Creating a chatbot that’s user-centred

But there are things you can do. Here are some considerations if you’re about to start a chatbot project.

An AI bot is a solution: what’s the problem?

It’s never a good idea to jump straight to a solution. What problem are you trying to solve? If your bot does not meet an existing user need, it will not add value for you or your users.

An AI exists to serve content 

This is all too easily forgotten in the rush to create a chatbot. Good content design is fundamental to implementing a product that meets user needs. With a bot, the best content answers a specific need, is short and conversational, and makes any next steps clear. Introducing a bot and making it responsive and pretty will do nothing if the content behind it doesn’t help users.

AI generated content needs human review

AI-generated content may not be accurate or appropriate. Consider the risks to your organisation carefully. How will you know that your bot is delivering the right answers? How might you introduce human reviews? How will AI-generated content and user inputs be retained?

AI bots cannot solve problems caused by poor content design and governance 

If documents and data sources are poorly managed and maintained, AI will struggle to provide quality responses. If knowledge bases are not reliable, a bot will likely simply add to the confusion.

If you’d like to learn more about user-centred design and making sure you’re building the right thing in the right way, take a look at our services.

About the Author

Helena Rix

Lead Content Designer

Helena has over a decade of content design experience across both public and private sectors. She has particular experience in introducing and building content design and user-centred practice within an organisation.