The role user-centred design (UCD) should play in AI development
It’s been interesting watching and playing with AI tools as they emerge. Some feel really impactful, others less so. It’s also been interesting to watch and experience the balancing act teams are navigating: between tools that are potentially dangerous but generative or transformational, and those that are more restrictive but less impactful.
Misleading content, purposeful or not, has a massive potential impact on the world. We’ve all seen the speed at which a lie gets around the world. What better than a lie that looks and feels like the truth?
Recently a conference organiser faked profiles of conference speakers using AI, to avoid criticism about the gender disparity in their speaker lineup.
One day the threats and potential harms posed by AI might be genuinely new. The reality is that most of them are here today; we’re just not ready for the scale and speed at which people can spread them.
You can also see how people have caught on to the hype around AI recently, and it’s now jammed in everywhere.
Every feature is an AI-powered thing. Chatbots have come back with a vengeance. AI has made little to no impact on the sheer number of terrible products and journeys being churned out. Though I suspect AI will soon be responsible for scaling up the number of terrible products out there.
But it has immense potential to make things great.
These 2 things, the potential risks and harms AI opens up for us, users, customers and communities, and the ability to meet user needs through AI, don’t require new processes to address. In fact, they’re ripe for high quality, agile user-centred design.
And I’m talking about both the design of the underlying models and platforms, and the tools built on top of those models and platforms.
Designing the underlying model, tools and platforms
Great models start with a clear vision and clarity over what you want them to do, or make possible. UCD helps prototype and create these things with product and data teams, exploring quickly whether a vision is worth pursuing before too much time is spent trying to make it real. Researchers also help teams know their users and customers, and spot opportunities to meet unmet needs.
For example, on a project I worked on last year, a data scientist was jumping deep into an idea and how it could work in a different way. The clinical product owner couldn’t answer a question about the process directly, but the data scientist had a vision of how it could work. I was able to bring in some research and visualise how our users actually managed clinical cases. The vision was smart, but it wouldn’t have worked for the way people managed cases, and it wasn’t a behaviour change that would have been easy to embed. Rather than waste time finding that out after building a model, we did something else: we used AI to get better data into our systems and reduce the time it took people to manage cases.
The quality and accuracy of the data and facts in AI models is also critical. Teams building models and underlying tools will need to collect better data, which in part means incentivising that collection and making it easier to do. All of which works better with user researchers and designers playing a part.
Teams developing the underlying models will also need richer, more in-depth data and insights about people, and how they work and act. Some of this comes from data, sure, but lots of it will need to come from observing people. If you’ve ever done a study with lots of data, you’ll know it often fails to tell you why. You might have a clear signal that no one is doing a thing, but why they’re not is often a matter of contextual understanding that data alone doesn’t give you.
As new models emerge, teams will need to understand their biases and implications. This means models will need to be coproduced and designed with communities, and that will only be believable if it’s done with users and communities in an open and honest way.
The teams using these AI tools will also have their own needs. We’ve already seen rapid reactions from people to changes in models: how things become less popular as they become safer, or when certain countermeasures create new unintended consequences that ruin the model. The teams and platforms that succeed over the long term will design their tools with their users, in quick, iterative loops, avoiding changes that don’t meet the model’s original vision or intent.
The key, then, is to bring together people who understand users with people who understand the tech, using both skillsets to complement and bounce off each other in unique and novel ways. To make the models and tools useful, but also safe. And where things can’t be completely safe, to create a level of clarity and understanding of the biases and harms they could create.
Designing the products and features that sit on top of AI
This will be a more comfortable fit for many researchers and designers: building products and services to meet user needs. As with more traditional apps and websites, the best ones meet user needs while supporting business goals and sustainability. The products that don’t make things better, that don’t meet the raised expectations of users who now know what AI can do, will be left far behind.
But as teams explore how AI can make services better, they’ll need to understand their users and customers more, not less. It’ll be the only thing that sets one product apart from another, when all are powered by AIs of the same or similar flavours.
People with user-centred design skills are uniquely placed to help teams do this. This is their skillset: understanding people, and how to shape and mould products to meet their needs. And as AI generates more of the tools that build software and products themselves, everyone will need to cultivate these skills to succeed. The barrier to building will be lower, but the expectations will be higher.
Good research and design will also have a part to play in keeping communities safe. The size and scale of AI’s impact gives us all superpowers. But those superpowers provide opportunities to those with good and bad intentions alike. So we’ll need nuanced designs and services that balance safety with usefulness: designs that incentivise doing the right thing, and put barriers in the way of doing the wrong thing.
Things we as designers and researchers can do to be helpful
So UCD can make a massive impact on AI design and development, from the design of the models themselves to the products built on top of those models and platforms. It can reduce bias and ensure communities are involved in the creation of AI models. And it can create safer products that genuinely solve user needs and meet business goals.
To do this we need to be confident in the role we can play and the value we can add. But we also need to use, understand and learn about AI more, both as users and as makers with it. Otherwise we’ll leave so much value behind, because we won’t be able to help in practical and pragmatic ways.