Artificial intelligence (AI), as you no doubt already know, is an umbrella term for intelligent software and computing. For software to be deemed intelligent, it needs to be capable of things like reasoning, knowledge, planning, learning, communication and perception; in essence, it needs to be more human. As computers get ever faster, and we move another step closer to the technological singularity, AI is becoming more and more commonplace and achievable, even within modest budgets. But as AI slowly takes over, how do people feel about this ‘new’ technology, and what impact is it having on user experience?
Don’t mention the war
In some research carried out on an app idea we’d developed, users were put off by AI, the app’s killer feature. ‘It learns about you’, they were told, ‘and based on that learning, it gives you great suggestions’. We were unlucky that we didn’t get to present the work ourselves; perhaps if we had, we would have worded it rather more sensitively, but nonetheless, the users were turned off by the ‘learning’ part of the equation. They said that the thought of it learning about them was a bit creepy and might put them off using the app. So, with that, another idea was confined to the shelf of history.
Of course, every day we use services that do learn about us, but for the most part we just don’t realise it, or if we do know, we don’t care because we’ve already been sold on the benefits. If we knew how other services we use learn about us, maybe we’d call them creepy too. But, as consumers, we don’t buy things for the way that they work; we buy products – products that we believe will improve our lives. To a user, Facebook is a space to share things with your friends, not a complex series of machine learning techniques that attempts to mimic how your brain works. It’s actually both things, my point being that the perceived value of your product is more important than how it works, or in simpler terms: it’s the sizzle, not the sausage.
We were all disappointed with our app idea being shelved, as we knew we’d developed a great concept, but wondered if the focus of the research had been on the wrong thing, the machine learning and not the benefits of using the app. For me, the big surprise was how sceptical people seemed to feel about knowingly letting artificial intelligence into their lives, and how they instinctively seemed to distrust it.
The shock of the new
It doesn’t help that our cultural expectations of artificial intelligence and intelligent computers are almost completely negative, from ‘Skynet’ becoming self-aware and ending the world in The Terminator, to HAL 9000 going haywire in 2001: A Space Odyssey. AI has a point to prove if it wants to stop being the go-to bad guy in science fiction films, and instead be seen as a force for good in the world – but I’m not sure that advertisers are helping the situation. Every day, the news is full of stories of advertisers coming up with more ways of automatically profiling people without their consent (a technique referred to as ambient personalisation). From bins that track people’s movements by their smartphones and supermarkets using automated facial recognition on their customers, to banner ads chasing us around the web, a debate is raging about how people can opt out of this endless profiling and regain their right to anonymity. The joke is often on marketeers who, despite talking about clever use of ‘Big Data’, so often get it wrong and end up delivering adverts or recommendations of little or no real relevance to the consumer. By delivering minimal value through controversial means, they undermine the public’s confidence in ‘intelligent’ technology ever further.
Above all, we need to prove to consumers that AI is useful, and stop freaking them out with it. At World Usability Day last month, Oli Shaw (@olishaw) did a talk entitled ‘You don’t know me better than I know myself’, which focused on the negative aspects of ambient personalisation and machine learning. He talked a lot about an invisible line in the sand with ‘advice’ at one end and ‘parenting’ at the other: ‘advice’ being useful and helpful, and ‘parenting’ being a frustrating experience where limits are set on what you can do. In order to deliver a good user experience, we need to stay on the ‘advice’ side of the line and keep the user in control of what is happening. AI and ambient personalisation can provide a useful service, but this can easily stray into territory that feels more like a creepy stalker. Perhaps his most surprising example was how the US retailer Target, through consumer profiling, worked out that a teenage girl was pregnant before her father did. Ironically, this rather unnerving story is a good example of an advertiser successfully delivering targeted ads via AI, but to consumers it feels like the worst possible ‘Big Brother’ scenario.
Oli also went on to discuss the problem of living in a ‘filter bubble’ – the resultant state of an over-personalised web experience. By having our news feeds and search results constantly tailored to our preferences with AI, we can miss out on experiencing things outside our bubble. Paradoxically, one of the things I appreciate when flicking through traditional printed media like a newspaper is that I get exposed to things I’m not necessarily interested in or perhaps don’t even agree with; it helps give me a rounded view of the world. When our feeds only show us the things that we are already interested in, we miss out on discovering new things through our blinkered view.
A force for good?
If clever technology like AI is so scary, and doesn’t even guarantee to deliver value for the user, then why bother? Well, because like anything, it’s got the potential to be really useful if used in the right way. Like all good design, good AI doesn’t draw attention to itself, it just works, and when it works well it delights the user.
I recently got a new phone, and after setting it up I noticed something interesting. Handily, all the photos from my old phone were there, via an automated cloud backup, but some of them had a little magic wand icon on them. On closer inspection, these were all new photos, created from my existing images: some had the exposure corrected and contrast tweaked, others had been merged into a panorama or arranged into a grid, and some had even been turned into animated GIFs. They were brilliant, and it was the perfect example of giving a little bit more to delight your customers. This magic had been done by ‘Auto Awesome’, a feature that uses Google’s computer vision and machine learning technologies to automatically understand similarities in photos, decipher their context, and act on this understanding by creating new images that it thinks the user might like.
Other great examples of AI in action include the Facebook news feed, which uses machine learning to trim the roughly 1,500 updates that the average Facebook user could see down to the 30–60 updates deemed most relevant to them. Without this use of AI, Facebook would be a mess and would fail to deliver value to its users. Facebook edits what you see based on what it knows about you, but crucially, it also keeps the user in control by providing options to edit who or what you see in your news feed. Automatic facial recognition within Facebook photos makes common tasks, such as tagging friends, quicker and easier for users – again achieved using AI.
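The core idea – score each update for relevance, respect the user’s controls, then keep only the top handful – can be sketched in a few lines. This is purely illustrative (the scoring formula, weights and field names are my own assumptions, not Facebook’s actual algorithm):

```python
# Illustrative feed-trimming sketch: score each update by the user's
# affinity with its author (boosted by engagement), drop anything from
# authors the user has chosen to hide, and keep only the top `limit`.
# The scoring model here is a deliberately simple stand-in.

def trim_feed(updates, affinities, hidden_authors, limit=40):
    """Return the `limit` highest-scoring updates the user hasn't hidden."""
    visible = [u for u in updates if u["author"] not in hidden_authors]
    scored = sorted(
        visible,
        key=lambda u: affinities.get(u["author"], 0.0) * (1 + u["likes"]),
        reverse=True,
    )
    return scored[:limit]

updates = [
    {"author": "alice", "likes": 12},
    {"author": "bob", "likes": 3},
    {"author": "spammer", "likes": 50},
]
affinities = {"alice": 0.9, "bob": 0.4, "spammer": 0.8}

# The user has hidden 'spammer', so control stays with them even though
# that update would otherwise score highest.
feed = trim_feed(updates, affinities, hidden_authors={"spammer"}, limit=2)
```

Note how the `hidden_authors` filter runs before any scoring: the user’s explicit choices always override what the algorithm thinks is relevant, which is exactly the ‘keep the user in control’ point above.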
These examples show that AI can improve user experience by responding to users’ needs and making time-consuming tasks quicker and easier. AI is all around us, we just don’t often realise it. It’s in speech recognition, translation, video games, social networks, search engines, and many other things that affect our daily lives. It’s even controlling the stock exchange and helping airports run safely. On many narrow, repetitive tasks, AI outperforms the human brain, as it’s faster and less likely to make mistakes. Intelligent computing is only going to become more important and prevalent in our lives, so it’s important that we use it in the right way: to make sure it helps, rather than stifles, us.
3 simple rules for great AI
Thinking about the morals of AI reminded me of The Three Laws of Robotics, devised by the science fiction author Isaac Asimov. Written in 1942, they are an early attempt at dealing with the moral implications of intelligent automated machines. The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Inspired by this, I’ve devised my own three rules of good AI design:
- Always get permission from the user: in order to build trust, data should be volunteered by the user, otherwise it may feel like they are being watched or spied on. To gain that permission, you need to explain the benefits of doing so; it should be a mutual contract between the user and the software. For example, if I provide Google Now with my location, it’ll show me cool stuff nearby. People want more for less in their lives, so the incentive should lie in the benefits of your product.
- Always keep the user in control: allow your users to break out of their filter bubbles, keep them in control and offer them choice. Always remember to offer useful advice through your service, but don’t stray over the line into ‘parenting’ by patronising, dictating to, or obscuring information or options from the user. Sometimes even just the illusion of control might be enough.
- Always deliver value for the user: good AI delights by offering more in unexpected ways. Keep your side of the contract with the user by delivering real value for them through the data you collect. Enhance user experience on the fly by being aware of the user’s context and ever-changing needs.
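As a rough illustration of how these three rules might shape a feature, here is a sketch of a consent-gated ‘nearby suggestions’ service. Everything here – the class, method names and data – is hypothetical, just a way of showing the rules as code:

```python
# Hypothetical consent-gated feature illustrating the three rules:
# 1) data is volunteered, with the benefit explained up front;
# 2) the user stays in control and can revoke access at any time;
# 3) the data collected is used only to deliver visible value.

class NearbySuggestions:
    def __init__(self):
        self.consented = False
        self.location = None

    def request_permission(self, user_accepts):
        # Rule 1: ask first; nothing is collected until the user agrees.
        self.consented = user_accepts

    def revoke(self):
        # Rule 2: one call backs the user out and discards their data.
        self.consented = False
        self.location = None

    def update_location(self, location):
        if self.consented:  # never track without permission
            self.location = location

    def suggest(self, places_by_location):
        # Rule 3: turn the volunteered data into something useful.
        if not self.consented or self.location is None:
            return []  # no data given, no profiling done
        return places_by_location.get(self.location, [])

app = NearbySuggestions()
app.request_permission(user_accepts=True)
app.update_location("Brighton")
suggestions = app.suggest({"Brighton": ["cafe", "gallery"]})
```

The design choice worth noting is that consent is the gate for everything downstream: revoking it doesn’t just stop future tracking, it also clears the stored location, so the contract with the user is honoured in both directions.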
As an industry, if we all design with these simple rules in mind, AI will cement itself as something that the public trust and love, rather than something they fear.
As a final thought, here’s a glimpse into the future. The image below shows what Google’s self-driving car can ‘see’ as it processes 1GB of data every second. Proof that we are living in the future…