About this site

This is the blog and portfolio site of James Reece, a Bristol-based UX designer. I blog about UX, interaction design, bikes and music.
I’m excited to be speaking at the IPIA Conference tomorrow, where I’ll be talking about Natural User Interfaces and touchless interaction.
Artificial intelligence (AI), as you no doubt already know, is an umbrella term for intelligent software and computing. For software to be deemed intelligent, it needs to be capable of things like reasoning, knowledge representation, planning, learning, communication and perception; in essence, it needs to be more human. As computers get ever faster, and we move another step closer to technological singularity, AI is becoming more and more commonplace and achievable, even on modest budgets. But as AI slowly takes over, how do people feel about this ‘new’ technology, and what impact is it having on user experience?
Don’t mention the war
In some research carried out on an app idea we’d developed, users were put off by AI – the app’s killer feature. ‘It learns about you’, they were told, ‘and based on that learning, it gives you great suggestions’. As it happened, we didn’t get to present the work ourselves, and perhaps we would have worded it rather more sensitively, but nonetheless, the users were turned off by the ‘learning’ part of the equation. They said that the thought of the app learning about them was a bit creepy, and that it might put them off using it. So, with that, another idea was confined to the shelf of history.
Of course, every day we use services that do learn about us, but for the most part we just don’t realise it, or if we do know, we don’t care because we’ve already been sold by the benefits. If we knew how other services we use learn about us, maybe we’d call them creepy too. But, as consumers, we don’t buy things for the way that they work, we buy products – products that we believe will improve our lives. To a user, Facebook is a space to share things with your friends, not a complex series of machine learning techniques that attempts to mimic how your brain works. It’s actually both things, my point being that the perceived value of your product is more important than how it works, or in simpler terms: it’s the sizzle, not the sausage.
We were all disappointed with our app idea being shelved, as we knew we’d developed a great concept, but wondered if the focus of the research had been on the wrong thing, the machine learning and not the benefits of using the app. For me, the big surprise was how sceptical people seemed to feel about knowingly letting artificial intelligence into their lives, and how they instinctively seemed to distrust it.
The shock of the new
It doesn’t help that our cultural expectations of artificial intelligence and intelligent computers are almost completely negative, from ‘Skynet’ becoming self-aware and ending the world in The Terminator, to HAL 9000 going haywire in 2001: A Space Odyssey. AI has a point to prove if it wants to stop being the go-to bad guy in science fiction films and instead be seen as a force for good in the world – but I’m not sure that advertisers are helping the situation. Every day, the news is full of stories of advertisers coming up with new ways of automatically profiling people without their consent (a technique referred to as ambient personalisation). From bins that track people’s movements via their smartphones and supermarkets using automated facial recognition on their customers, to banner ads chasing us around the web, a debate is raging about how people can opt out of this endless profiling and regain their right to anonymity. The joke is often on marketeers who, for all their talk of clever use of ‘Big Data’, regularly get it wrong and end up delivering adverts or recommendations of little or no real relevance to the consumer. By delivering minimal value through controversial means, they undermine the public’s confidence in ‘intelligent’ technology ever further.
Above all, we need to prove to consumers that AI is useful, and stop freaking them out with it. At World Usability Day last month, Oli Shaw (@olishaw) gave a talk entitled ‘You don’t know me better than I know myself’, which focused on the negative aspects of ambient personalisation and machine learning. He talked a lot about an invisible line in the sand, with ‘advice’ at one end and ‘parenting’ at the other: ‘advice’ being useful and helpful, and ‘parenting’ being a frustrating experience where limits are set on what you can do. In order to deliver a good user experience, we need to stay on the ‘advice’ side of the line and keep the user in control of what is happening. AI and ambient personalisation can provide a useful service, but they can easily stray into territory that feels more like a creepy stalker. Perhaps his most surprising example was how the US retailer Target, through consumer profiling, worked out that a teenage girl was pregnant before her father did. Ironically, this rather unnerving story is a good example of an advertiser delivering targeted ads via machine learning, but to consumers it feels like the worst possible ‘Big Brother’ scenario.
Oli also went on to discuss the problem of living in a ‘filter bubble’ – the resultant state of an over-personalised web experience. By having our news feeds and search results constantly tailored to our preferences with AI, we can miss out on experiencing things outside our bubble. One of the things I appreciate when flicking through traditional printed media like a newspaper is that I get exposed to things I’m not necessarily interested in, or perhaps don’t even agree with; it helps give me a rounded view of the world. When our personalised ‘newspaper’ only shows us the things we are already interested in, we miss out on discovering new things beyond our blinkered view.
A force for good?
If clever technology like AI is so scary, and doesn’t even guarantee to deliver value for the user, then why bother? Well, because like anything, it’s got the potential to be really useful if used in the right way. Like all good design, good AI doesn’t draw attention to itself, it just works, and when it works well it delights the user.
I recently got a new phone, and after setting it up I noticed something interesting. Handily, all the photos from my old phone were there, via an automated cloud backup, but some of them had a little magic wand icon on them. On closer inspection, these were all new photos created from my existing images: some had the exposure corrected and contrast tweaked, others had been merged into a panorama or arranged into a grid, and some had even been turned into animated GIFs. They were brilliant, and it was the perfect example of giving a little bit more to delight your customers. This magic had been done by ‘Auto Awesome’, a feature that uses Google’s computer vision and machine learning technologies to automatically understand similarities in photos, decipher their context, and act on this understanding by creating new images that it thinks the user might like.
Other great examples of AI in action include the Facebook news feed, which uses machine learning to trim the 1,500 updates that the average Facebook user could see down to the 30–60 updates deemed most relevant to them. Without this use of AI, Facebook would be a mess and would fail to deliver value to its users. Facebook edits what you see based on what it knows about you, but crucially, it also keeps the user in control by providing options to edit who or what appears in your news feed. Automatic facial recognition within Facebook photos makes common tasks, such as tagging friends, quicker and easier for users – again achieved using AI.
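In spirit, that trimming is a scoring-and-ranking pass over candidate updates. Here’s a toy sketch of the idea – the real News Feed algorithm is vastly more sophisticated, and every signal and weight below is invented purely for illustration:

```javascript
// A toy feed ranker: score each update by affinity (how close the author
// is to the user), content type and recency, then keep only the top N.
// All weights here are made up for illustration.
function rankFeed(updates, now, topN) {
  return updates
    .map(u => ({
      ...u,
      score:
        u.affinity * 2 +                        // closer friends score higher
        (u.type === 'photo' ? 1.5 : 1) +        // some content types weigh more
        1 / (1 + (now - u.postedAt) / 3600e3),  // decay with age (in hours)
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topN);
}

// Example: a stale update from a distant acquaintance gets trimmed.
const now = Date.now();
const updates = [
  { id: 1, affinity: 0.9, type: 'photo',  postedAt: now - 3600e3 },
  { id: 2, affinity: 0.1, type: 'status', postedAt: now - 48 * 3600e3 },
  { id: 3, affinity: 0.8, type: 'status', postedAt: now },
];
const top = rankFeed(updates, now, 2);
```

The point of the sketch is the shape of the problem: many candidates in, a handful of scored winners out.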
These examples show that AI can improve user experience by responding to users’ needs and making time-consuming tasks quicker and easier. AI is all around us; we just don’t often realise it. It’s in speech recognition, translation, video games, social networks, search engines, and many other things that affect our daily lives. It’s even controlling the stock exchange, and helping airports run safely. In narrow, well-defined tasks, AI can even outperform the human brain, as it’s less prone to lapses and mistakes. Intelligent computing is only going to become more important and prevalent in our lives, so it’s important that we use it in the right way: to make sure it helps, rather than stifles, us.
3 simple rules for great AI
Thinking about the morals of AI reminded me of The Three Laws of Robotics, devised by the science fiction author Isaac Asimov. Written in 1942, they are an early attempt at dealing with the moral implications of intelligent automated machines. The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Inspired by this, I’ve devised my own three rules of good AI design:
- Always get permission from the user: in order to build trust, data should be voluntarily given up by the user, otherwise it may feel like they are being watched or spied on. In order to gain permission from the user, you need to explain the benefits of them doing so. It should be a mutual contract between the user and the software. For example, if I provide Google Now with my location, it’ll show me cool stuff nearby. As people want more for less in their lives, the incentive should be in the benefits of your product.
- Always keep the user in control: allow your users to break out of their filter bubbles, keep them in control and offer them choice. Always remember to offer useful advice through your service, but don’t stray over the line into ‘parenting’ by patronising, dictating to, or obscuring information or options from the user. Sometimes even just the illusion of control might be enough.
- Always deliver value for the user: good AI delights by offering more in unexpected ways. Keep your side of the contract with the user by delivering real value for them through the data you collect. Enhance user experience on the fly by being aware of the user’s context and ever-changing needs.
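The second rule – letting users break out of their filter bubbles – can even be expressed in code. Here’s a minimal sketch of a feed that deliberately reserves space for content from outside the user’s profile; the one-in-five ratio is an arbitrary choice for illustration, and in a real product the user would control it:

```javascript
// Interleave personalised picks with 'out of bubble' items that the
// profile would normally filter out, so the user still discovers new
// things. Every `slot`-th position is reserved for serendipity.
function mixFeed(personalised, outOfBubble, slot = 5) {
  const feed = [];
  let p = 0, o = 0;
  while (p < personalised.length || o < outOfBubble.length) {
    const serendipitySlot = (feed.length + 1) % slot === 0 && o < outOfBubble.length;
    if (serendipitySlot) feed.push(outOfBubble[o++]);
    else if (p < personalised.length) feed.push(personalised[p++]);
    else feed.push(outOfBubble[o++]); // personalised items exhausted
  }
  return feed;
}

// Example: eight tailored items, two surprises.
const feed = mixFeed(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'], ['X', 'Y']);
```

Crucially, the mixing is visible and adjustable – advice, not parenting.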
As an industry, if we all design with these simple rules in mind, AI will cement itself as something that the public trust and love, rather than something they fear.
As a final thought, here’s a glimpse into the future. The image below shows what Google’s self-driving car can ‘see’, as it processes 1GB of data every second. Proof we are living in the future…
It seems like just the other day we were discussing the move away from the mouse to the touchscreen, but such is the current pace of technological change and innovation, that now the talk is of a ‘post touch screen’ world. New technologies are pushing the boundaries of what is possible without touching or clicking an interface – this gives us the opportunity to create more natural-feeling interactions than ever before, in scenarios that wouldn’t have previously been possible.
This blog post outlines some of the main technologies and interactions on offer, and how we can all benefit from them today.
From GUI to NUI
Since the dawn of computing we have had to adapt our natural human behaviour in order to interact with computers. Hunching over and poking at buttons on our glowing mobile and desktop computer screens, interacting with a graphical user interface (GUI) has become synonymous with modern life.
But the GUI is a workaround, albeit a very clever one, that enables us to control a computing device. It’s a compromise between how a computing system works and what a human can easily understand; by abstracting the inner workings of a system into a set of buttons and other controls, we can command it to do what we want it to do. As much as interaction designers like myself try to ease the pain by designing user-friendly GUIs, controlling a computing device using a GUI still feels less natural than other activities in our day.
But what if computers and their software were designed to fit in more with natural human behaviour, rather than the other way around? For years this hasn’t really been possible, largely because of the limitations of technology, but now a new generation of sensors and intelligent software means that natural interactions are more possible than ever, and we no longer need to rely solely on touching or clicking a GUI on a screen.
Such seamless interactions are often referred to as natural user interfaces (NUIs), and they could just sound the death knell for the GUI.
Can’t touch this
Touchless interaction doesn’t just open up possibilities for more natural interaction, however; it also provides a way of interacting during those times when you can’t touch a GUI – perhaps you have messy hands in the kitchen, are driving a car, holding a baby, or are a surgeon carrying out an operation. With touchless interfaces we can now control a device and enjoy its benefits in nearly any scenario, many of which would not have been practical before. It also means that we can interact with devices that have very small screens (where GUI interaction is particularly problematic), or even no screen at all. From talking to smartwatches to making physical hand gestures at your TV, it’s the dawn of a new age of computing, where connected devices are even more integrated into our lives.
So how can we control a device without touching or clicking it? Witchcraft? Smoke and mirrors?
Most of the methods available involve sensors that are already built into modern smartphones, such as the camera, microphone, accelerometer, gyroscope, and wireless communication technologies such as Bluetooth or NFC. Below is a rundown of four key opportunities for interacting with a device without touching or clicking it, and some examples of their implementation.
1. Voice

Talking is one of the main methods we use to communicate with each other, so interacting with a device using your voice seems a natural step for software designers. Human speech and language are highly complex and hard for a machine to interpret, so although the technology has existed in its infancy for a long time, it’s only relatively recently that it has become a serious contender for touchless interaction.
When Apple acquired Siri and built it into the iPhone 4S, voice interaction came of age – for the first time, people seriously considered talking to their computing devices instead of touching them. What’s particularly interesting about Siri is that it uses a natural language user interface, allowing the user to have a conversation with the device. Apple gives the following example: ‘When you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighbourhood.’ It allows for a more natural, conversational interaction. Although Siri isn’t perfect, it’s made huge inroads into that tricky beast: intelligent software capable of understanding human language.
Siri and other natural language interfaces such as Google Now move our relationship with computers away from typing long search queries and endless button clicking, and replace it with the kind of conversations that we might have with each other.
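The context-carrying behaviour in Apple’s burgers-then-tacos example boils down to a session that remembers the last intent. Here’s a drastically simplified sketch of that idea – nothing like Siri’s actual implementation, and the keyword lists are invented:

```javascript
// A toy conversational session: remember the last intent so a follow-up
// like 'how about tacos?' is understood as a new search within the same
// intent (finding restaurants), even though 'restaurants' isn't said.
const CUISINES = ['burger', 'taco', 'pizza', 'sushi'];

function createSession() {
  let lastIntent = null;
  return function handle(utterance) {
    const text = utterance.toLowerCase();
    const cuisine = CUISINES.find(c => text.includes(c));
    if (/joints|restaurants/.test(text) && cuisine) {
      lastIntent = 'find-restaurants';
      return `Searching for ${cuisine} restaurants near you.`;
    }
    if (/how about/.test(text) && cuisine && lastIntent === 'find-restaurants') {
      // No mention of restaurants, but the session remembers the intent.
      return `Searching for ${cuisine} restaurants near you.`;
    }
    return "Sorry, I didn't understand that.";
  };
}

const ask = createSession();
const first = ask('Any good burger joints around here?');
const followUp = ask('Hmm. How about tacos?');
```

The hard part of a real assistant is, of course, everything this sketch skips: speech recognition, parsing arbitrary language, and knowing which context to keep and which to discard.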
Voice control can be particularly useful in certain scenarios: driving a car, for example (Apple is working with various car manufacturers to integrate Siri into their vehicles), or cooking when your hands are mucky (the Nigella Lawson recipe iPad app allows you to move through ingredients and recipe instructions with voice commands). As anyone who has used a touchscreen keyboard knows, it’s hard work and it’s easy to make mistakes, so voice input can offer a welcome relief from typing – providing that it works flawlessly.
On the web, HTML5 introduces the Web Speech API, which gives modern web browsers the capability of speech recognition. Combined with voice output from the API’s speech synthesis side (which Safari supports from iOS 7), it’s now possible for a user to have a conversation with a website. The possibilities are endless, particularly on mobile, and uses like language translation and dictation are just the start.
Voice control is also growing in importance for a more practical reason: as technology gets ever smaller and more disseminated around our homes, there’s a need to control it in new ways. A device may be too small to have a screen or keyboard, and buttons may be impractical. For instance, Google Glass can be controlled through voice commands, which is, in theory, easier than fiddling around with the tiny buttons that live on the side of the device. Through the voice command ‘OK Glass’, the device starts listening, and the user can then use natural language voice interactions to control it. Google have also employed this ‘always listening’ feature in a recent phone, the Moto X, which can be woken up from sleep and used freely with voice commands, without the user ever going near the device, as demonstrated in this advert. The growing trend for smartwatches such as the Samsung Galaxy Gear and the long-rumoured Apple iWatch further underlines how technology is getting ever smaller, and will require ever more sophisticated voice controls.
Voice-based interaction does have one downside which I feel is a barrier – using it in public. Personally, I feel very self-conscious telling my phone what to do, but I’m sure this is something that will change as it becomes more and more common.
2. Accelerometer and gyroscope
Two sensors that are built into nearly all smartphones and tablets are the three-axis accelerometer and the gyroscope, which can be used to detect rotation and movement of the device. Usually used to switch between portrait and landscape mode based on screen orientation, they also have potential for touchless interaction. One example is the Google Maps mobile app: if you angrily shake your phone whilst using the app, it opens a dialogue box asking if you would like to give feedback, presuming that something went wrong. iOS has several accelerometer-based interactions built into the operating system, including ‘shake to undo’, where a shake movement gives the option of undoing the last thing you typed.

Good gestures feel natural, as they relate to things that we do already, such as shaking your phone in frustration when it’s not working, or resetting something with a shake. More subtle uses of the accelerometer and gyroscope can also feel like a natural way of controlling a device, e.g. tilting and rotating it to steer in a driving game. Parallax.js reacts to the orientation of your device, allowing for tilt-based interactions in the web browser.
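Under the hood, shake detection is usually just a threshold on acceleration magnitude over a short window. Here’s a minimal sketch of that logic; the threshold and window values are arbitrary assumptions, and in a browser you’d feed it readings from `devicemotion` events:

```javascript
// Detect a shake gesture: several high-magnitude acceleration spikes
// within a short window. Samples are in m/s² (gravity alone is ~9.81),
// timestamps in milliseconds.
function createShakeDetector({ threshold = 20, spikesNeeded = 3, windowMs = 800 } = {}) {
  let spikes = [];
  return function onSample(ax, ay, az, t) {
    const magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
    if (magnitude > threshold) spikes.push(t);
    spikes = spikes.filter(ts => t - ts <= windowMs); // keep recent spikes only
    return spikes.length >= spikesNeeded; // true => treat as a shake
  };
}

// Example: a phone at rest, then three vigorous jolts in half a second.
const detect = createShakeDetector();
const atRest = detect(0, 9.81, 0, 0);
detect(25, 5, 0, 100);
detect(-24, 6, 0, 300);
const shaken = detect(26, -4, 0, 500);
```

Requiring several spikes, rather than one, is what stops a single bump from triggering ‘shake to undo’ by accident.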
3. Cameras and motion sensors
Another opportunity for touchless interaction is by using imaging sensors such as a camera to interpret the world around the device. If a device can ‘see’, then it can offer new modes of interaction, such as physical gestures, motion tracking and facial recognition.
Facial recognition can be used as a security feature, eliminating the need for passwords and making unlocking a phone faster and more natural – just look at your phone and you can use it, no passcode required (although both this and Touch ID fingerprint scanners have been shown to be vulnerable to hackers). Samsung released a myriad of (albeit gimmicky) hands-free interactions with the Galaxy S4, including ‘Eye Scroll’ (eye-tracking technology that allows the user to scroll through a page just by moving their eyes), ‘Tilt Scroll’ (scroll a page by tilting the device), and ‘Air Gestures’ (control the phone using hand gestures). While a video is playing, the device can automatically pause it when you look away from the screen, then resume it when you look back.
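That ‘pause when you look away’ behaviour reduces to a small state machine driven by the face detector’s output. A sketch of the idea (not Samsung’s implementation – real systems debounce over many camera frames, and the frame count here is an invented value):

```javascript
// Drive video playback from per-frame face-detection results. Require a
// few consecutive 'no face' frames before pausing, so one missed
// detection doesn't stop the video.
function createSmartPause(player, missedFramesToPause = 5) {
  let missed = 0;
  return function onFrame(faceVisible) {
    if (faceVisible) {
      missed = 0;
      if (player.paused) player.play(); // viewer is back: resume
    } else if (++missed >= missedFramesToPause && !player.paused) {
      player.pause(); // sustained look-away: pause
    }
  };
}

// Example with a stand-in player object (a real one would be a <video> element).
const player = {
  paused: false,
  play() { this.paused = false; },
  pause() { this.paused = true; },
};
const onFrame = createSmartPause(player, 3);
onFrame(false); onFrame(false);          // brief glitch: keep playing
const afterGlitch = player.paused;
onFrame(false);                          // third consecutive miss: pause
const afterLookAway = player.paused;
onFrame(true);                           // looked back: resume
```

The debounce is the interesting design choice: without it, every blink of the detector becomes a jarring pause.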
In the living room, smart TVs are getting smarter by offering a whole host of touchless interactions. No longer do you need to hunt for the remote: instead, the built-in camera recognises your face and logs you in to your profile (giving you instant access to your favourite content), hand gestures control TV functions (swiping to navigate, grabbing to select), and finally a voice command turns the TV off when you’re done. It might sound like something from Tomorrow’s World, but amazingly it’s all possible from a £500 TV you can buy on the high street today.
Much of this technology was first brought into our homes by games consoles such as the Nintendo Wii and Microsoft’s Kinect for Xbox 360. The Kinect combines a 3D depth sensor, an RGB camera and a multi-array microphone to provide full-body 3D motion capture, facial recognition and voice recognition, allowing the user to control and interact with the console without touching a games controller. It was a game-changer that has brought touchless interaction into 24 million living rooms.
The Kinect hasn’t just been great for the living room, however: its software development kit (SDK) has allowed programmers to use the Kinect’s sensors in a wide variety of scenarios outside home entertainment. ‘Kinect Hacks’ are posted almost daily on the internet, ranging from art installations, music videos and live musical performance to robot control and even vascular surgery. A thriving community like the Kinect ‘hacking’ community helps showcase the possibilities of touchless interaction.
PrimeSense, the makers of the Kinect’s motion capture sensors, released an interesting video (warning: very cheesy) to showcase Capri, their next generation of 3D sensors, which they claim will bring precise, fast motion tracking to everything – from laptops and TVs to elevators, robots and appliances everywhere. It looks like ‘Kinect Hacks’ are going mainstream.
Leap Motion is a 3D motion control device that purports to have 200 times greater sensitivity than the Kinect. It’s a tiny device that plugs into any computer with a USB connection and enables touchless gesture interactions on the desktop, similar to the Kinect. We recently got one at The Real Adventure. It’s amazingly accurate and intuitive, it has real applications (beyond the initial sensation that you are Harry Potter or in Minority Report), and it has its own ‘app store’ full of tools and games to play around with. Autodesk have created a plugin that lets you use the Leap Motion to control Maya, a piece of industry-standard 3D graphics and animation software. By using the Leap’s sophisticated 3D sensors, you can manipulate 3D shapes on the screen using your hands, in a way reminiscent of a traditional model maker or sculptor – a natural interaction with all the benefits of using computer software (the ‘undo’ button being one).
Other motion controllers include the MYO armband, which lets you use the electrical activity in your muscles to wirelessly control your computer and other devices, and Haptix, a sensor that turns any flat surface into a 3D multitouch surface. Although the latter is not technically a touchless interaction, it’s another example of how we are moving away from traditional screens.
By giving a computer ‘eyes’, cameras and other motion sensors offer new types of interaction, from face recognition to gesture control. They open up the opportunity of interaction in new spaces, and provide a truly personalised experience. The next step will likely be a better understanding of context: by knowing who you are, where you are, what you are doing, and maybe even the emotions you are experiencing, computers will be able to adapt automatically to our needs.
4. Bluetooth and NFC
Wireless communication technologies such as Bluetooth and Near Field Communication (NFC) allow for interactions based on proximity and location. For a long time now, we’ve been promised both jetpacks and contactless payments through our mobile phones. Like the personal jetpack, it’s never quite happened – but it’s close. Whereas countries like Japan have been embracing NFC-based payments for years, they’ve never quite caught on in the West, partly because Apple has never incorporated NFC into the iPhone. Now, though, Bluetooth low energy (BLE) is being touted as the next big thing for contactless payments. By using BLE beacon transmitters and geo-fencing, consumers can be identified, along with their precise location. Apple markets this approach as iBeacon, and BLE chips ship in most modern phones, so support should be much wider than for NFC – it might just change the way we do commerce on the high street. Instead of queuing at the till to pay, the future may see our accounts automatically debited as we leave the store, and loyalty rewards offered based on our location history. Many see commerce without tills as the ultimate touchless interaction: walk into a shop, order a sandwich, eat, walk out. No fiddling around with card machines and no long queues for the checkout, as PayPal’s new Beacon solution demonstrates.
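How does a beacon know you’re nearby? Proximity is typically estimated from signal strength using a log-distance path-loss model. A sketch of the maths: the calibration constant is the beacon’s advertised ‘measured power’ at one metre, and the environment factor is an assumption (roughly 2 in free space, higher indoors):

```javascript
// Estimate distance (metres) from a beacon's received signal strength
// (RSSI, in dBm) using the log-distance path-loss model:
//   distance = 10 ^ ((measuredPower - rssi) / (10 * n))
// measuredPower: expected RSSI at 1 m (beacons advertise this value);
// n: environmental attenuation factor (~2 free space, ~2.7-4 indoors).
function estimateDistance(rssi, measuredPower = -59, n = 2) {
  return Math.pow(10, (measuredPower - rssi) / (10 * n));
}

// Bucket distances into the coarse zones that beacon APIs tend to report.
function proximityZone(distanceMetres) {
  if (distanceMetres < 0.5) return 'immediate';
  if (distanceMetres < 4) return 'near';
  return 'far';
}
```

The coarse zones exist because RSSI is noisy: a raw distance figure suggests more precision than the radio can honestly deliver.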
Bump, recently acquired by Google, is an app that allows two smartphone users to physically bump their phones together to transfer contact information, photos and files to each other over the internet. When users bump their phones, the software sends a variety of sensor data to an algorithm running on Bump’s servers, including the location of the phone, accelerometer readings, IP address and other sensor readings. The Bump algorithm works out which two phones felt the same physical bump and then transfers the information between them. (Bump makes transfers purely through software, whereas devices with NFC chips transfer data through a combination of software and hardware.) It’s an impressively complex use of various technologies, all working together in sequence to deliver a really simple, touchless interaction for the user.
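The matching step can be sketched as pairing up bump reports that are close in both time and space. This is only the kernel of the idea – the real Bump service combined many more signals, and the thresholds here are invented:

```javascript
// Pair up bump reports: two phones 'felt the same bump' if their reports
// are within a small time window and their locations are within a few
// metres of each other. Each report: { id, t (ms), x, y (metres) }.
function matchBumps(reports, maxDeltaMs = 250, maxDistanceM = 10) {
  const pairs = [];
  const used = new Set();
  for (let i = 0; i < reports.length; i++) {
    if (used.has(i)) continue;
    for (let j = i + 1; j < reports.length; j++) {
      if (used.has(j)) continue;
      const a = reports[i], b = reports[j];
      const closeInTime = Math.abs(a.t - b.t) <= maxDeltaMs;
      const closeInSpace = Math.hypot(a.x - b.x, a.y - b.y) <= maxDistanceM;
      if (closeInTime && closeInSpace) {
        pairs.push([a.id, b.id]);
        used.add(i); used.add(j);
        break; // each report joins at most one pair
      }
    }
  }
  return pairs;
}

// Example: two phones bump in the same café; a third bumps across town
// at the same moment and must not be matched with them.
const pairs = matchBumps([
  { id: 'alice', t: 1000, x: 0,   y: 0 },
  { id: 'bob',   t: 1100, x: 3,   y: 4 },
  { id: 'carol', t: 1050, x: 500, y: 0 },
]);
```

Requiring agreement on both dimensions is what lets one global server disambiguate thousands of simultaneous bumps.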
Wireless communication technologies are touchless by nature, but often activating them at the right time is the clumsy bit. Technologies like Bluetooth LE and NFC allow for data to be securely exchanged and managed behind the scenes, leaving people to get on with their lives. Exchanging files can become quick and seamless, and in retail, long queues can become a thing of the past with loyalty properly rewarded, improving customer retention.
So there you have it: four methods we can use to create touchless interactions – voice; accelerometer and gyroscope; camera; and wireless communication. By replacing or enhancing our graphical user interfaces with touchless interactions, we can make our products and services easier to use, more natural-feeling, and available in new environments, such as the living room or the car. As with anything new, there are mistakes to avoid – Samsung’s touchless innovations, for instance, feel more like gimmicks than solutions to any real user need. A touchless interaction needs to solve a problem and enhance a product or service, rather than complicate it. But where they are employed well, such as in the Bump app, touchless interactions make a product or service more pleasant to use, and therefore make the consumer more likely to use it.
Now, why not use your Leap Motion to interact with the sharing buttons below, or leave a comment using your browser’s voice input feature?
How to win friends and influence people
The ‘Call to Action’ (or CTA) is a ubiquitous marketing term describing a graphic or piece of text that prompts a user to take a desired next step. In print, a CTA could be a graphic at the bottom of a piece of direct marketing inviting the reader to dial a Freephone number. In the digital world, Calls to Action are more often than not buttons that prompt the user to interact with them in some way.
Among the digital marketer’s closest allies, the humble CTA button is relied on time and time again for mission critical purposes such as enticing users to find out more information about a service, join a CRM programme, donate to charity, or make a purchase. We’ve all fallen victim to a Call to Action at some point, clicking that ‘proceed to checkout’ button to buy some shiny new trainers, or tapping a ‘find out more’ banner and subsequently signing up for a newsletter. The success of a CTA is often closely scrutinised – and rightly so for such a key tool in the digital world – so it’s important to understand what makes a CTA a success or failure.
Continuing on from my article on link usability from a few months back, I’ve dug out some research and added my own thoughts to create a set of guidelines for making your CTA buttons effective.
1. Maintain a clear hierarchy
It’s important to give your Call to Action the weighting that it deserves and make it stand out on the page. This may sound obvious but all too often web pages use a single button style and important features end up getting lost amongst the page furniture. Simply put, a key button that you are measuring success against (such as ‘sign up’ or ‘checkout’) should never have the same weighting as the other buttons on the page.
Ideally, a webpage should have three distinct button styles, ranging from the most important, primary Call to Action, through to a slightly less important secondary button style, down to utility-style buttons that don’t need to stand out, but do need to exist on the page. Prominence can be given to buttons using a combination of size, white space, use of font and colour. The most prominent primary button style should be reserved for a single, key CTA per page – for example, Google uses a striking red button for ‘sign in’, while Twitter uses an eye-catching yellow lozenge for their ‘sign up for Twitter’ call to action.
Multiple CTAs with the same weighting on a page will compete with each other and fail to offer a clear path forward for the user, so it’s better to use a small number of distinct actions, and make it obvious to the user what you want them to do next.
Example of a clear CTA hierarchy in the Sold app
2. Supporting text is not a crutch
A clear Call to Action should function without the need to read all the text around it. Remember, users scan-read the web; no one reads copy word for word, no matter how much effort you’ve put into writing it. So the wording of a CTA should make sense in isolation and explain what it links to. This is not only usability best practice; it also enhances your search engine ranking and improves the accessibility of your website.
3. Communicate value
Never make your visitors ask “What’s in it for me?” – if they can’t immediately see the reason for clicking your CTA, they’re unlikely to bother doing so. The Apple website’s product pages are a great example of how to successfully communicate value in CTAs. By adding a few more words, they clearly communicate to the user both what will happen when they click the Call to Action and the value in doing so: a link will say ‘Learn more about the design of the iPhone 5 >’ rather than simply ‘Learn more >’. Incidentally, these longer wordings have led to Apple using text links rather than buttons, but successful use of white space combined with colour keeps them prominent.
There is, of course, a balance to be made between concise wording and communicating value or explaining what will happen next. It’s always best to be as succinct as possible; too much wording will make your CTAs harder to scan read, less punchy and will affect conversion rates.
A clear Call to Action that communicates value on the Apple website
4. Clarity above all

Copywriting for the web is a real art, and writing the perfect Call to Action is a huge challenge. CTAs need to be succinct, but most of all they need to be clear. There can be a temptation to take button copywriting down an emotional, storytelling route that follows a brand’s tone of voice, but I’d recommend leaving this style of writing for the supporting copy. Instead, keep Call to Action wording as straightforward and as clear as possible. Users need clarity about exactly what will happen when a button is clicked, or it will lead to confusion and a sub-par user experience.
We highlighted this issue in some recent usability research carried out here at The Real Adventure, which found that users were confused by CTA wording that didn’t explicitly signpost them to what they were looking for. Luckily, this was easily fixed, but it’s something we should all avoid in future. This article also shows how small changes to CTA copy can improve clarity and, in turn, positively affect conversion rates.
Example of unclear button wording, image sourced from UXmatters (‘Label buttons with what they do’).
5. Verbs get the job done
Verbs are integral to writing successful CTA copy because they encourage users to take action. Strong action verbs such as ‘call’, ‘watch’, ‘buy’, ‘register’, ‘compare’, ‘shop’ and ‘download’ will grab people’s attention and encourage them to act. Examples of action verbs in CTA copy include ‘Register now’, ‘Watch the video’ or ‘Buy now for £19.99’.
6. The gentle art of persuasion
Sure, it’s not pretty, but there are times when it’s necessary to ratchet up the sales patter a little, and aggressively go after that sign up or purchase. An example might be on a pay-per-click landing page where you only have limited time to convince your visitor to do something, in which case you can employ various techniques to make your Call to Action more compelling and increase its conversion rate.
Words that increase a sense of urgency, immediacy, ease and value will all help here. Urgency can be achieved by generating tension and excitement for your reader, employing CTA wording such as ‘now’, ‘immediately’, ‘today’ or ‘quick’. Immediacy can be reinforced by placing emphasis on time and availability, so you can stress that an offer is ‘for a limited time only’, ‘ends in 2 hours’, or that there are ‘only 5 items left’.
An emphasis on ease might mean spelling out that signing up for an account ‘only takes 60 seconds’, alleviating users’ concerns that it will be difficult or time consuming. Showing value could mean placing emphasis on a service being free or on special offer, removing perceptions of it being costly. By creating a sense of urgency and relieving users’ concerns, they should be more inclined to take action.
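One way to keep a scarcity cue honest is to generate it only when stock really is low, rather than showing it unconditionally. A minimal sketch – the function name and threshold are hypothetical, not from any real commerce platform:

```typescript
// Only surface a stock-scarcity message when it's genuinely warranted,
// so the urgency cue stays honest and doesn't train users to ignore it.
function stockMessage(itemsLeft: number, lowThreshold: number = 5): string | null {
  if (itemsLeft <= 0) return "Out of stock";
  if (itemsLeft <= lowThreshold) return `Only ${itemsLeft} left`;
  return null; // plenty of stock: no urgency message needed
}
```

Crying wolf with permanent “only 2 left!” banners erodes trust, so returning `null` for healthy stock levels is the important branch here.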
7. The secondary nudge
A secondary line of copy, positioned close to your CTA, can support your primary message. It can be used to alleviate users’ concerns about a service (as highlighted in point 6), or to feature a benefit that will encourage conversion. But as with any web copy, there’s no guarantee that it will be read, given how people scan-read, so it should never be relied on. It can also add to the visual clutter of the page, so you need to be sure it’s genuinely adding value. That said, a simple line under your CTA like ‘it’s free and only takes 30 seconds’ could be the difference between conversion and abandonment.
A line of copy to support a ‘Sign up with Facebook’ CTA on the Foursquare website
8. Icons can help
Icons can enhance a CTA by communicating a bit of extra information. A right-pointing chevron or arrow can help increase urgency and imply positive forward movement, a left arrow implies going back, a padlock icon suggests a secure process, and a cog suggests some ‘under the bonnet’ tinkering. In nearly all cases, graphics shouldn’t be relied on; they only exist to enhance the CTA text.
A secure padlock icon as used on the Lloyds TSB website
9. Don’t disappoint
Once the user has acted on your CTA, make sure you deliver whatever you promised them and carry it through, otherwise you’ll lose them. Fast.
10. Test, measure and refine
The only way to really understand which design and word combinations yield the best results is to try them out on your audience, using A/B and multivariate testing to see what works. Then test and refine again; an iterative approach will pay off in the long term. See the whichtestwon website for examples of testing strategies and results.
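A basic building block of any A/B test is stable bucketing: the same visitor must always see the same variant, or your measurements are meaningless. The sketch below shows one common approach – hashing a user ID to a variant – though the hashing scheme is illustrative and real testing tools handle this for you:

```typescript
// Deterministic A/B bucketing: hash a user id to a stable variant so the
// same visitor always sees the same version of a CTA across visits.
function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // keep as unsigned 32-bit
  }
  return h;
}

function variantFor(userId: string, variants: string[]): string {
  return variants[hashString(userId) % variants.length];
}
```

Because the assignment is a pure function of the user ID, no per-user state needs storing, and adding a third variant is just a longer array (though it reshuffles existing buckets, so it belongs at the start of a test, not mid-test).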
Sell, sell, sell!
So there you go – ten ways to make your CTAs more user-friendly. There are exceptions to every rule, but I hope that you find these guidelines useful. Now it’s over to you to go and create that million dollar CTA button. If you’ve got any more tips for creating great Calls to Action, please share them in the comments below.
Last Friday, a band of fellow Creston employees and I embarked on an epic 185-mile cycling adventure from Bristol to London, as part of the inaugural Tour de Creston.
The Grand Départ was from One Glass Wharf, home of The Real Adventure and EMO, where cyclists of all abilities – from occasional commuters to competitive rouleurs – gathered to take on the challenge. We were cheered on by a crowd of supportive colleagues outside the office, a theme that ran throughout the tour, with Creston employees helping each other along the whole way.
The sun was shining as we set off and we began the tour in relaxed style on the easy-going Bristol and Bath Railway Path. After briefly getting lost trying to find the route out of Bath (thanks Pete) we did a Weller and found ourselves ‘going underground’, entering the longest cycle tunnel in Europe, a cavernous mile-long hole burrowing through the hills to the south of Bath. This was memorable not only because of the refreshing change in temperature, but also because of the dinner party music being piped through speakers.
After a quick stop at the least welcoming pub in England, the pace was cranked up a notch as we attempted to chase EMO’s formidable MD Peter Brown up the biggest climb of the day, Midford Hill, a drag up to the village of Hinton Charterhouse. We then settled down into small groups and worked our way out through the Wiltshire countryside, past the Westbury White Horse, up and over Salisbury plain and on to Stonehenge where we would stop for the night in Amesbury. It was an amazing sight to see 30 or so cyclists snaking through the hills, all wearing the same Creston team kit, whilst two Jags in full Tour de Creston livery followed us as support cars. It was as close to feeling like I’m in Team Sky as I’m likely to get.
We wound down the day in a nearby pub, where I witnessed more than a few people eat two main meals, one straight after the other, and then tuck in to dessert. Hungry work, clearly.
Day Two saw us up early and tucking into a Little Chef ‘Olympic Breakfast’ (as eaten by Jessica Ennis). We then tried to convince our already stiff and achy legs that another day in the saddle (a total of 95 miles) was a good idea. We didn’t get off to the best of starts when the route took us down a muddy track that was full of sharp flint, tearing a few tyres right open.
New tyres fitted and punctures fixed, we rolled into Hampshire and on to the office of Marketing Sciences in Winchester, where we were warmly welcomed with a cake selection that would make Mary Berry’s eyes water. Refuelled, it was time to get back on the bikes before the lactic acid built up in our legs; we still had the small matter of another 70 miles in the saddle, and things were about to get hilly.
The next stretch was a scenic meander through the South Downs National Park, and after a quick break in Petersfield, the real slog began, with the amount of downhill never quite matching the amount of uphill on offer. We eventually came out the other side to more gentle terrain, before the route played its trump card, Leith Hill – the biggest climb of the tour – 80 miles into Day Two. It’s a climb nasty enough to be featured in the legendary ‘100 Greatest Cycling Climbs’ book, and it posed an interesting challenge to those with tired legs. Leith Hill done (along with the odd expletive), we enjoyed the view at the top and bolted down the other side into Dorking, and on to our final destination, Leatherhead.
It felt great to have survived Day Two; by this point we all knew that the hard work was done, and we relaxed in the knowledge that Day Three would be the ‘easy one’. After a few victory pints and a tasty Italian meal it was off to bed for a well-deserved rest.
Day Three started off with breakfast in the local Wetherspoons (yes we are classy), and before I could so much as digest a sausage, it was time to take on Box Hill. Famous as the venue for the 2012 London Olympic road race event, Box Hill is now the native habitat of the lesser spotted polka dot MAMIL, with legions of plump men and women whizzing up and down it. The Creston gang showed the locals how hill climbing is really done, with a dizzying display of dancing on the pedals and some rabid handlebar chewing.
Having claimed glory it was time for the home stretch, an easy run in to London via Richmond Park and along the Thames. Once past the Houses of Parliament and Buckingham Palace we reached our final destination – Creston HQ in Soho, after a total of 185 miles. Creston put on a fantastic rooftop BBQ for us and, after a whistle stop tour of the very impressive new TMW office, we all sat back and reflected on our achievements for a minute or two.
It was an incredible effort all round, from those who just rode for an hour to those who completed the entire tour. I witnessed people of all abilities pushing themselves to the limit and smiling the whole way. It was also great to meet members of other Creston agencies and find out more about what they do. Special kudos to my fellow Real Adventurers, Amy and Enrique, who completed the full distance with me; Enrique’s effort was particularly impressive considering he rode the whole thing on a £60 bike from a supermarket! Proof that if you try, you can.
A huge thank you to the organisers of the tour who put in so much effort and made it such a professionally organised event, Justin Moody (who selflessly gave up his place cycling to drive the support van at the last minute), the team at EMO for all their hard work, and the volunteers who drove the Jaguar Sportbrakes and saved many a cyclist in their hour of need…thanks so much! A heartfelt thank you also goes to Creston and our respective companies for allowing us to take some time out from work and for keeping us well fed and watered throughout the tour.
A real adventure indeed – bring on Tour de Creston 2014!
If you can, please donate to The MS Trust, the chosen charity of the Tour de Creston and a fantastic cause: http://www.justgiving.com/TourDeCreston2013
Here’s a post I wrote for The Real Adventure blog recently.
How one man’s dream became a billion links to kittens
Have you ever had to describe the World Wide Web to someone? Probably not, or at least not since about 15 years ago, when a less technologically savvy family member asked you to.
It has become such a constant part of our lives that it’s easy to forget exactly what the Web is. We’ve come to relate to it in terms of the content we view on it – news, Facebook, funny videos on YouTube, cute pictures of kittens and so on – rather than the medium itself. This is in much the same way that we might think of television in terms of watching EastEnders, the X Factor final or an Adam Curtis documentary, rather than as a box for receiving moving images and sound.
So what is the Web then? Yes it’s Facebook and funny kitten videos, but what is that content fundamentally made up of?
Tim Berners-Lee proclaimed, back in 1991, that the ‘reader view’ of ‘The WWW consists of documents, and links’. Brilliantly simple – no wonder it caught on.
As well as connecting the documents of the Web together, links are the workhorses of the Web – they act as signposts, showing us where to go. They build our ranking on search engines, persuade us to do things, help us learn and connect with each other, and add context and citation.
So, for such a key element, it’s surprising how often they under-perform in terms of usability.
The first rule of Link Club
So what are the rules for using links? How can we make them perform better? As a set of guidelines to help improve the usability of your links, I’ve dug through some research on the topic and added my own thoughts.
First a quick disclaimer – there are many types of links (navigation, calls to action, pagination, buttons and so on), but in this article I’m just talking about the use of links in body copy: the type of content we might all enter into a CMS or blog at some point or other.
1. Never use ‘click’ or ‘here’. Even worse: ‘click here’.
Not only is the phrase ‘click here’ device-dependent (it implies the use of a pointing device such as a mouse), it says nothing about what will be found if the link is followed. Instead of ‘click here’, link text should indicate the nature of the link target, as in ‘more pictures of funny cats’ or ‘text-only version of this page’. Just remember that while some of your readers may be using a mouse, they could equally be using a screen reader, voice control, keyboard or touch screen, or could have printed your web page on to paper.
A general rule is to avoid talking about the mechanics of the Web where possible – if you owned a shop, you’d write ‘Welcome’ on the door, not ‘Open this door to enter the shop.’ Using the word ‘here’ also conceals what the user is clicking – the link should ideally describe what it links to. More on that shortly.
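If your CMS or build process lets you run checks over content, a crude lint for vague link wording might look like the sketch below. The phrase list is illustrative, not exhaustive:

```typescript
// A rough checker for link wording: flags device-dependent or vague phrases
// like "click here" that say nothing about the link target.
const vaguePhrases = ["click here", "here", "click", "this link", "read more", "more"];

function isVagueLinkText(text: string): boolean {
  const t = text.trim().toLowerCase();
  return vaguePhrases.includes(t) || t.startsWith("click ");
}
```

A check like this can only catch the worst offenders; whether link text genuinely describes its target still needs a human eye.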
2. Be descriptive.
This follows on from Point One – links should describe what they link to, or do. If at all possible, the wording of a link should act as a page title for the content you are linking to. The user should be in no doubt about what will happen if they click a link, so their expectations are never dashed. We shouldn’t rely on the text around a link to explain it either, as we need to optimise for scan reading (see Point Three).
Another reason to be descriptive is search engine optimisation (SEO): descriptive link text helps search engines understand what the target page is about, which improves its ranking.
Sometimes we need some extra information in our description – is it a 5 kB or a 50 MB PDF? A 30-second or a 120-minute video? Help users avoid nasty surprises.
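A small helper could build this kind of descriptive label automatically. The function below is a sketch using decimal units, not part of any real library:

```typescript
// Append file type and human-readable size to link text so users aren't
// surprised by the download, e.g. "Annual report (PDF, 1.5 MB)".
function describeDownload(title: string, type: string, bytes: number): string {
  const units = ["B", "kB", "MB", "GB"];
  let size = bytes;
  let u = 0;
  while (size >= 1000 && u < units.length - 1) {
    size /= 1000;
    u++;
  }
  // Round to one decimal place unless the value is already whole
  const rounded = Number.isInteger(size) ? size : Math.round(size * 10) / 10;
  return `${title} (${type}, ${rounded} ${units[u]})`;
}
```

Generating the label from the file’s actual metadata also means it can’t silently go stale when the document is replaced with a bigger one.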
3. Optimise for scan reading.
A common misconception among authors is that their audience will read almost every word on the page, but sadly this is far from the truth. According to Jakob Nielsen’s research findings (‘How little do users read?’), users can be expected to read just 20% of the words on a page. This is because people don’t read on the Web, they scan. Scanning visitors cast their eyes rapidly over a page, picking out pieces of information to build their own mental picture of what the content is about. To the scanning eye, links (being underlined and in a different colour) stand out on the page, which means their wording will be used to make those quick decisions on whether to stay or leave. So in short – make them count!
Remember that the words you link will give readers a feel for the subjects covered in the page, so be careful not to unintentionally skew this by linking lots from one topic or keyword and not another. Try to balance your links so that a visitor can scan read your page and get a genuine feel for what it is about.
It’s also a good idea to keep the number of links on a page down – too many will disrupt the user’s ability to scan read the page easily, making the copy harder to read, and ultimately losing you visitors.
4. Keep it simple, stupid!
Keeping things simple and familiar is a good way of making them easy to use, and links are no exception. Links should look like links: underlined and clearly differentiated from other text through the use of colour (traditionally blue).
They should have appropriately sized hit areas; this has become more of an issue since the recent proliferation of touch screen devices. How often have you tried opening a link on a smartphone or tablet and accidentally got the one next to it? Links need appropriate spacing around them to avoid this, and hit areas should be large enough to click easily, as dictated by Fitts’s Law.
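Fitts’s Law quantifies this: the time to hit a target grows with its ‘index of difficulty’, commonly written in the Shannon formulation as ID = log₂(d/w + 1), for a target of width w at distance d. A quick sketch:

```typescript
// Fitts's Law (Shannon formulation): index of difficulty for hitting a
// target of width `width` at distance `distance`. Bigger, closer targets
// have a lower ID, so they are faster (and less error-prone) to hit.
function indexOfDifficulty(distance: number, width: number): number {
  return Math.log2(distance / width + 1);
}
```

The practical takeaway: halving a link’s hit area raises its index of difficulty, which is exactly why cramped tap targets on touch screens cause so many mis-hits.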
Links should nearly always open in the same window. The user can choose to open a link in a new window or tab if they like, but you shouldn’t make that decision for them. Don’t remove choice, keep the user in control.
5. Don’t radically alter link behaviour.
As I said at the start, hyperlinks date back to the dawn of the Web, and from that point up to today, we’ve got used to how they work. So if you decide to alter their behaviour, you are going to confuse people by confounding their expectations. Adding things like adverts that appear on rollover ruins the user experience and gains you very little in return. Please promise me that you won’t use them.
Go forth and link to kittens
So there you go – five ways to make your links more user-friendly. There are exceptions to every rule, but I hope that you find these guidelines useful. If you’ve got any more tips for improving the usability of links, please add a reply below.
A project that I’ve been involved with has just come to fruition: a new smartphone app that helps mums-to-be get ready for the rather daunting task of giving birth.
The app is designed to help expecting mums feel confident and prepared for labour by giving them practical tools and expert advice, with features such as to-do checklists, timely tips from medical experts and experienced mums, a contraction timer and useful articles. The app also includes a personalised birth announcer, so mums can share the news of their new arrival with the world: upload a photo of their newborn, add a message, choose a visual theme and share it with family and friends.
Features and content for the app were identified through user research: talking to recent mums to discover what they would have found helpful during pregnancy, then discussing and validating our ideas with healthcare professionals such as midwives and pregnancy advisors. I ran the UX side of the project, so I was instrumental in shaping its features and functionality, as well as designing the UI. It’s been a great project to be involved with – kudos to the brilliant team at The Real Adventure. It’s currently ranked number 3 in the App Store’s health & fitness category.