The final word on ‘the fold’ debate

There is a piece of feedback that strikes fear into the heart of any designer: “But what about the fold?” For many people in the design community, this issue is dead and buried. For clients, less so. But what’s the truth? Are designers being too dismissive of the problem? Are clients misinformed? As with many things in life, the truth is not as clear-cut as either side might wish to believe.

Here’s the final word on the debate. For now.

What is the fold?

The ‘fold’ is an old concept from newspaper publishing, which held that the highest-value content should sit on the top half of the newspaper’s front-page design. This is because newspapers were typically displayed for sale folded in half, with only the top half visible to the passing customer.

In web design, it refers to the area of the page that’s visible without any scrolling. The assumption is that the same logic applies: if you can’t see what you’re looking for straight away, you might leave – particularly as people supposedly don’t tend to scroll. But what’s the truth?

Do people scroll?

One of the arguments against the fold being a problem on web pages is that people scroll, so we don’t need to be too concerned about what they see on first view. This is true – evidence shows time and time again that people do indeed habitually scroll down when they visit a webpage, so we shouldn’t worry that they won’t scroll (*see the caveat below, though!).

Is there even a fold?

Today, websites are viewed on many devices, with huge variations in screen size. Many people think that this variation has made the fold debate obsolete. However, despite this variation, responsive web design actually means that all devices end up seeing very similar variants of the header area of a webpage.

* When is the fold a problem?

There is an occasion when people do not scroll. This is when the user incorrectly perceives there to be nothing more to see on the page than what is currently visible in their browser viewport. This is rare, but is sometimes the result of design cues that discourage scrolling – stark, horizontal lines at the bottom of the screen (a ‘false floor’), or designs that ‘snap’ perfectly to the browser’s viewport.


The fold is not an impenetrable barrier. People will scroll down*.

We do not need to try to put everything above the fold. However, we do need to communicate value as quickly as possible, to keep the user on the page and stop them from leaving.

So what they see above the fold is still important – as it sets expectations for the rest of the page. For example, a page headline that successfully communicates the value of staying on the page is incredibly important as it will help stop people leaving the page and encourage them to explore by scrolling down.

People do scroll, but they also bounce or exit if they don’t quickly see value, or aren’t where they thought they were.

The further you put something down the page, the fewer the people who will see it.

This is a numbers game in which drop-off rates rise the further down the page you get – something scroll maps in Hotjar (an analytics tool) show us quite clearly. Page hierarchy is therefore important: typically, high-value content should be at the top, lower-value content at the bottom.

Does this mean that ‘long’ pages are a problem? No – interesting content is what keeps someone on a page, and keeps them scrolling down. For example, our new Hospital Bag checklist page for C&G Baby Club has 17min+ page times from PPC.

A page should be as long as we can make it without losing the audience’s interest or the content’s relevance. Pages are only too big when they become slow to load – and even then, clever tech can mitigate the problem.

  • So does the fold matter? Yes, in the sense that it’s important to make a great first impression, and to let people know where they are.
  • Do people scroll? Yes, unless you trick them into thinking there’s nothing more to see.
  • Can a webpage be too ‘long’? Only when we’ve run out of things to keep it interesting or relevant.

You’ve made it this far

Now, let’s promise to never talk of the fold again, and instead focus on making web pages awesome for our users. Thank you.

Posted in analytics, design, psychology, research, UI, Usability, UX

Stop saying that word ‘Millennial’


Barely a day goes by without a new report claiming to provide insights into the surprising habits of so-called ‘Millennials’.

Articles such as ‘How Millennials are changing the face of retail’, ‘Millennials and mobile: what marketers need to know’, and ‘Over 80% of millennial generation see mobile TV content as essential’ pepper the marketing press. A search of the Econsultancy website returns 13,900 results for ‘Millennial’, suggesting that there’s a lot to discover about this generation.

Get closer

Why the obsession with ‘Millennials’? Anyone who works in marketing needs to understand their target audience if they want to communicate effectively with them, and businesses see young people as both a huge opportunity and a tricky challenge. Understanding them, and appealing to them, is key.

However, the problem with obsessing about ‘Millennials’ is that it risks creating distance between businesses and their audiences, rather than a deeper understanding. Occasionally it is useful to talk about a generation as a whole (‘baby boomers’, ‘Gen X’, etc.), but the downside of doing so is that it involves describing huge swathes of the population and their habits with the broadest of brushstrokes. Herein lies the danger: ‘Millennials’ don’t actually exist, but people do.

3 reasons to stop saying ‘Millennial’

1. It creates distance, not empathy

‘Millennial’ conjures up a caricature of a person who spends all day on Snapchat sharing selfies whilst watching PewDiePie and Zoella on YouTube. Sure, these things are popular, but the cliché of the ‘Millennial’ seems to get more absurd with each article shared within the marketing industry, each adding another layer of veneer to the myth of the ‘Millennial’.

If we believe everything we read about this generation, they become so distant from the rest of the population that you might wonder if they are even human at all. To illustrate this, someone’s made an amusing Chrome Extension that replaces the word ‘Millennial’ with ‘Snake People’ in web pages, exposing the absurdity of journalism that describes a generation as being so different to the rest of us that they may as well be aliens.

2. Most people don’t even know what ‘Millennial’ means

Within the marketing community, ‘Millennial’ seems to have become a byword for people born around the year 2000; ‘digital natives’ who grew up with the web and smartphones; or, in the worst circumstances, simply ‘people younger than me’. However, according to Wikipedia, ‘Millennials’ or the ‘Millennial Generation’ actually refers to Generation Y, ‘the demographic cohort following Generation X’, with ‘birth years ranging from the early 1980s to the early 2000s’ – so a ‘Millennial’ could be 35 or they could equally be 15.

This lack of definition makes it a dangerous phrase as it could be interpreted in wildly different ways. Someone in their mid-thirties and someone in their teens are so likely to have such different behaviours and worldviews that it’s not particularly useful to talk about them in the same breath.
We need to be more specific about who we are describing if we want actionable insights.

3. Assumptions are dangerous

Judging by the number of articles available, there is an insatiable appetite for information about ‘Millennials’. Skimming these stories, you can build up a picture of the ‘Millennial’ generation, but this linkbait-fuelled myth is likely to be wrong. A while back I attended a great talk by Dan Healy, User Experience Consultant at Nationwide Building Society.

He recalled the tricky task of recruiting 11- to 17-year-olds to use Nationwide’s new FlexOne young person’s current account. His research with this audience debunked many of the clichés about what ‘Millennials’ want and how you should communicate with them, and highlighted the dangers of making assumptions about their behaviours. For example, something as seemingly dated as signing a form with a pen to open a first bank account was seen by young people as a rite of passage and a key moment in becoming an adult, not a symptom of a bank out of touch with young people or technology.

Because we’ve created the myth of the ‘Millennial’, a species oh-so-different to the rest of us, confirmation bias comes into play. We are willing to believe the crazy articles we read as it confirms our existing belief that ‘they’ are different from the rest of ‘us’. This helps explain the huge amount of demand for articles about the habits of ‘Millennials’ online, and the competition to make more and more outlandish claims about this generation. This Millennial Insight Generator mocks this, by generating randomised ‘insights’ about their habits, ranging from the banal to the outrageous.


Don’t be that guy

My hunch is that there are a lot of (middle-aged) marketing professionals out there who are panicking, as they feel out of touch with younger audiences, and are willing to lap up the myth of the ‘Millennial’. The generation coming through is important to every business. They are the new workforce, the new consumers, the new parents. We must connect with them or risk failure. However, let’s do that by speaking to and understanding them, not by reading link-bait hyperbole online.

Being more specific about who we are talking about (e.g. ’17–21-year-old British females in full-time education looking to buy their first car’) results in tangible insights that just aren’t possible when you use blanket terminology such as ‘the Millennial generation’. My advice to any victim of the ‘Millennial Bug’ is to get out of your office and open up your ears – hang out with people who aren’t like you (or your colleagues!). Commission proper, targeted research with your customers and understand them. User research doesn’t have to cost the earth – there really is an option for every budget – and you might just be surprised by what you learn.

Posted in research, social media

Google just open-sourced AI – 5 ways this changes everything


We’ve said before that the future of UX is Artificial Intelligence because of the huge possibilities it opens up, but for most projects the benefits of AI have been out of reach, due to the (not insubstantial) cost and technical implications. So, last Wednesday, when Google announced the Cloud Vision API, there was excitement among the more geekily-inclined Real Adventurers.

Suddenly huge AI potential is at our fingertips, for any project.

What is Google Cloud Vision API?

Google is seemingly on a mission to open up AI and related technology, such as Deep Learning, to the masses, through initiatives such as TensorFlow, the fruit of the brilliantly named Google Brain team. The Brain team have flexed their collective cerebral cortex once more and given us Cloud Vision, an API that allows your app or website to understand the content of images.

In Google’s words, ‘it changes the way applications understand images’.

For the less technical among you, an API is essentially a service that sends data back and forth over the Internet. Google’s APIs allow anyone to access the power of their cloud-based supercomputing. Want your app to include maps or directions? Use the Google Maps API. Want to harness the capabilities of Google Search on your website? No problem, use the Search API.

The Cloud Vision API allows your website or app to send images to Google’s servers and, in turn, receive data that describes the content of the image. So send it a photo of a beach, and its computer vision technology will analyse it and tell you that it contains a palm tree and a sun, almost instantaneously.
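As a rough illustration, here’s what such a request might look like. This is a sketch rather than production code: the endpoint URL and the JSON shape (base64-encoded `image.content` plus a list of `features`) follow the Cloud Vision `images:annotate` REST format, but we only build the request body here – sending it would need an API key and a live connection.

```python
import base64
import json

# The Cloud Vision REST endpoint (an API key would be appended in practice).
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes, max_results=5):
    """Build the JSON body for a label-detection request.

    The API expects the image base64-encoded, plus a list of 'features'
    describing which kinds of analysis you want back.
    """
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

# Build (but don't send) a request body for some raw image bytes:
body = build_label_request(b"<raw image bytes>", max_results=3)
print(json.dumps(body)[:40])
```

A real call would POST this body to the endpoint and receive back label annotations such as ‘palm tree’ or ‘beach’, each with a confidence score.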


Anyone who has used the brilliant Google Photos app will have experienced the clever tech behind the Cloud Vision API. Google Photos automatically categorises the thousands of photos on your phone (the ‘average’ person takes 1,800 photos a year with their phone), allowing you to sort through your pictures in new ways. For example, Google Photos will automatically group all the photos of your cat together, so you can ‘paw’ over them at your leisure. This is only possible because the software knows what a cat looks like – and that’s the key to the power of the Cloud Vision API – it allows software such as websites and apps to ‘see’.

An example of Google Photos’ automatic categorization


5 reasons this technology is awesome:

1. Object recognition

The biggest deal has to be the potential uses for object recognition in photos. See something you’d like to buy? Point your camera at it, and then find it for the cheapest price online. Maybe you have a healthy eating app – point your camera at a food item and see its likely nutritional values. Is the food safe to eat in pregnancy? The possibilities are vast and the experience should feel effortless for the user.

2. Facial detection

The API allows you to detect multiple faces within an image, along with key facial attributes such as emotional state or whether the person is wearing headwear. Google has made it clear that the API can’t personally identify faces, for privacy reasons, meaning it can’t be used for personalisation – but the ability to detect people and their emotions still has great potential. Automated chat AI could harness this capability and respond differently, based on the user’s likely emotional state. Support lines could prioritise queries from the most irate customers – or make them wait so they can cool down. You decide!
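The support-line idea could be sketched in a few lines. The likelihood field names (`angerLikelihood`, `sorrowLikelihood`) and the `VERY_UNLIKELY` to `VERY_LIKELY` scale mirror the Vision API’s face-annotation response format; the triage scoring itself is entirely our own hypothetical logic.

```python
# Map the API's likelihood strings onto a numeric scale.
LIKELIHOOD_SCORE = {
    "VERY_UNLIKELY": 0,
    "UNLIKELY": 1,
    "POSSIBLE": 2,
    "LIKELY": 3,
    "VERY_LIKELY": 4,
}

def triage_priority(face_annotation):
    """Return a queue priority for a support query: angrier or more
    upset-looking customers get a higher number (and jump the queue)."""
    anger = LIKELIHOOD_SCORE.get(face_annotation.get("angerLikelihood"), 0)
    sorrow = LIKELIHOOD_SCORE.get(face_annotation.get("sorrowLikelihood"), 0)
    return max(anger, sorrow)

# A mock face annotation for an unhappy-looking customer:
irate = {"angerLikelihood": "LIKELY", "joyLikelihood": "VERY_UNLIKELY"}
print(triage_priority(irate))  # → 3
```

(Or, if you’d rather let people cool down, sort the queue the other way round.)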

3. Make your products faster and more useful

There’s a tendency to think big with these new bits of tech, but I think it’s a good idea to think small as well. AI-powered micro-interactions could be an opportunity to make an interface more useful, faster and a delight to use. For example, if your coffee-themed app has a ‘share your latte art’ feature, why not suggest photos of latte art from the user’s phone or computer, instead of making them wade through all of their photos looking for them? Developers could also use the API to add metadata to their image catalogues to make it easier for people to find what they are looking for.

4. Moderation

Moderation isn’t at the top of many people’s lists, but it remains important. On a large community or user-generated content project it can be a costly overhead, particularly if it means people trawling through millions of images looking for photos that break guidelines or terms and conditions.
The Cloud Vision API can tap into Google’s SafeSearch functionality and flag photos with inappropriate content (e.g. pornographic or violent). It can also detect popular product logos in photos, potentially useful in scenarios where logos or brands aren’t allowed in a competition entry. The API even has the ability to detect text within images, along with automatic language identification – another potentially useful tool in the moderation of user-driven content.
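A minimal sketch of such a moderation gate might look like this. The category names (`adult`, `violence`) and likelihood values follow the SafeSearch annotation format; the threshold at which to involve a human moderator is our assumption, not a Google recommendation.

```python
# Likelihood ratings at which we'd rather have a human take a look.
FLAG_AT = {"LIKELY", "VERY_LIKELY"}

def needs_human_review(safe_search_annotation):
    """Flag an image for moderation if any sensitive SafeSearch
    category is rated at least LIKELY."""
    return any(
        safe_search_annotation.get(category) in FLAG_AT
        for category in ("adult", "violence")
    )

# Mock SafeSearch annotations from two uploaded photos:
print(needs_human_review({"adult": "VERY_UNLIKELY", "violence": "LIKELY"}))  # → True
print(needs_human_review({"adult": "POSSIBLE", "violence": "UNLIKELY"}))     # → False
```

The same pattern would extend to logo or text detection: flag anything the API spots, and let a human make the final call.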

5. Beyond apps and websites

Google has made it clear that this technology isn’t just for websites and apps. Drones, robots, automated cars and the whole Internet of Things can benefit from being able to see and understand what they are looking at. A robot could approach someone smiling, but avoid someone aggressive (but let’s avoid building Robocop please).
Sony is already using the technology to process millions of pictures being taken by its Aerosense drones.

The boring (but important) bit

As always, we need to remember and respect people’s right to privacy and data protection, and let them know that their content will be processed by or stored on Google’s cloud servers. But if we use it in the right way, and lead with the benefits, it should be a no-brainer. Google also needs to unveil the pricing plan for the product – it’s likely to start off free and then have tiered pricing for ‘enterprise’-level access to the API (the more you use it, the more likely it is you will have to pay for it).

Get inspired!

We are only just getting started with AI. As more of its power becomes publicly available through APIs and open-source software, consumers’ expectations will start to change. What seems pie in the sky today will be commonplace tomorrow. It’s an exciting time to be working in this industry.

Check out this video from the Google Cloud team to get those ideas flowing…

Posted in AI, apps, Futurology, mobile, photography, technology, UI

8 tricks of psychology for better customer experiences

At The Real Adventure Unlimited, we design for people. Every day, we plan, design and develop products and services that people interact with. In order to do this effectively, a deep understanding of the audience is key. By tapping into the psychology of our users, we can make work that is not only effective for our clients, but also offers a delightful experience for our customers.

Human psychology is a big, often daunting topic, but for those involved in design, strategy, sales or development, the importance of a basic understanding of how we think and why we act in certain ways can’t be overstated. In this article I list eight quirks of human thinking and behaviour that can help create magical customer experiences.

This blog post is a written version of a learning lunch that I put on at The Real Adventure Unlimited office recently. These lunches are great opportunities for sharing knowledge with each other, and clients.

Cognitive biases

Cognitive biases are nuggets of easy-to-understand psychology goodness. They explain fascinating human tendencies to think and act in certain ways. They are the result of our evolution, culture, and environment. Our brains are hardwired with automatic responses to situations, which speed up problem-solving and decision-making, so we need not consider common situations afresh every time we encounter them. As well as helping us, they can hinder us, as sometimes these biases prompt us to act in seemingly irrational ways. This means they are open to abuse by evil forces in marketing, but we prefer to use them for good, to give the customer a better experience. After all – no matter how smart we think we are, we’re all susceptible to cognitive biases.

There are many cognitive biases, with more being discovered all the time – Wikipedia lists them in the hundreds. Here are just eight cognitive biases and how we can use them in our work.

1. Goal-gradient effect

A coffee shop loyalty card with 12 boxes, two of which are pre-stamped, will be filled more quickly than a card with 10 boxes and no pre-filled stamps. Both require the customer to collect 10 stamps, so why the difference? It’s because the first card gives them the illusion of progress, and progress is motivating. The goal-gradient hypothesis says you will accelerate your behaviour as you near your goal.

This effect was identified in 1934 by Clark L. Hull, who found that rats running in a maze would get faster, the closer they got to their food. Knowing about this cognitive bias is useful in several ways:

  • The shorter the distance to the goal, the more motivated people will be to reach it
  • Even the illusion of progress is motivating
  • People focus more on what’s left than what’s completed
  • People enjoy being part of a reward programme
  • When starting customers on a loyalty scheme, giving them a head start will help them move through it quicker

2. Choice paradox

Is more choice better? You may assume that people like to be given lots of choice, but it turns out there’s a limit to how many options humans like to cope with. What’s more, giving customers too much choice could hurt your business.

In 2000, psychologists Sheena Iyengar and Mark Lepper conducted an experiment, selling jam at a food market stall. On one day they offered a choice of six jams, and on another day a choice of 24. Consumers were 10 times more likely to buy when offered six options than when offered 24; and, intriguingly, they reported greater satisfaction with their purchases. Other studies have shown that in speed dating you are more likely to select a match from six dates than from 10. It seems the more choice people are given, the more likely they are to fail to make a decision.

Some people have since pointed out that Starbucks has 80,000+ drink choices, and yet they seem to be doing alright for themselves. While this may be true, I’d point out that Starbucks shows only a small number of choices on their menu, but they allow the customer to customise their choice as they see fit. Choice is hardest for those who are undecided and most paralysing for the customer when all the choices are displayed at once.

People are happiest when they feel in control, and having choice is a big part of that, but we must remember not to paralyse our customers with too many choices, as they will vote with their feet and go elsewhere.

3. Aesthetic-usability effect

The aesthetic-usability effect is a bias whereby users perceive more aesthetically pleasing designs to be easier to use than less aesthetically pleasing designs, even if in reality they aren’t. Aesthetically pleasing designs have a higher probability of being used, reminding us of the importance of investing in great visual design. If we want better engagement with our work, then we need to ensure it’s as aesthetically pleasing as possible to the target audience.

We know that design that is pleasing to the eye gives our customers greater confidence in their ability to use (or learn to use) our products. This is no secret to anyone who works in design; brands such as Apple have used this cognitive bias to build billion-dollar companies. As the legendary designer and psychologist Don Norman says, ‘Attractive things work better.’

By investing in great design and coupling it with consumer insight, we can apply the aesthetic-usability bias to features that appeal to the target audience, and ensure new products and services are successful.

4. Anchoring effect

The anchoring effect is a cognitive bias that describes the tendency for people to rely too heavily on the first piece of information offered (the ‘anchor’) when making decisions.

An example of the anchoring effect can be seen in credit card statements, where you are invited to make a ‘minimum payment’ each month. Research has shown that without these suggested minimum payments, credit card debts are paid off much quicker, with the customer paying as big an instalment as they can. But it is in credit card companies’ interests for customers to stay indebted to them for as long as possible, so they’re deliberately anchoring their customers with the suggested payment. This tempts customers to pay off a small amount, rather than allowing them to make up their own minds on how much to pay each month.

Anchoring is often employed by charities, which provide suggested donation amounts, such as £5/£20/£100 options. The higher donation suggestion (£100) works as an anchor, so the donor is likely to choose the middle donation amount (£20), rather than the lowest one. Restaurant menus use similar tactics, using a single high-cost item on the menu as an anchor to encourage customers to spend more on food.

Studies have shown that anchoring is very difficult for people to avoid, so decision-makers need to remember that customers will be anchored by pricing structures and other information, whether you intend them to be or not.

5. Surprise heuristic

Whenever we speak to users of our CRM programmes, it’s often the unexpected or unusual aspects of the programme that they remember most vividly, and speak about most positively. The careline you can call 24/7 and get great support from; the pack a mum received in the post that contained bubble bath and suggested they take a break from their busy day and pamper themselves. People react well to surprises, so we need to remember to build them into our work. What’s better than getting a gift in the post from a loved one, when bills and pizza menus are all you usually find on your doormat?

Like many cognitive biases, our tendency to react well to surprises comes from our evolution – in order to survive as a species it’s served us well to seek out and explore new opportunities, to go exploring over the horizon and find new resources. Punctuating the monotony with a pleasant surprise can be the key to creating a great, memorable customer experience.

6. Social proof

The other weekend I was in Weston-super-Mare – the ‘jewel’ of Somerset. I was with some friends, and we fancied getting fish and chips. Faced with a bewildering choice of chippies on the seafront, and a lack of local knowledge, we instinctively went to the one with the longest queue. Meanwhile, the chip shop next door had virtually no queue and might have had better food on offer. Why didn’t we just go to the one with the shortest queue?

The reason for our irrational behaviour is our desire for social proof (aka ‘informational social influence’). When humans are undecided, we tend to follow the patterns of others. Again, this is innate; following others is an evolutionary safety mechanism. By drinking from the wrong stream, we might get ill. If we go to where others get their water, we’ll probably be ok.

Building social proof into our digital experiences can be a powerful way to tempt the undecided to act – as the final trigger in motivating someone to make a purchase. ‘Our bestseller’, testimonials, likes and reviews are common ways of providing social proof, but harnessing the power of social networks such as Facebook can give a more powerful way of doing this – knowing that 10 of your close friends bought an item can be a more powerful signal than reading 200 reviews.

7. Peak-end rule

Rather than judging an experience in its entirety, humans have a tendency to judge an experience by its peaks – its most intense points and its end. So even if your two-week holiday sat on a beach in the Caribbean was pleasant but uneventful, having your luggage stolen on the way home may cause your long-term memory of the holiday to be negative. This may seem illogical and, like much human behaviour, it probably is, but it’s something we should consider when designing customer experiences.

At Disneyland, waiting times for attractions are deliberately overestimated, so when customers reach the end of a long queue, they are pleasantly surprised that it didn’t take as long as they were expecting. Suddenly, the tediousness of queueing becomes a positive experience.

Often our customers have to do things that they might not necessarily want to do (like queuing, or registering on a website), but they will quickly forget these niggles if we can create positive peaks and ends that punctuate the experience. Instead of allowing a CRM programme for new mums to tail off, why not end it on a positive note by giving them a book that shows their journey from pregnancy to having a toddler? This positive experience at the end will leave a lasting good memory of the brand, encouraging them to recommend the brand to others or use it again. We should always consider where we can create peaks in a customer journey, or how to end it on a positive note.

8. Scarcity heuristic

Scarcity heuristic is a bias that places value on an item, based on how easily it might be lost, especially to competitors. Scarcity can lead to snap decisions to buy, hence supermarkets selling out of bread due to ‘panic buying’ during a perceived time of shortage. In our office, similar behaviour is triggered by a company-wide email with the subject line ‘CAKE!’ on someone’s birthday. As you might expect, this defensive behaviour is born out of our survival instinct.

As we can’t help but value scarce items, shops have been using scarcity as a way to drive sales for as long as anyone can remember. ‘While stocks last’, ‘This week only’ and ‘Last one!’ are all seen on the high street, but scarcity is also a powerful weapon in digital. Invitations to join services such as Gmail, Google+ and, more recently, the ad-free social network Ello have deliberately been made scarce to encourage demand and increase the value placed on them.

Travel websites such as Expedia make much use of scarcity signifiers such as ‘Only 1 room left’, later sending emails reminding you that you have ‘Only 2 days left to review’. It’s worth remembering that such signifiers add to the cognitive burden for the reader, and need to be shown only at the right time, as a nudge to motivate them to act. Scarcity can be a powerful ally – but it can just as easily annoy your audience.

Jerry Springer’s final thought

So there you have it, eight quirks of human behaviour that we can use to create better customer experiences. But before you start rubbing your hands and considering how to use these cognitive biases to trick your customers, remember that, in the long run, customers are likely to form a negative impression of a brand if they feel they are being deceived. Let’s not forget, we’re all vulnerable to the power of cognitive biases.

However, if we use an understanding of psychology to create a good experience for customers, success will follow.

So, until next time, take care of yourselves… and each other.

Posted in psychology, strategy

Humans need not apply

Posted in film, Futurology, technology

IPIA Conference

I’m excited to be speaking at the IPIA Conference tomorrow, where I’ll be talking about Natural User Interfaces and touchless interaction.

Posted in Futurology, Talks, UI, UX

The Nouvelle Vague

Link bait

‘What I learnt about these headlines shocked me to my core. Here’s what you need to know.’

Link-bait. We’ve all fallen victim to it at some point. Those links that are deliberately vague, yet subtly claim that they will somehow change our lives if we act on them. They leave us hanging and make us feel compelled to learn more – because as humans we just can’t help but be curious.

What makes it link-bait?

Link-bait is a term for links to content that are in some way deliberately provocative. I define link-bait as link copy that has been purposely written to target and exploit our natural curiosity. It’s a tease – often deliberately vague, whilst simultaneously claiming to offer great value to the reader. It is the opposite of the pursuit of clarity, often cited as a usability best practice, where the user knows what to expect when clicking a link.

As well as being deliberately vague, link-bait often employs devices such as:

  • Outlandish claims about the emotional power of the content (…if you don’t cry when you read this, then you’re not human).
  • Implying that something wholly unexpected happened (…and what happened next blew my mind).
  • Lists with attention focused on a particular item (25 examples of awesome cats. Number 7 is my favourite – WOW!).
  • A two-part narrative (this seemingly unimportant event happened. But actually, it was completely life-changing).
  • Implying that your current view of the world will be challenged by what you read (hint: it’s not what you think).
  • That the content is somehow unmissable, thus triggering our ‘fear of missing out’ (here’s what you need to know…).
  • Use of timeliness to instill a sense of urgency in the reader (…and we don’t have long to stop them).

Curiosity killed the @

By exploiting our natural weakness for curiosity and our desire to get answers, link-bait gets our attention, which means it’s fast becoming the de facto language for publishers on social media. On platforms like Facebook, where attention spans are short and the desire for content to get noticed (and shared) is high, feeds have become flooded with link-bait.

Whereas tabloid newspapers learnt that headlines such as ‘FREDDIE STARR ATE MY HAMSTER’ sell copies of The Sun, it seems that publishers on Facebook (Upworthy, Huffington Post, Sumofus, Buzzfeed, Viral Nova et al) have found that ‘I can’t believe what this celebrity did. You won’t either. (Hint: it’s not what you think)’ works for them.

But why do we find these vague links so hard to resist?

Feeding our dopamine

As a species, it’s our inherent curiosity that’s taken us from cave dwellers to landing on the moon and exploring the depths of the oceans. Dopamine in our brains causes us to want, desire, seek out, and search – it’s a chemical that compels us to be curious. Dopamine has been the key to our evolutionary success, keeping us motivated to explore our world, learn and survive. It’s this same chemical that means we can’t resist the dangling carrot of link-bait.

Dopamine has made us a species addicted to seeking information, so grazing Facebook and finding intriguing links, which we can quickly consume, share, and forget about, is the perfect dopamine-enhanced experience. Dopamine is so motivating, and clicking links is so easy, that often we’d rather click and know what a link is about, even if we are already pretty certain that it doesn’t interest us.

Our dopamine system is further stimulated by unpredictability, so giving a vague description of what the link is about sends our brains into dopamine overdrive. Brain scan research shows that our brains are more stimulated by anticipating a reward than by actually receiving one. When we see link-bait we get a big dopamine hit trying to guess what it’s about; then, having clicked and learnt what the link was actually for, we quickly move on and forget about it. Add to this an emotional hook (‘This shocked me to my core’, ‘If you don’t cry when you read this, then you’re not human’, etc.), and we can’t help clicking.

It’s no wonder that for a lot of publishers, being vague is now seen as a key part of their formula for creating the perfect viral content.

Crying wolf

Link-bait gets clicks. Because of this, it’s becoming more and more prevalent, particularly on social media feeds such as Facebook. To get noticed, publishers are making more and more outlandish claims about the power of their content. It feels like a race to the bottom, but no matter how driven by dopamine we might be, as consumers, we aren’t stupid. If clicking that link doesn’t give us value, we will stop clicking. The oft-mocked content publisher Upworthy points out that although it optimises its headlines to the nth degree, it’s the quality of the content that is the real key to its success – people just really like it. After all, the real success of content marketing is getting shares, not just clicks. Yes, we might click to see what the story is actually about, but we will only share if it’s great content.

For me it seems a shame; great content shouldn’t have to rely on cheap tricks to get noticed. ‘Vaguebooking’, an intentionally vague Facebook update that is used to get attention, is regarded as a social media faux pas, purely because it is annoying. In the same way, this vague style of copywriting has an air of cheap desperation about it. It’s worth noting that, to my knowledge, this strategy hasn’t been employed by any ‘quality’ publishers such as the BBC, Guardian, Telegraph and so on. Maybe it does suit Buzzfeed’s latest list of funny cats, or a trashy opinion piece on the Huffington Post – but does it suit your brand? How you communicate plays a big role in how users perceive your brand.

In the long run, I don’t see the current trend for link-bait as being sustainable, because once the technique reaches critical mass and every publisher is doing it, publishers will no doubt move on from link-bait in order to get noticed again. Perhaps publishers will simply have made too many claims on how amazing their content is, and consumers will stop believing it and clicking. Ultimately, link-bait is just one way to ensnare our interest for a brief moment, in a world where attention spans are forever getting shorter.

Whatever happens, great content will always be shared and enjoyed. In the meantime, look out for those vague links in your Facebook feed, and see which ones get your dopamine levels buzzing.

Posted in copywriting, social media, UX

AI: horror or delight?

The UX of AI

Artificial intelligence (AI), as you no doubt already know, is an umbrella term for intelligent software and computing. For software to be deemed intelligent, it needs to be capable of things like reasoning, knowledge, planning, learning, communication and perception; in essence, it needs to be more human. As computers get ever faster, and we move another step closer to technological singularity, AI is becoming more and more commonplace and achievable, even within modest budgets. But as AI slowly takes over, how do people feel about this ‘new’ technology, and what impact is it having on user experience?

Don’t mention the war

In some research carried out on an app idea we’d developed, users were put off by AI, the app’s killer feature. ‘It learns about you’, they were told, ‘and based on that learning, it gives you great suggestions’. We were unlucky that we didn’t get to present the work ourselves, and perhaps we would have worded it rather more sensitively, but nonetheless, the users were turned off by the ‘learning’ part of the equation. They said that the thought of it learning about them was a bit creepy and it might put them off using the app. So, with that, another idea was confined to the shelf of history.

Of course, every day we use services that do learn about us, but for the most part we just don’t realise it, or if we do know, we don’t care because we’ve already been sold on the benefits. If we knew how other services we use learn about us, maybe we’d call them creepy too. But, as consumers, we don’t buy things for the way that they work, we buy products – products that we believe will improve our lives. To a user, Facebook is a space to share things with your friends, not a complex series of machine learning techniques that attempts to mimic how your brain works. It’s actually both things, my point being that the perceived value of your product is more important than how it works, or in simpler terms: it’s the sizzle, not the sausage.

We were all disappointed with our app idea being shelved, as we knew we’d developed a great concept, but wondered if the focus of the research had been on the wrong thing, the machine learning and not the benefits of using the app. For me, the big surprise was how sceptical people seemed to feel about knowingly letting artificial intelligence into their lives, and how they instinctively seemed to distrust it.

The shock of the new

It doesn’t help that our cultural expectations of artificial intelligence and intelligent computers are almost completely negative, from ‘Skynet’ becoming self-aware and ending the world in The Terminator, to HAL 9000 going haywire in 2001: A Space Odyssey. AI has a point to prove if it wants to stop being the go-to bad guy in science fiction films, and instead be seen as a force for good in the world – but I’m not sure that advertisers are helping the situation. Every day, the news is full of stories of advertisers coming up with more ways of automatically profiling people without their consent (a technique referred to as ambient personalisation). From bins that track people’s movements by their smartphones and supermarkets using automated facial recognition on their customers, to banner ads chasing us around the web, a debate is raging about how people can opt out of this endless profiling and regain their right to anonymity. The joke is often on marketeers who, despite often talking about clever use of ‘Big Data’, so often get it wrong and end up delivering adverts or recommendations of little or no real relevance to the consumer. By delivering minimal value through such controversial means, they further undermine the public’s confidence in ‘intelligent’ technology.

Above all, we need to prove to consumers that AI is useful, and stop freaking them out with it. At World Usability Day last month, Oli Shaw (@olishaw) did a talk entitled ‘You don’t know me better than I know myself’, which focused on the negative aspects of ambient personalisation and machine learning. He talked a lot about an invisible line in the sand with ‘advice’ at one end and ‘parenting’ at the other. ‘Advice’ being useful and helpful, and ‘parenting’ being a frustrating experience where limits are set on what you can do. In order to deliver a good user experience, we need to keep on the ‘advice’ side of the equation and keep the user in control of what is happening. AI and ambient personalisation can provide a useful service, but this can easily stray into territory that feels more like a creepy stalker. Perhaps his most surprising example was how the US retailer Target, through consumer profiling, worked out that a teen girl was pregnant before her father did. Ironically, this rather unnerving story is a good example of an advertiser delivering targeted ads via AI, but to consumers this feels like the worst possible ‘Big Brother’ scenario.

Oli also went on to discuss the problem of living in a ‘filter bubble’ – the resultant state of an over-personalised web experience. By having our news feeds and search results constantly tailored to our preferences with AI, we can miss out on experiencing things outside our bubble. Paradoxically, one of the things I appreciate when flicking through traditional printed media like a newspaper is that I get exposed to things I’m not necessarily interested in or perhaps don’t even agree with; it helps give me a rounded view of the world. When the newspaper only shows us the things that we are already interested in, we miss out on discovering new things through our blinkered view.

A force for good?

If clever technology like AI is so scary, and doesn’t even guarantee to deliver value for the user, then why bother? Well, because like anything, it’s got the potential to be really useful if used in the right way. Like all good design, good AI doesn’t draw attention to itself, it just works, and when it works well it delights the user.

I recently got a new phone, and after setting it up I noticed something interesting. Handily, all the photos from my old phone were there, via an automated cloud backup, but some of them had a little magic wand icon on them. On closer inspection, these were all new photos, created from my existing images: some had the exposure corrected and contrast tweaked, others had been merged into a panorama or arranged into a grid, and some had even been turned into animated GIFs. They were brilliant, and it was the perfect example of giving a little bit more to delight your customers. This magic had been done by ‘Auto Awesome’, a feature that uses Google’s computer vision and machine learning technologies to automatically understand similarities in photos, decipher their context, and act on this understanding by creating new images that it thinks the user might like.

Example of an ‘auto-awesomed’ photo sequence

Other great examples of AI in action include the Facebook news feed, which uses machine learning to trim the 1,500 updates that the average Facebook user could see down to the 30–60 updates deemed most appropriate to the user. Without this use of AI, Facebook would be a mess and would fail to deliver value to its users. Facebook edits what you see based on what it knows about you, but crucially, it also keeps the user in control by providing options to edit who or what you see in your news feed. Automatic facial recognition within Facebook photos makes common tasks, such as tagging friends, quicker and easier for users – again achieved using AI.
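The feed-trimming idea can be sketched in a few lines. This is a deliberately naive toy (Facebook’s real ranking is far more sophisticated, and the `relevance` scores here are an assumption standing in for a learned model): score each update by predicted relevance, then keep only the top few.

```javascript
// Toy feed ranker: keep only the most relevant updates.
// 'relevance' stands in for the output of a real machine-learned model.
function buildFeed(updates, limit = 60) {
  return updates
    .slice() // copy, so the caller's array isn't reordered
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, limit);
}

const updates = [
  { id: 1, relevance: 0.9 },
  { id: 2, relevance: 0.1 },
  { id: 3, relevance: 0.6 },
];
buildFeed(updates, 2); // → updates 1 and 3, in that order
```

The real engineering effort, of course, lives in producing good relevance scores, not in the trimming itself.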

These examples show that AI can improve user experience, by responding to users’ needs and making time-consuming tasks quicker and easier. AI is all around us, we just don’t often realise it. It’s in speech recognition, translation, video games, social networks, search engines, and many other things that affect our daily lives. It’s even controlling the stock exchange, and helping airports run safely. In many cases, AI outperforms the human brain, as it’s less likely to make mistakes. Intelligent computing is only going to become more important and prevalent in our lives, so it’s important that we use it in the right way: to make sure it helps, rather than stifles us.

3 simple rules for great AI

Thinking about the morals of AI reminded me of The Three Laws of Robotics, devised by the science fiction author Isaac Asimov. Written in 1942, they are an early attempt at dealing with the moral implications of intelligent automated machines. The Three Laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Inspired by this, I’ve devised my own three rules of good AI design:

  1. Always get permission from the user: in order to build trust, data should be voluntarily given up by the user, otherwise it may feel like they are being watched or spied on. In order to gain permission from the user, you need to explain the benefits of them doing so. It should be a mutual contract between the user and the software. For example, if I provide Google Now with my location, it’ll show me cool stuff nearby. As people want more for less in their lives, the incentive should be in the benefits of your product.
  2. Always keep the user in control: allow your users to break out of their filter bubbles, keep them in control and offer them choice. Always remember to offer useful advice through your service, but don’t stray over the line into ‘parenting’ by patronising, dictating to, or obscuring information or options from the user. Sometimes even just the illusion of control might be enough.
  3. Always deliver value for the user: good AI delights by offering more in unexpected ways. Keep your side of the contract with the user by delivering real value for them through the data you collect. Enhance user experience on the fly by being aware of the user’s context and ever-changing needs.

As an industry, if we all design with these simple rules in mind, AI will cement itself as something that the public trust and love, rather than something they fear.

As a final thought, here’s a glimpse into the future. The image below shows what Google’s self-driving car can ‘see’, as it processes 1GB of data every second. Proof we are living in the future…


Posted in AI, Futurology, UX

Touchless interaction – why the future is about letting go


It seems like just the other day we were discussing the move away from the mouse to the touchscreen, but such is the current pace of technological change and innovation, that now the talk is of a ‘post touch screen’ world. New technologies are pushing the boundaries of what is possible without touching or clicking an interface – this gives us the opportunity to create more natural-feeling interactions than ever before, in scenarios that wouldn’t have previously been possible.

This blog post outlines some of the main technologies and interactions on offer, and how we can all benefit from them today.

From GUI to NUI

Since the dawn of computing we have had to adapt our natural human behaviour in order to interact with computers. Hunching over and poking at buttons on our glowing mobile and desktop computer screens, interacting with a graphical user interface (GUI) has become synonymous with modern life.

But the GUI is a workaround, albeit a very clever one, that enables us to control a computing device. It’s a compromise between how a computing system works and what a human can easily understand; by abstracting the inner workings of a system into a set of buttons and other controls, we can command it to do what we want it to do. As much as interaction designers like myself try to ease the pain by designing user-friendly GUIs, controlling a computing device using a GUI still feels less natural than other activities in our day.

But what if computers and their software were designed to fit in more with natural human behaviour, rather than the other way around? For years this hasn’t really been possible, largely because of the limitations of technology, but now a new generation of sensors and intelligent software means that natural interactions are more possible than ever, and we no longer need to rely solely on touching or clicking a GUI on a screen.

Such seamless interactions are often referred to as natural user interfaces (NUIs), and they could just sound the death knell for the GUI.

Can’t touch this

Touchless interaction doesn’t just open up the possibilities for more natural interaction, however; it also provides a way of interacting during those times when you can’t touch a GUI – perhaps you have messy hands in the kitchen, are driving a car, are holding a baby, or are a surgeon carrying out an operation. With touchless interfaces we are now able to control a device and enjoy its benefits in nearly any scenario, many of which would not have been practical before. It also means that we can now interact with devices that have very small screens (where GUI interaction is particularly problematic), or even devices that have no screen at all. From talking to smart watches to making physical hand gestures at your TV, it’s the dawn of a new age of computing, where connected devices are even more integrated into our lives.

So how can we control a device without touching or clicking it? Witchcraft? Smoke and mirrors?

Most of the methods available involve sensors that are already built into modern smartphones, such as the camera, microphone, accelerometer, gyroscope, and wireless communication technologies such as Bluetooth or NFC. Below is a rundown of four key opportunities for interacting with a device without touching or clicking it, and some examples of their implementation.

1. Voice

Talking – it’s one of the main methods that we use to communicate with each other, so interacting with a device using your voice seems a natural step for software designers. Human speech and language are highly complex and hard for a machine to interpret, so although the technology has existed in embryonic form for a long time, it’s only relatively recently that it has become a serious contender for touchless interaction.

When Apple acquired Siri and built it into the iPhone 4S, voice interaction came of age – for the first time, people seriously considered talking to their computing devices instead of touching them. What’s particularly interesting about Siri is that it uses a natural language user interface, allowing the user to have a conversation with the device. Apple gives the following example: ‘When you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighbourhood.’ It allows for a more natural, conversational interaction. Although Siri isn’t perfect, it’s made huge inroads into that tricky beast: intelligent software capable of understanding human language.
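The context-carrying behaviour in Apple’s example can be sketched as a toy state machine. This is emphatically not Siri’s implementation – real natural language understanding is vastly harder – but it shows the core trick: a follow-up question like ‘How about tacos?’ only makes sense because the assistant remembers the previous intent.

```javascript
// Toy conversational assistant: remembers the last intent so that
// vague follow-up questions can be resolved in context.
function createAssistant() {
  let lastIntent = null; // remembered conversational context

  return function ask(utterance) {
    const q = utterance.toLowerCase();
    if (/burger|restaurant|joint/.test(q)) {
      lastIntent = "find-restaurants";
      return "I found a number of burger restaurants near you.";
    }
    const followUp = q.match(/how about (\w+)/);
    if (followUp && lastIntent === "find-restaurants") {
      // 'tacos' on its own is meaningless; the stored intent disambiguates it.
      return `Here are some ${followUp[1]} places nearby.`;
    }
    return "Sorry, I didn't catch that.";
  };
}

const ask = createAssistant();
ask("Any good burger joints around here?");
ask("Hmm. How about tacos?"); // "Here are some tacos places nearby."
```

Without the `lastIntent` state, the second question would fall through to ‘Sorry, I didn’t catch that’ – which is exactly how pre-Siri voice interfaces felt.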

Siri and other natural language interfaces such as Google Now move our relationship with computers away from typing long search queries and endless button clicking, and replace it with the kind of conversations that we might have with each other.

Voice control can be particularly useful in certain scenarios: for example, when driving a car (Apple is working with various car manufacturers to integrate Siri into cars), or in the kitchen when your hands are mucky (the Nigella Lawson recipe iPad app allows you to move through ingredients and recipe instructions with voice commands). As anyone who has used a touchscreen keyboard knows, it’s hard work and it’s easy to make mistakes, so voice input can offer welcome relief from keyboard input – providing that it works flawlessly.

On the web, the Web Speech API gives modern browsers speech recognition capabilities. Combined with voice output from the system (iOS 7’s Safari supports the speech synthesis part of the API), it’s now possible for a user to have a conversation with a website. The possibilities are endless, particularly on mobile, and uses like language translation and dictation are just the start.
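A minimal recognition setup looks something like the sketch below. The browser API is guarded (Chrome exposes it with a `webkit` prefix, and support varies), and the transcript-picking helper is plain data-in, data-out, so it behaves the same anywhere:

```javascript
// Sketch: minimal speech recognition using the Web Speech API.
// Guarded so it only runs where a browser actually provides the API.
const SpeechRecognitionCtor =
  typeof window !== "undefined" &&
  (window.SpeechRecognition || window.webkitSpeechRecognition);

// Pure helper: pick the transcript the recogniser is most confident about.
function bestTranscript(alternatives) {
  return alternatives.reduce((best, alt) =>
    alt.confidence > best.confidence ? alt : best
  ).transcript;
}

if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.lang = "en-GB";
  recognition.onresult = (event) => {
    // Each result carries one or more alternatives with confidence scores.
    const alts = Array.from(event.results[0]);
    console.log("You said:", bestTranscript(alts));
  };
  recognition.start(); // the browser will prompt for microphone permission
}
```

Note that recognition requires user permission for the microphone, so like all voice interfaces it only works when the user has been sold on the benefit of granting it.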

Voice control is also growing in importance for a more practical reason: as technology gets ever smaller and more disseminated around our homes, there’s a need to control it in new ways. A device may be too small to have a screen or keyboard, and buttons may be impractical. For instance, Google Glass can be controlled through voice commands, which is, in theory, easier than fiddling around with the tiny buttons that live on the side of the device. Through the voice command ‘OK Glass’, the device starts listening and the user can then use natural language voice interactions to control it. Google have also employed this ‘always listening’ feature in a recent phone, the Moto X, which can be woken from sleep and used freely with voice commands, without the user ever going near the device, as demonstrated in this advert. The growing trend for smart watches such as the Samsung Galaxy Gear and the long-rumoured Apple iWatch further underlines how technology is getting ever smaller, and requires ever more sophisticated voice controls.

Voice-based interaction does have one downside which I feel is a barrier – using it in public. Personally, I feel very self-conscious telling my phone what to do, but I’m sure this is something that will change as it becomes more and more common.

2. Accelerometer and gyroscope

Two sensors that are built into nearly all smartphones and tablets are the three-axis accelerometer and the gyroscope, which can be used to detect rotation and movement of the device. Usually used to switch between portrait and landscape mode based on screen orientation, they also have the potential for touchless interaction. One example is the Google Maps mobile app – if you angrily shake your phone whilst using the app, it opens up a dialogue box asking if you would like to give feedback, presuming that something went wrong. iOS has several accelerometer-based interactions built into the operating system, including ‘shake to undo’, where a shake movement gives the option of undoing the last thing you typed.

Good gestures feel natural, as they relate to things that we do already, such as shaking your phone in frustration when it’s not working, or resetting something with a shake. More subtle uses of the accelerometer and gyroscope can also feel like a natural way of controlling a device, e.g. tilting and rotating it to steer in a driving game. Parallax.js reacts to the orientation of your device, allowing for tilt-based interactions in the web browser.
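Shake detection of the kind behind ‘shake to undo’ can be sketched as a simple threshold check. This is a toy version under assumed values (real implementations filter gravity and tune thresholds per device); samples are `{x, y, z}` accelerometer readings in m/s², which on the web arrive via the `devicemotion` event:

```javascript
// Toy shake detector: flag a shake when the magnitude of acceleration
// exceeds a threshold for several consecutive samples.
function makeShakeDetector(threshold = 15, requiredSpikes = 3) {
  let spikes = 0;
  return function onSample({ x, y, z }) {
    const magnitude = Math.sqrt(x * x + y * y + z * z);
    if (magnitude > threshold) {
      spikes += 1;
    } else {
      spikes = 0; // movement settled, start counting again
    }
    return spikes >= requiredSpikes; // true => treat as a shake gesture
  };
}

const detect = makeShakeDetector();
detect({ x: 0, y: 9.8, z: 0 });  // false: just gravity, phone at rest
detect({ x: 20, y: 5, z: 3 });   // false: first violent movement
detect({ x: -18, y: 9, z: 2 });  // false: second
detect({ x: 22, y: 1, z: 4 });   // true: three spikes in a row – a shake
```

Requiring several consecutive spikes is what stops a single bump (dropping the phone on a sofa, say) from triggering an accidental ‘undo’.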

3. Cameras and motion sensors

Another opportunity for touchless interaction is by using imaging sensors such as a camera to interpret the world around the device. If a device can ‘see’, then it can offer new modes of interaction, such as physical gestures, motion tracking and facial recognition.

Facial recognition can be used as a security feature, eliminating the need for passwords and making unlocking a phone faster and more natural – just look at your phone and you can use it, no need for a passcode (although both this and Touch ID fingerprint scanners have been shown to be vulnerable to hackers). Samsung released a myriad of (albeit gimmicky) hands-free interactions with the Galaxy S4, including: ‘Eye Scroll’ (eye tracking technology that allows the user to scroll through a page just by moving their eyes), ‘Tilt Scroll’ (scroll a page by tilting the device), and ‘Air Gestures’ (control the phone using hand gestures). Whilst playing a video on the device, looking away from the screen can automatically pause the video you are watching, then resume it when you look back.

In the living room, Smart TVs are getting smarter by offering a whole host of touchless interactions. No longer do you need to hunt for the remote; instead, the built-in camera recognises your face and logs you in to your profile (giving you instant access to your favourite content). Hand gestures can then control TV functions – swiping to navigate and grabbing to select – and finally a voice command turns your TV off when you are done. It might sound like something from Tomorrow’s World, but amazingly it’s all possible with a £500 TV you can buy on the high street today.

Much of this technology was first brought into our homes by games consoles, such as the Nintendo Wii and Microsoft’s Kinect for the Xbox 360. The Kinect features a combination of 3D depth sensors, an RGB camera and a multi-array microphone, which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. These allow the user to control and interact with the console without touching a games controller. It was a game-changer that has brought touchless interaction into 24 million living rooms.

The Kinect hasn’t just been great for the living room, however; its software development kit (SDK) has allowed programmers to use the Kinect’s sensors in a wide variety of scenarios outside home entertainment. ‘Kinect Hacks’ are posted almost daily on the internet, ranging from art installations, music videos and live musical performances to controlling robots and even vascular surgery. This thriving ‘hacking’ community helps showcase the possibilities of touchless interaction.

PrimeSense, the makers of the Kinect’s motion capture sensors, released an interesting video (warning: very cheesy) to showcase Capri, their next generation of 3D sensors, which they claim will bring precise, fast motion tracking to everything – from laptops and TVs to elevators, robots and appliances everywhere. It looks like ‘Kinect Hacks’ are going mainstream.

Leap Motion is a 3D motion control device that purports to have 200 times greater sensitivity than Kinect. It’s a tiny device that can be plugged into any computer with a USB connection and enables touchless gesture interactions on the desktop, similar to the Kinect. We recently got one at The Real Adventure – it’s amazingly accurate and intuitive, and has real applications (beyond the initial sensation that you are Harry Potter or in Minority Report) and its own ‘app store’ full of tools and games to play around with. Autodesk have created a plugin so that you can use the Leap Motion to control Maya, a piece of industry-standard 3D graphics and animation software. By using the Leap’s sophisticated 3D sensors, you can manipulate 3D shapes on the screen using your hands, in a way reminiscent of a traditional model maker or sculptor – a natural interaction with all the benefits of using computer software (the ‘undo’ button being one).

Other motion controllers include: The MYO armband, which lets you use the electrical activity in your muscles to wirelessly control your computer and other devices. Haptix is a sensor that turns any flat surface into a 3D multitouch surface. Although not technically a touchless interaction, it’s another example of how we are moving away from traditional screens.

By giving a computer ‘eyes’, cameras and other motion sensors offer new types of interactions, from face recognition to gesture control. They open up the opportunity of interaction in new spaces, and provide a truly personalised experience. The next step will likely be a better understanding of context: by understanding who you are, where you are, what you are doing, and maybe even the emotions you are experiencing, computers will be able to use this contextual information to automatically adapt to our needs.

4. Bluetooth and NFC

Wireless communication technologies such as Bluetooth and Near Field Communication (NFC) allow for interactions based on proximity and location. For a long time now, we’ve been promised both jetpacks and contactless payments through our mobile phones. Like the personal jetpack, it’s never quite happened – but it’s close. Whereas countries like Japan have been embracing NFC-based payments for years, they’ve never quite caught on in the West, partly because Apple has never incorporated NFC into the iPhone. But now, Bluetooth low energy (BLE) is being touted as the next big thing for contactless payments. By using BLE beacon transmitters and geo-fencing, consumers can be identified, along with their precise location. Apple markets this approach as iBeacon, and because BLE chips ship in most modern phones, support should be much wider than for NFC – it might just change the way we do commerce on the high street. Instead of queuing at the till to pay, the future may see our accounts automatically getting debited as we leave the store, and loyalty rewards offered based on our location history. Many see commerce without tills as the ultimate touchless interaction. Walk into a shop, order a sandwich, eat, walk out. No fiddling around with card machines and long queues for the checkout, as PayPal’s new Beacon solution demonstrates.
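The proximity part of a beacon interaction is usually estimated from signal strength. Here’s a hedged sketch using the standard log-distance path-loss model: `txPower` is the calibrated RSSI at one metre (beacons broadcast this value), and the environment factor `n` (~2 in free space) and the threshold values are illustrative assumptions, not a real deployment:

```javascript
// Estimate distance to a BLE beacon from received signal strength (RSSI),
// using the log-distance path-loss model.
function estimateDistanceMetres(rssi, txPower = -59, n = 2) {
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// A geo-fencing decision can then be a simple threshold on that estimate.
function isNearTill(rssi, radiusMetres = 2) {
  return estimateDistanceMetres(rssi) <= radiusMetres;
}

estimateDistanceMetres(-59); // 1 metre: signal matches the 1 m calibration
isNearTill(-75);             // false: roughly 6 metres away
```

In practice RSSI is noisy, so real systems smooth readings over time before deciding that a customer has actually walked up to the till.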

Bump, recently acquired by Google, is an app that allows two smartphone users to physically bump their phones together to transfer contact information, photos, and files to each other over the Internet. When bumping their phones, software sends a variety of sensor data to an algorithm running on Bump servers, which includes the location of the phone, accelerometer readings, IP address, and other sensor readings. The Bump algorithm figures out which two phones felt the same physical bump and then transfers the information between those phones. Bump makes transfers through software, while devices with Near Field Communication (NFC) chips transfer data through software and hardware. It’s an impressively complex use of various technologies, which all work together in sequence to deliver a really simple, touchless interaction for the user.
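The server-side matching idea can be illustrated with a toy pairing function. This is not Bump’s actual algorithm (which also uses accelerometer signatures and IP addresses); it just shows the core heuristic: two events count as the same bump if they happened at nearly the same time and nearly the same place. The tolerance values are made-up assumptions.

```javascript
// Toy bump matcher: pair up events that are close in both time and space.
function matchBumps(events, maxTimeDeltaMs = 250, maxDistanceKm = 0.05) {
  const pairs = [];
  const used = new Set();
  for (let i = 0; i < events.length; i++) {
    for (let j = i + 1; j < events.length; j++) {
      if (used.has(i) || used.has(j)) continue;
      const a = events[i], b = events[j];
      const dt = Math.abs(a.timeMs - b.timeMs);
      // Crude flat-earth distance (~111 km per degree); fine at bump scales.
      const dKm = 111 * Math.hypot(a.lat - b.lat, a.lon - b.lon);
      if (dt <= maxTimeDeltaMs && dKm <= maxDistanceKm) {
        pairs.push([a.phone, b.phone]);
        used.add(i);
        used.add(j);
      }
    }
  }
  return pairs;
}

matchBumps([
  { phone: "A", timeMs: 1000, lat: 51.4545, lon: -2.5879 },
  { phone: "B", timeMs: 1100, lat: 51.4545, lon: -2.5880 },
  { phone: "C", timeMs: 9000, lat: 48.8566, lon: 2.3522 },
]);
// → [["A", "B"]]  (C bumped at a different time and place)
```

The extra sensor data Bump collects exists precisely to break ties when two unrelated pairs bump in the same café at the same moment.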

Wireless communication technologies are touchless by nature, but often activating them at the right time is the clumsy bit. Technologies like Bluetooth LE and NFC allow for data to be securely exchanged and managed behind the scenes, leaving people to get on with their lives. Exchanging files can become quick and seamless, and in retail, long queues can become a thing of the past with loyalty properly rewarded, improving customer retention.

Wrapping up

So there you have it: four methods that we can use to create touchless interactions – voice; accelerometer and gyroscope; camera; and wireless communication. By replacing or enhancing our graphical user interfaces with touchless interactions, we can make our products and services easier to use, more natural-feeling, and available in new environments, such as the living room or the car. As with anything new, there are mistakes to avoid – for instance, Samsung’s touchless innovations on the phone feel more like gimmicks than solutions to any real user need. A touchless interaction needs to solve a problem and enhance a product or service, rather than complicate it. But where they are employed well, such as in the Bump app, touchless interactions make a product or service more pleasant to use, and therefore make the consumer more likely to use it.

Now, why not use your Leap Motion to interact with the sharing buttons below, or leave a comment using your browser’s voice input feature?

Posted in apps, Futurology, mobile, technology, UI, Usability, UX, wearable

Ten tips for successful Calls to Action

Calls to Action best practices

How to win friends and influence people

The ‘Call to Action’ (or CTA) is a ubiquitous marketing term used to describe a graphic or piece of text that prompts a user to take a desired next step. In print, a CTA could be a graphic at the bottom of a piece of direct marketing inviting the reader to dial a Freephone number. In the digital world, Calls to Action are more often than not buttons that prompt the user to interact with them in some way.

Among the digital marketer’s closest allies, the humble CTA button is relied on time and time again for mission critical purposes such as enticing users to find out more information about a service, join a CRM programme, donate to charity, or make a purchase. We’ve all fallen victim to a Call to Action at some point, clicking that ‘proceed to checkout’ button to buy some shiny new trainers, or tapping a ‘find out more’ banner and subsequently signing up for a newsletter. The success of a CTA is often closely scrutinised – and rightly so for such a key tool in the digital world – so it’s important to understand what makes a CTA a success or failure.

Continuing on from my article on link usability from a few months back, I’ve dug out some research and added my own thoughts to create a set of guidelines for making your CTA buttons effective.

1. Maintain a clear hierarchy

It’s important to give your Call to Action the weighting that it deserves and make it stand out on the page. This may sound obvious but all too often web pages use a single button style and important features end up getting lost amongst the page furniture. Simply put, a key button that you are measuring success against (such as ‘sign up’ or ‘checkout’) should never have the same weighting as the other buttons on the page.

Ideally, a webpage should have three distinct button styles, ranging from the most important, primary Call to Action, through to a slightly less important secondary button style, down to utility-style buttons that don’t need to stand out, but do need to exist on the page. Prominence can be given to buttons using a combination of size, white space, use of font and colour. The most prominent primary button style should be reserved for a single, key CTA per page – for example, Google uses a striking red button for ‘sign in’, while Twitter uses an eye-catching yellow lozenge for their ‘sign up for Twitter’ call to action.

Multiple CTAs with the same weighting on a page will compete with each other and fail to offer a clear path forward for the user, so it’s better to use a small number of distinct actions, and make it obvious to the user what you want them to do next.

Example of a clear CTA hierarchy in the Sold app

2. Supporting text is not a crutch

A clear Call to Action should function without the need to read all the text around it. Remember, users scan read the web, no one reads copy word for word, no matter how much effort you’ve put into writing it. So the wording of a CTA should make sense in isolation and explain what it links to. This is not only usability best practice, it also enhances your search engine ranking and improves the accessibility of your website.

3. Communicate value

Never make your visitors ask “What’s in it for me?”, because if they can’t implicitly see the reason for clicking your CTA, they’re unlikely to bother doing so. The Apple website’s product pages are a great example of how to successfully communicate value in CTAs. By adding a few more words, they clearly communicate to the user both what will happen when they click the Call to Action, and the value in doing so. So a link will say ‘Learn more about the design of the iPhone 5 >’ rather than simply ‘Learn more >’. Incidentally, these longer wordings have led to them using text links rather than buttons, but successful use of white space combined with colour keeps them prominent.

There is, of course, a balance to be made between concise wording and communicating value or explaining what will happen next. It’s always best to be as succinct as possible; too much wording will make your CTAs harder to scan read, less punchy and will affect conversion rates.

A clear Call to Action that communicates value on the Apple website

4. Clarity

Copywriting for the web is a real art, and writing the perfect Call to Action is a huge challenge. CTAs need to be succinct, but most of all they need to be clear. There can be a temptation to take the button copywriting down an emotional, storytelling route that follows a brand’s tone of voice, but I’d recommend leaving this style of writing for the supporting copy. Instead, keep Call to Action wording as straightforward and as clear as possible. Users need clarity about what exactly will happen when a button is clicked, or it will lead to confusion and a sub-par user experience.

We highlighted this issue in some recent usability research carried out here at The Real Adventure, which found that users were confused by CTA wording that didn’t explicitly signpost them to what they were looking for. Luckily, this was easily fixed, but it’s something we should all avoid in future. This article also shows how small changes to CTA copy can improve clarity, and in turn positively affect conversion rates.

Example of unclear button wording, image sourced from UX matters (‘Label buttons with what they do’).

5. Verbs get the job done

Verbs are integral to writing successful CTA copy because they encourage users to take action. Strong action verbs such as ‘call’, ‘watch’, ‘buy’, ‘register’, ‘compare’, ‘shop’ and ‘download’ will grab people’s attention and encourage them to act. Examples of action verbs in CTA copy include ‘Register now’, ‘Watch the video’ or ‘Buy now for £19.99’.

6. The gentle art of persuasion

Sure, it’s not pretty, but there are times when it’s necessary to ratchet up the sales patter a little, and aggressively go after that sign up or purchase. An example might be on a pay-per-click landing page where you only have limited time to convince your visitor to do something, in which case you can employ various techniques to make your Call to Action more compelling and increase its conversion rate.

Words that increase a sense of urgency, immediacy, ease and value will all help here. Urgency can be achieved by generating tension and excitement for your reader, employing CTA wording such as ‘now’, ‘immediately’, ‘today’ or ‘quick’. Immediacy can be reinforced by placing emphasis on time and availability, so you can stress that an offer is ‘for a limited time only’, ‘ends in 2 hours’, or that there are ‘only 5 items left’.

An emphasis on ease might mean spelling out that signing up for an account ‘only takes 60 seconds’, alleviating users’ concerns that it will be difficult or time consuming. Showing value could mean placing emphasis on a service being free or on special offer, removing perceptions of it being costly. By creating a sense of urgency and relieving users’ concerns, they should be more inclined to take action.

7. The secondary nudge

A secondary line of copy, positioned close to your CTA, can support your primary message. It can be used to alleviate users’ concerns about a service (as highlighted in point 6), or to feature a benefit that will encourage conversion. But as with any web copy, there’s no guarantee that it will be read, due to how people scan read, so it should never be relied on. It can also add to the visual clutter of the page, so you need to be sure it’s definitely adding value. That said, a simple line under your CTA like ‘it’s free and only takes 30 seconds’ could be the difference between conversion and abandonment.

A line of copy to support a ‘Sign up with Facebook’ CTA on the Foursquare website

8. Icons can help

Icons can enhance a CTA by communicating a bit of extra information. A right-pointing chevron or arrow can help increase urgency and imply positive forward movement, a left arrow implies going back, a padlock icon suggests a secure process, and a cog suggests some ‘under the bonnet’ tinkering. In nearly all cases, graphics shouldn’t be relied on; they only exist to enhance the CTA text.

A secure padlock icon as used on the Lloyds TSB website

9. Don’t disappoint

Once the user has acted on your CTA, make sure you deliver whatever you promised them and carry it through, otherwise you’ll lose them. Fast.

10. Test, measure and refine

The only way to really understand which design and word combinations yield the best results is to try them out on your audience, using A/B and multivariate testing to see what works. Then test and refine again; an iterative approach will pay off in the long term. See the whichtestwon website for examples of testing strategies and results.
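To make the ‘measure’ step concrete, here is a minimal sketch of judging whether a variant CTA’s click-through rate genuinely beats the control, using a standard two-proportion z-test. The click and view figures, the button labels and the 5% significance threshold are all made-up placeholders.

```python
import math

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Control 'Sign up' vs variant 'Create my free account' (illustrative figures)
z, p = two_proportion_z_test(clicks_a=120, views_a=4000,
                             clicks_b=165, views_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Variant B's lift is statistically significant")
```

Most testing tools do this arithmetic for you, but knowing what the p-value means stops you declaring a winner on noise.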

Sell, sell, sell!

So there you go – ten ways to make your CTAs more user-friendly. There are exceptions to every rule, but I hope that you find these guidelines useful. Now it’s over to you to go and create that million dollar CTA button. If you’ve got any more tips for creating great Calls to Action, please share them in the comments below.
