AI: horror or delight?

The UX of AI

Artificial intelligence (AI), as you no doubt already know, is an umbrella term for intelligent software and computing. For software to be deemed intelligent, it needs to be capable of things like reasoning, knowledge, planning, learning, communication and perception; in essence, it needs to be more human. As computers get ever faster, and we move another step closer to technological singularity, AI is becoming more and more commonplace and achievable, even within modest budgets. But as AI slowly takes over, how do people feel about this ‘new’ technology, and what impact is it having on user experience?

Don’t mention the war

In some research carried out on an app idea we’d developed, users were put off by AI, the app’s killer feature. ‘It learns about you’, they were told, ‘and based on that learning, it gives you great suggestions’. We were unlucky that we didn’t get to present the work ourselves, and perhaps we would have worded it rather more sensitively, but nonetheless, the users were turned off by the ‘learning’ part of the equation. They said that the thought of it learning about them was a bit creepy and it might put them off using the app. So, with that, another idea was confined to the shelf of history.

Of course, every day we use services that do learn about us, but for the most part we just don’t realise it, or if we do know, we don’t care because we’ve already been sold on the benefits. If we knew how other services we use learn about us, maybe we’d call them creepy too. But, as consumers, we don’t buy things for the way that they work, we buy products – products that we believe will improve our lives. To a user, Facebook is a space to share things with your friends, not a complex series of machine learning techniques that attempts to mimic how your brain works. It’s actually both things, my point being that the perceived value of your product is more important than how it works, or in simpler terms: it’s the sizzle, not the sausage.

We were all disappointed with our app idea being shelved, as we knew we’d developed a great concept, but wondered if the focus of the research had been on the wrong thing, the machine learning and not the benefits of using the app. For me, the big surprise was how sceptical people seemed to feel about knowingly letting artificial intelligence into their lives, and how they instinctively seemed to distrust it.

The shock of the new

It doesn’t help that our cultural expectations of artificial intelligence and intelligent computers are almost completely negative, from ‘Skynet’ becoming self-aware and ending the world in The Terminator, to HAL 9000 going haywire in 2001: A Space Odyssey. AI has a point to prove if it wants to stop being the go-to bad guy in science fiction films, and instead be seen as a force for good in the world – but I’m not sure that advertisers are helping the situation. Every day, the news is full of stories of advertisers coming up with more ways of automatically profiling people without their consent (a technique referred to as ambient personalisation). From bins that track people’s movements by their smartphones and supermarkets using automated facial recognition on their customers, to banner ads chasing us around the web, a debate is raging about how people can opt out of this endless profiling and regain their right to anonymity. The joke is often on marketeers who, for all their talk of clever use of ‘Big Data’, so often get it wrong and end up delivering adverts or recommendations of little or no real relevance to the consumer. By delivering minimal value through such controversial means, they only further undermine the public’s confidence in ‘intelligent’ technology.

Above all, we need to prove to consumers that AI is useful, and stop freaking them out with it. At World Usability Day last month, Oli Shaw (@olishaw) gave a talk entitled ‘You don’t know me better than I know myself’, which focused on the negative aspects of ambient personalisation and machine learning. He talked a lot about an invisible line in the sand with ‘advice’ at one end and ‘parenting’ at the other. ‘Advice’ being useful and helpful, and ‘parenting’ being a frustrating experience where limits are set on what you can do. In order to deliver a good user experience, we need to keep on the ‘advice’ side of the equation and keep the user in control of what is happening. AI and ambient personalisation can provide a useful service, but this can easily stray into territory that feels more like a creepy stalker. Perhaps his most surprising example was how the US retailer Target, through profiling of purchase data, worked out that a teen girl was pregnant before her father did. Ironically, this rather unnerving story is a good example of an advertiser delivering targeted ads via AI, but to consumers this feels like the worst possible ‘Big Brother’ scenario.

Oli also went on to discuss the problem of living in a ‘filter bubble’ – the resultant state of an over-personalised web experience. By having our news feeds and search results constantly tailored to our preferences with AI, we can miss out on experiencing things outside our bubble. Paradoxically, one of the things I appreciate when flicking through traditional printed media like a newspaper is that I get exposed to things I’m not necessarily interested in or perhaps don’t even agree with; it helps give me a rounded view of the world. When the newspaper only shows us the things that we are already interested in, we miss out on discovering new things through our blinkered view.

A force for good?

If clever technology like AI is so scary, and doesn’t even guarantee to deliver value for the user, then why bother? Well, because like anything, it’s got the potential to be really useful if used in the right way. Like all good design, good AI doesn’t draw attention to itself, it just works, and when it works well it delights the user.

I recently got a new phone, and after setting it up I noticed something interesting. Handily, all the photos from my old phone were there, via an automated cloud backup, but some of them had a little magic wand icon on them. On closer inspection, these were all new photos, created from my existing images, but some had the exposure corrected and contrast tweaked, others had been merged into a panorama, arranged into a grid and some even turned into animated gifs. They were brilliant, and it was the perfect example of giving a little bit more to delight your customers. This magic had been done by ‘Auto Awesome’, a feature that uses Google’s computer vision and machine learning technologies to automatically understand similarities in photos, decipher their context, and act on this understanding by creating new images that it thinks the user might like.

Example of an ‘auto-awesomed’ photo sequence

Other great examples of AI in action include the Facebook news feed, which uses machine learning to trim the 1,500 updates that the average Facebook user could see down to the 30–60 updates deemed most relevant to them. Without this use of AI, Facebook would be a mess and would fail to deliver value to its users. Facebook edits what you see based on what it knows about you, but crucially, it also keeps the user in control by providing options to edit who or what you see in your news feed. Automatic facial recognition within Facebook photos makes common tasks, such as tagging friends, quicker and easier for users – again achieved using AI.
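Facebook’s real ranking algorithm is proprietary and far more sophisticated, but the basic idea – score each story, then keep only the best – can be sketched in a few lines of Python. All of the field names and weights below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Update:
    author: str
    affinity: float   # how often the user interacts with this author (0-1)
    weight: float     # richness of the story type, e.g. photo > status (0-1)
    age_hours: float  # how old the update is

def score(u: Update) -> float:
    # Older stories decay; closer friends and richer content score higher.
    time_decay = 1.0 / (1.0 + u.age_hours)
    return u.affinity * u.weight * time_decay

def trim_feed(updates, limit=60):
    # Keep only the highest-scoring stories, best first.
    return sorted(updates, key=score, reverse=True)[:limit]

feed = [
    Update("close friend", affinity=0.9, weight=0.8, age_hours=2),
    Update("acquaintance", affinity=0.2, weight=0.5, age_hours=1),
    Update("close friend", affinity=0.9, weight=0.3, age_hours=48),
]
best = trim_feed(feed, limit=2)
```

The crucial UX point survives even in this toy version: the scoring happens invisibly, but the inputs (who you interact with, what you hide) remain under the user’s control.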

These examples show that AI can improve user experience by responding to users’ needs and making time-consuming tasks quicker and easier. AI is all around us, we just don’t often realise it. It’s in speech recognition, translation, video games, social networks, search engines, and many other things that affect our daily lives. It’s even controlling the stock exchange, and helping airports run safely. At well-defined, repetitive tasks, AI can outperform the human brain, as it’s faster and less prone to mistakes. Intelligent computing is only going to become more important and prevalent in our lives, so it’s important that we use it in the right way: to make sure it helps, rather than stifles, us.

3 simple rules for great AI

Thinking about the morals of AI reminded me of The Three Laws of Robotics, devised by the science fiction author Isaac Asimov. Written in 1942, they are an early attempt at dealing with the moral implications of intelligent automated machines. The Three Laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Inspired by this, I’ve devised my own three rules of good AI design:

  1. Always get permission from the user: in order to build trust, data should be voluntarily given up by the user, otherwise it may feel like they are being watched or spied on. In order to gain permission from the user, you need to explain the benefits of them doing so. It should be a mutual contract between the user and the software. For example, if I provide Google Now with my location, it’ll show me cool stuff nearby. As people want more for less in their lives, the incentive should be in the benefits of your product.
  2. Always keep the user in control: allow your users to break out of their filter bubbles, keep them in control and offer them choice. Always remember to offer useful advice through your service, but don’t stray over the line into ‘parenting’ by patronising, dictating to, or obscuring information or options from the user. Sometimes even just the illusion of control might be enough.
  3. Always deliver value for the user: good AI delights by offering more in unexpected ways. Keep your side of the contract with the user by delivering real value for them through the data you collect. Enhance user experience on the fly by being aware of the user’s context and ever-changing needs.

As an industry, if we all design with these simple rules in mind, AI will cement itself as something that the public trust and love, rather than something they fear.

As a final thought, here’s a glimpse into the future. The image below shows what Google’s self-driving car can ‘see’, as it processes 1GB of data every second. Proof we are living in the future…


Posted in AI, Futurology, UX

Touchless interaction – why the future is about letting go


It seems like just the other day we were discussing the move away from the mouse to the touchscreen, but such is the current pace of technological change and innovation, that now the talk is of a ‘post touch screen’ world. New technologies are pushing the boundaries of what is possible without touching or clicking an interface – this gives us the opportunity to create more natural-feeling interactions than ever before, in scenarios that wouldn’t have previously been possible.

This blog post outlines some of the main technologies and interactions on offer, and how we can all benefit from them today.

From GUI to NUI

Since the dawn of computing we have had to adapt our natural human behaviour in order to interact with computers. Hunching over and poking at buttons on our glowing mobile and desktop computer screens, interacting with a graphical user interface (GUI) has become synonymous with modern life.

But the GUI is a workaround, albeit a very clever one, that enables us to control a computing device. It’s a compromise between how a computing system works and what a human can easily understand; by abstracting the inner workings of a system into a set of buttons and other controls, we can command it to do what we want it to do. As much as interaction designers like myself try to ease the pain by designing user-friendly GUIs, controlling a computing device using a GUI still feels less natural than other activities in our day.

But what if computers and their software were designed to fit in more with natural human behaviour, rather than the other way around? For years this hasn’t really been possible, largely because of the limitations of technology, but now a new generation of sensors and intelligent software means that natural interactions are more possible than ever, and we no longer need to rely solely on touching or clicking a GUI on a screen.

Such seamless interactions are often referred to as natural user interfaces (NUIs), and they could just sound the death knell for the GUI.

Can’t touch this

Touchless interaction doesn’t just open up the possibilities for more natural interaction however, it also provides a way of interacting during those times when you can’t touch a GUI – perhaps you have messy hands in the kitchen, are driving a car, holding a baby or are a surgeon carrying out an operation. With touchless interfaces we are now able to control a device and enjoy its benefits in nearly any scenario, many of which would not have been practical before. It also means that we can now interact with devices that have very small screens (where GUI interaction is particularly problematic), or even devices that have no screen at all. From talking to smart watches to making physical hand gestures at your TV, it’s the dawn of a new age of computing, where connected devices are even more integrated into our lives.

So how can we control a device without touching or clicking it? Witchcraft? Smoke and mirrors?

Most of the methods available involve sensors that are already built into modern smartphones, such as the camera, microphone, accelerometer, gyroscope, and wireless communication technologies such as Bluetooth or NFC. Below is a rundown of four key opportunities for interacting with a device without touching or clicking it, and some examples of their implementation.

1. Voice

Talking – it’s one of the main methods that we use to communicate with each other, so interacting with a device using your voice seems a natural step for software designers. Human speech and language are highly complex and hard for a machine to interpret, so although the technology has existed in embryonic form for a long time, it’s only relatively recently that it has become a serious contender for touchless interaction.

When Apple acquired Siri and built it into the iPhone 4S, voice interaction came of age – for the first time, people seriously considered talking to their computing devices instead of touching them. What’s particularly interesting about Siri is that it uses a natural language user interface, allowing the user to have a conversation with the device. Apple gives the following example: ‘When you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighbourhood.’ It allows for a more natural, conversational interaction. Although Siri isn’t perfect, it’s made huge inroads into that tricky beast: intelligent software capable of understanding human language.

Siri and other natural language interfaces such as Google Now move our relationship with computers away from typing long search queries and endless button clicking, and replace it with the kind of conversations that we might have with each other.
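The context carry-over in Apple’s burger-and-tacos example is the key trick: the assistant remembers the topic of the previous question and uses it to resolve an otherwise ambiguous follow-up. A toy sketch in Python – the intents and canned replies are invented, and real systems use statistical language understanding rather than keyword matching:

```python
class Assistant:
    """Keeps track of the last topic so follow-up questions make sense."""

    def __init__(self):
        self.context = None  # e.g. 'restaurants'

    def ask(self, utterance: str) -> str:
        text = utterance.lower()
        if "burger" in text or "restaurant" in text:
            self.context = "restaurants"
            return "I found a number of burger restaurants near you."
        if "tacos" in text:
            # On its own, 'tacos' is ambiguous; the saved context resolves it.
            if self.context == "restaurants":
                return "Here are some Mexican restaurants in the neighbourhood."
        return "Sorry, I'm not sure what you mean."

siri = Assistant()
first = siri.ask("Any good burger joints around here?")
second = siri.ask("Hmm. How about tacos?")
```

Asked cold, with no prior question, “How about tacos?” gets the fallback reply – which is exactly the difference between a search box and a conversation.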

Voice control can be particularly useful in certain scenarios – for example, when driving a car (Apple is working with various car manufacturers to integrate Siri into cars), or in the kitchen when your hands are mucky (the Nigella Lawson recipe iPad app lets you move through ingredients and recipe instructions with voice commands). As anyone who has used a touchscreen keyboard knows, it’s hard work and it’s easy to make mistakes, so voice input can offer a welcome relief from keyboard input – providing that it works flawlessly.

On the web, HTML5 introduces the Web Speech API, which gives modern web browsers speech recognition capabilities. Combined with voice output from the system (Safari on iOS 7 supports speech synthesis), it’s now possible for a user to have a conversation with a website. The possibilities are endless, particularly on mobile, and uses like language translation and dictation are just the start.

Voice control is also growing in importance for a more practical reason. As technology gets ever smaller and more disseminated around our homes, there’s a need to control it in new ways. A device may be too small to have a screen or keyboard, and buttons may be impractical. For instance, Google Glass can be controlled through voice commands, which is, in theory, easier than fiddling around with the tiny buttons that live on the side of the device. Through the voice command ‘OK Glass’, the device starts listening and the user can then use natural language voice interactions to control the device. Google have also employed this ‘always listening’ feature in a recent phone, the Moto X, which can be woken up from sleep and used freely with voice commands, without the user ever going near the device, as demonstrated in this advert. The growing trend for smart watches such as the Samsung Galaxy Gear and the long-rumoured Apple iWatch further underlines how technology is getting ever smaller, and requires ever more sophisticated voice controls.

Voice-based interaction does have one downside which I feel is a barrier – using it in public. Personally, I feel very self-conscious telling my phone what to do, but I’m sure this is something that will change as it becomes more and more common.

2. Accelerometer and gyroscope

Two sensors that are built into nearly all smartphones and tablets are the three-axis accelerometer and the gyroscope, which can be used to detect rotation and movement of the device. Usually used to switch between portrait and landscape mode based on screen orientation, they also have the potential for touchless interaction. One example is the Google Maps mobile app – if you angrily shake your phone whilst using the app, it opens up a dialogue box asking if you would like to give feedback, presuming that something went wrong. iOS has several accelerometer-based interactions built into the operating system, including ‘shake to undo’, where a shake movement gives the option of undoing the last thing you typed.

Good gestures feel natural, as they relate to things that we do already, such as shaking your phone in frustration when it’s not working, or resetting something with a shake. More subtle uses of the accelerometer and gyroscope can also feel like a natural way of controlling a device, e.g. tilting and rotating your phone to steer in a driving game. Parallax.js reacts to the orientation of your device, allowing for tilt-based interactions in the web browser.
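Under the hood, a ‘shake’ gesture is usually detected by watching the magnitude of the acceleration vector and counting how often it spikes well above gravity. A rough sketch – the threshold, peak count and sample values below are invented for illustration:

```python
import math

GRAVITY = 9.81          # m/s^2
SHAKE_THRESHOLD = 2.5   # treat anything above 2.5 g as a shake peak

def magnitude(sample):
    # Convert a raw (x, y, z) accelerometer reading into g-force.
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z) / GRAVITY

def is_shake(samples, min_peaks=3):
    # A deliberate shake produces several high-g peaks in quick succession,
    # which distinguishes it from a single knock or an accidental drop.
    peaks = sum(1 for s in samples if magnitude(s) > SHAKE_THRESHOLD)
    return peaks >= min_peaks

resting = [(0.0, 0.0, 9.81)] * 10                       # phone on a table: ~1 g
shaking = [(30.0, 0.0, 9.81), (-28.0, 5.0, 9.81)] * 3   # vigorous movement
```

Requiring several peaks rather than one is what makes the gesture feel intentional: the phone on the table never fires, and a single bump doesn’t either.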

3. Cameras and motion sensors

Another opportunity for touchless interaction is by using imaging sensors such as a camera to interpret the world around the device. If a device can ‘see’, then it can offer new modes of interaction, such as physical gestures, motion tracking and facial recognition.

Facial recognition can be used as a security feature, eliminating the need for passwords and making unlocking a phone faster and more natural – just look at your phone and you can use it, no need for a passcode (although both this and Touch ID fingerprint scanners have been shown to be vulnerable to hackers). Samsung released a myriad of (albeit gimmicky) hands-free interactions with the Galaxy S4, including: ‘Eye Scroll’ (eye tracking technology that allows the user to scroll through a page just by moving their eyes), ‘Tilt Scroll’ (scroll a page by tilting the device), and ‘Air Gestures’ (control the phone using hand gestures). While a video is playing on the device, looking away from the screen can automatically pause it, resuming playback when you look back.

In the living room, Smart TVs are getting smarter by offering a whole host of touchless interactions. No longer do you need to hunt for the remote, instead the built-in camera recognises your face and logs you in to your profile (giving you instant access to your favourite content). Hand gestures can then control TV functions by swiping to navigate and grabbing to select, finally a voice command turns your TV off when you are done. It might sound like something from Tomorrow’s World, but amazingly it’s all possible from a £500 TV you can buy on the high street today.

Much of this technology was first brought into our homes by games consoles and their peripherals, such as the Nintendo Wii and Microsoft’s Kinect for the Xbox 360. The Kinect features a combination of 3D depth sensors, an RGB camera, and a multi-array microphone, which together provide full-body 3D motion capture, facial recognition and voice recognition capabilities. These allow the user to control and interact with the console without touching a games controller. It was a game-changer that has brought touchless interaction into 24 million living rooms.

The Kinect hasn’t just been great for the living room, however: its software development kit (SDK) has allowed programmers to use the Kinect’s sensors in a wide variety of scenarios outside home entertainment. ‘Kinect Hacks’ are posted almost daily on the internet, ranging from art installations, music videos and live musical performance to robot control and even vascular surgery. A thriving community like the Kinect ‘hacking’ community helps showcase the possibilities of touchless interaction.

PrimeSense, the makers of the Kinect’s motion capture sensors, released an interesting video (warning: very cheesy) to showcase Capri, their next generation of 3D sensors, which they claim will bring precise, fast motion tracking to everything – from laptops and TVs to elevators, robots and appliances everywhere. It looks like ‘Kinect Hacks’ are going mainstream.

Leap Motion is a 3D motion control device that purports to have 200 times greater sensitivity than Kinect. It’s a tiny device that can be plugged into any computer with a USB connection and enables touchless gesture interactions on the desktop, similar to the Kinect. We recently got one at The Real Adventure – it’s amazingly accurate and intuitive, and has real applications (beyond the initial sensation that you are Harry Potter or in Minority Report) and its own ‘app store’ full of tools and games to play around with. Autodesk have created a plugin so that you can use the Leap Motion to control Maya, a piece of industry-standard 3D graphics and animation software. By using the Leap’s sophisticated 3D sensors, you can manipulate 3D shapes on the screen using your hands, in a way reminiscent of a traditional model maker or sculptor – a natural interaction with all the benefits of using computer software (the ‘undo’ button being one).

Other motion controllers include: The MYO armband, which lets you use the electrical activity in your muscles to wirelessly control your computer and other devices. Haptix is a sensor that turns any flat surface into a 3D multitouch surface. Although not technically a touchless interaction, it’s another example of how we are moving away from traditional screens.

By giving a computer ‘eyes’, cameras and other motion sensors offer new types of interactions, from face recognition to gesture control. They open up the opportunity of interaction in new spaces, and provide a truly personalised experience. The next step will likely be a better understanding of context: by understanding who you are, where you are, what you are doing, and maybe even the emotions you are experiencing, computers will be able to adapt automatically to our needs.

4. Bluetooth and NFC

Wireless communication technologies such as Bluetooth and Near Field Communication (NFC) allow for interactions based on proximity and location. For a long time now, we’ve been promised both jetpacks and contactless payments through our mobile phones. Like the personal jetpack, it’s never quite happened – but it’s close. Whereas countries like Japan have been embracing NFC-based payments for years, it’s never quite caught on in the West, partly because Apple has never incorporated NFC into the iPhone. But now, Bluetooth low energy (BLE) is being touted as the next big thing for contactless payments. By using BLE beacon transmitters and geo-fencing, consumers can be identified, along with their precise location. BLE chips ship in most modern phones (Apple brands its implementation iBeacon), so support should be much wider than for NFC, and it might just change the way we do commerce on the high street. Instead of queuing at the till to pay, the future may see our accounts automatically debited as we leave the store, and loyalty rewards offered based on our location history. Many see commerce without tills as the ultimate touchless interaction. Walk into a shop, order a sandwich, eat, walk out. No fiddling around with card machines and long queues for the checkout, as PayPal’s new Beacon solution demonstrates.
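How does a BLE beacon know roughly how far away you are? A common approach is to estimate distance from received signal strength (RSSI) using a log-distance path-loss model, then bucket the noisy result into coarse zones, much as iBeacon-style APIs do. A simplified sketch – the calibration values here are typical assumptions, and real deployments calibrate them per environment:

```python
def estimate_distance(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """Estimate distance in metres from received signal strength.

    tx_power is the calibrated RSSI at 1 metre (around -59 dBm is a
    commonly assumed value); n is the path-loss exponent (2.0 in free
    space, higher indoors where walls and bodies absorb signal).
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def zone(rssi: float) -> str:
    # Map the noisy distance estimate to the coarse proximity zones
    # beacon APIs typically expose, rather than a false-precision number.
    d = estimate_distance(rssi)
    if d < 0.5:
        return "immediate"
    if d < 4.0:
        return "near"
    return "far"
```

Exposing only ‘immediate’, ‘near’ and ‘far’ is itself a UX decision: the raw estimate jitters too much to promise centimetre accuracy.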

Bump, recently acquired by Google, is an app that allows two smartphone users to physically bump their phones together to transfer contact information, photos, and files to each other over the Internet. When bumping their phones, software sends a variety of sensor data to an algorithm running on Bump servers, which includes the location of the phone, accelerometer readings, IP address, and other sensor readings. The Bump algorithm figures out which two phones felt the same physical bump and then transfers the information between those phones. Bump makes transfers through software, while devices with Near Field Communication (NFC) chips transfer data through software and hardware. It’s an impressively complex use of various technologies which all work together in sequence to deliver a really simple, touchless interaction for the user.
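Bump’s server-side matching algorithm isn’t public, but the core idea described above – pairing the two phones that felt the same physical bump – can be sketched roughly as finding two events that are close in both time and space. All thresholds and coordinates below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class BumpEvent:
    phone_id: str
    timestamp: float  # seconds since epoch, reported by the phone
    lat: float
    lon: float

MAX_CLOCK_SKEW = 0.25    # seconds: both halves of one bump are near-simultaneous
MAX_DISTANCE_DEG = 1e-4  # roughly 10 metres in latitude/longitude terms

def find_pair(events):
    # Compare every pending event against every other; a real system
    # would index by time window and region rather than brute force.
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            close_in_time = abs(a.timestamp - b.timestamp) < MAX_CLOCK_SKEW
            close_in_space = (abs(a.lat - b.lat) < MAX_DISTANCE_DEG
                              and abs(a.lon - b.lon) < MAX_DISTANCE_DEG)
            if close_in_time and close_in_space:
                return (a.phone_id, b.phone_id)
    return None
```

Note that neither time nor location alone is enough – two phones bumped at the same instant in different cities must not match, which is why both checks are required.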

Wireless communication technologies are touchless by nature, but often activating them at the right time is the clumsy bit. Technologies like Bluetooth LE and NFC allow for data to be securely exchanged and managed behind the scenes, leaving people to get on with their lives. Exchanging files can become quick and seamless, and in retail, long queues can become a thing of the past with loyalty properly rewarded, improving customer retention.

Wrapping up

So there you have it: four methods that we can use to create touchless interactions – voice; accelerometer and gyroscope; camera; and wireless communication. By replacing or enhancing our graphical user interfaces with touchless interactions, we can make our products and services easier to use, feel more natural, and become available in new environments, such as the living room or the car. As with anything new, there are mistakes to avoid – for instance, Samsung’s touchless innovations on the Galaxy S4 feel more like gimmicks than solutions to any real user need. A touchless interaction needs to solve a problem and enhance a product or service, rather than complicate it. But where they are employed well, such as in the Bump app, touchless interactions make a product or service more pleasant to use, and therefore make the consumer more likely to use it.

Now, why not use your Leap Motion to interact with the sharing buttons below, or leave a comment using your browser’s voice input feature?

Posted in apps, Futurology, mobile, technology, UI, Usability, UX, wearable

Ten tips for successful Calls to Action

Calls to Action best practices

How to win friends and influence people

The ‘Call to Action’ (or CTA) is a ubiquitous marketing term used to describe a graphic or piece of text that prompts a user to take a desired next step. In print, a CTA could be a graphic at the bottom of a piece of direct marketing inviting the reader to dial a Freephone number. In the digital world, Calls to Action are more often than not buttons that prompt the user to interact with them in some way.

Among the digital marketer’s closest allies, the humble CTA button is relied on time and time again for mission critical purposes such as enticing users to find out more information about a service, join a CRM programme, donate to charity, or make a purchase. We’ve all fallen victim to a Call to Action at some point, clicking that ‘proceed to checkout’ button to buy some shiny new trainers, or tapping a ‘find out more’ banner and subsequently signing up for a newsletter. The success of a CTA is often closely scrutinised – and rightly so for such a key tool in the digital world – so it’s important to understand what makes a CTA a success or failure.

Continuing on from my article on link usability from a few months back, I’ve dug out some research and added my own thoughts to create a set of guidelines for making your CTA buttons effective.

1. Maintain a clear hierarchy

It’s important to give your Call to Action the weighting that it deserves and make it stand out on the page. This may sound obvious but all too often web pages use a single button style and important features end up getting lost amongst the page furniture. Simply put, a key button that you are measuring success against (such as ‘sign up’ or ‘checkout’) should never have the same weighting as the other buttons on the page.

Ideally, a webpage should have three distinct button styles, ranging from the most important, primary Call to Action, through to a slightly less important secondary button style, down to utility-style buttons that don’t need to stand out, but do need to exist on the page. Prominence can be given to buttons using a combination of size, white space, use of font and colour. The most prominent primary button style should be reserved for a single, key CTA per page – for example, Google uses a striking red button for ‘sign in’, while Twitter uses an eye-catching yellow lozenge for their ‘sign up for Twitter’ call to action.

Multiple CTAs with the same weighting on a page will compete with each other and fail to offer a clear path forward for the user, so it’s better to use a small number of distinct actions, and make it obvious to the user what you want them to do next.

Example of a clear CTA hierarchy in the Sold app

2. Supporting text is not a crutch

A clear Call to Action should function without the need to read all the text around it. Remember, users scan-read the web – no one reads copy word for word, no matter how much effort you’ve put into writing it. So the wording of a CTA should make sense in isolation and explain what it links to. This is not only usability best practice, it also enhances your search engine ranking and improves the accessibility of your website.

3. Communicate value

Never make your visitors ask “What’s in it for me?”, because if they can’t immediately see the reason for clicking your CTA, they’re unlikely to bother doing so. The Apple website’s product pages are a great example of how to successfully communicate value in CTAs. By adding a few more words, they clearly communicate to the user both what will happen when they click the Call to Action, and the value in doing so. So a link will say ‘Learn more about the design of the iPhone 5 >’ rather than simply ‘Learn more >’. Incidentally, these longer wordings have led to them using text links rather than buttons, but successful use of white space combined with colour keeps them prominent.

There is, of course, a balance to be struck between concise wording and communicating value or explaining what will happen next. It’s always best to be as succinct as possible; too much wording will make your CTAs harder to scan-read, less punchy, and will hurt conversion rates.

A clear Call to Action that communicates value on the Apple website

4. Clarity

Copywriting for the web is a real art, and writing the perfect Call to Action is a huge challenge. CTAs need to be succinct, but most of all they need to be clear. There can be a temptation to take the button copywriting down an emotional, storytelling route that follows a brand’s tone of voice, but I’d recommend leaving this style of writing for the supporting copy. Instead, keep Call to Action wording as straightforward and as clear as possible. Users need clarity about what exactly will happen when a button is clicked, or it will lead to confusion and a sub-par user experience.

We highlighted this issue in some recent usability research carried out here at The Real Adventure, which found that users were confused by CTA wording that didn’t explicitly signpost them to what they were looking for. Luckily, this was easily fixed, but it’s something we should all avoid in future. This article also shows how small changes to CTA copy can improve clarity, and in turn positively affect conversion rates.

Example of unclear button wording, image sourced from UXmatters (‘Label buttons with what they do’).

5. Verbs get the job done

Verbs are integral to writing successful CTA copy because they encourage users to take action. Strong action verbs such as ‘call’, ‘watch’, ‘buy’, ‘register’, ‘compare’, ‘shop’ and ‘download’ will grab people’s attention and encourage them to act. Examples of action verbs in CTA copy include ‘Register now’, ‘Watch the video’ or ‘Buy now for £19.99’.

6. The gentle art of persuasion

Sure, it’s not pretty, but there are times when it’s necessary to ratchet up the sales patter a little, and aggressively go after that sign up or purchase. An example might be on a pay-per-click landing page where you only have limited time to convince your visitor to do something, in which case you can employ various techniques to make your Call to Action more compelling and increase its conversion rate.

Words that increase a sense of urgency, immediacy, ease and value will all help here. Urgency can be achieved by generating tension and excitement for your reader, employing CTA wording such as ‘now’, ‘immediately’, ‘today’ or ‘quick’. Immediacy can be reinforced by placing emphasis on time and availability, so you can stress that an offer is ‘for a limited time only’, ‘ends in 2 hours’, or that there are ‘only 5 items left’.

An emphasis on ease might mean spelling out that signing up for an account ‘only takes 60 seconds’, alleviating users’ concerns that it will be difficult or time consuming. Showing value could mean placing emphasis on a service being free or on special offer, removing perceptions of it being costly. By creating a sense of urgency and relieving users’ concerns, they should be more inclined to take action.

7. The secondary nudge

A secondary line of copy, positioned close to your CTA, can support your primary message. It can be used to alleviate users’ concerns about a service (as highlighted in point 6), or to feature a benefit that will encourage conversion. But as with any web copy, there’s no guarantee that it will be read due to how people scan-read, so it should never be relied on. It can also add to the visual clutter of the page, so you need to be sure it’s definitely adding value. That said, a simple line under your CTA like ‘it’s free and only takes 30 seconds’ could be the difference between conversion and abandonment.

A line of copy to support a ‘Sign up with Facebook’ CTA on the Foursquare website

8. Icons can help

Icons can enhance a CTA by communicating a bit of extra information. A right-pointing chevron or arrow can help increase urgency and imply positive forward movement, a left arrow implies going back, a padlock icon suggests a secure process, and a cog suggests some ‘under the bonnet’ tinkering. In nearly all cases, graphics shouldn’t be relied on; they only exist to enhance the CTA text.

A secure padlock icon as used on the Lloyds TSB website

9. Don’t disappoint

Once the user has acted on your CTA, make sure you deliver whatever you promised them and carry it through, otherwise you’ll lose them. Fast.

10. Test, measure and refine

The only way to really understand which design and word combinations yield the best results is to try them out on your audience using A/B and multivariate testing to see what works. Then test and refine again; an iterative approach will pay off in the long term. See the whichtestwon website for examples of testing strategy and results.
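To make the testing advice concrete, here’s a minimal, illustrative sketch of how you might judge whether one CTA wording genuinely outperformed another in an A/B test, using a two-proportion z-test. The visitor and conversion numbers below are invented for the example, not from any real test.

```python
# Illustrative two-proportion z-test for comparing two CTA variants.
# All figures below are made up for the example.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-tailed p-value for the difference
    in conversion rate between variant A and variant B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 'Learn more' vs a more descriptive, value-led wording
z, p = z_test(conv_a=120, n_a=4000, conv_b=165, n_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 suggests a real difference
```

A proper testing tool will do this for you, but the principle is the same: don’t declare a winner until the difference is bigger than chance would explain.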

Sell, sell, sell!

So there you go – ten ways to make your CTAs more user-friendly. There are exceptions to every rule, but I hope that you find these guidelines useful. Now it’s over to you to go and create that million dollar CTA button. If you’ve got any more tips for creating great Calls to Action, please share them in the comments below.

Posted in signage, strategy, UI, UX

Tour de Creston 2013

Tour De Creston

Last Friday, a band of fellow Creston employees and I embarked on an epic 185-mile cycling adventure from Bristol to London, as part of the inaugural Tour de Creston.

Tour de Creston - setting off

The Grand Départ was from One Glass Wharf, home of The Real Adventure and EMO, where cyclists of all abilities – from occasional commuters to competitive rouleurs – gathered to take on the challenge. We were cheered on by a crowd of supportive colleagues outside the office, a theme that ran throughout the tour, with Creston employees helping each other along the whole way.

The sun was shining as we set off and we began the tour in relaxed style on the easy-going Bristol and Bath Railway Path. After briefly getting lost trying to find the route out of Bath (thanks Pete) we did a Weller and found ourselves ‘going underground’, entering the longest cycle tunnel in Europe, a cavernous mile-long hole burrowing through the hills to the south of Bath. This was memorable not only because of the refreshing change in temperature, but also because of the dinner party music being piped through speakers.

After a quick stop at the least welcoming pub in England, the pace was cranked up a notch as we attempted to chase EMO’s formidable MD Peter Brown up the biggest climb of the day, Midford Hill, a drag up to the village of Hinton Charterhouse. We then settled down into small groups and worked our way out through the Wiltshire countryside, past the Westbury White Horse, up and over Salisbury plain and on to Stonehenge where we would stop for the night in Amesbury. It was an amazing sight to see 30 or so cyclists snaking through the hills, all wearing the same Creston team kit, whilst two Jags in full Tour de Creston livery followed us as support cars. It was as close to feeling like I’m in Team Sky as I’m likely to get.

Tour de Creston - on the road

We wound down the day in a nearby pub, where I witnessed more than a few people eat two main meals, one straight after the other, and then tuck in to dessert. Hungry work, clearly.

Day Two saw us up early and tucking into a Little Chef ‘Olympic Breakfast’ (as eaten by Jessica Ennis). We then tried to convince our already stiff and achy legs that another day in the saddle (a total of 95 miles) was a good idea. We didn’t get off to the best of starts when the route took us down a muddy track that was full of sharp flint, tearing a few tyres right open.

New tyres fitted and punctures fixed, we rolled into Hampshire and on to the office of Marketing Sciences in Winchester, where we were warmly welcomed with a cake selection that would make Mary Berry’s eyes water. Refuelled, it was time to get back on the bikes before the lactic acid built up in our legs; we still had the small matter of another 70 miles in the saddle, and things were about to get hilly.

Marketing Sciences

The next stretch was a scenic meander through the South Downs National Park, and after a quick break in Petersfield, the real slog began with the amount of downhill never quite matching the amount of uphill on offer. We eventually came out the other side to more gentle terrain, before the route played its trump card, Leith Hill – the biggest climb of the tour – 80 miles into Day Two. It’s a climb nasty enough to be featured in the legendary ‘100 Greatest Cycling Climbs’ book, and posed an interesting challenge to those with tired legs. Leith Hill done (along with the odd expletive), we enjoyed the view at the top and bolted down the other side into Dorking, and on to our final destination, Leatherhead.

It felt great to have survived Day Two. By this point we all knew that the hard work was done, and we relaxed in the knowledge that Day Three would be the ‘easy one’. After a few victory pints and a tasty Italian meal it was off to bed for a well-deserved rest.

Well deserved pint

Day Three started off with breakfast in the local Wetherspoons (yes we are classy), and before I could so much as digest a sausage, it was time to take on Box Hill. Famous as the venue for the 2012 London Olympic road race event, Box Hill is now the native habitat of the lesser spotted polka dot MAMIL, with legions of plump men and women whizzing up and down it. The Creston gang showed the locals how hill climbing is really done, with a dizzying display of dancing on the pedals and some rabid handlebar chewing.

Having claimed glory it was time for the home stretch, an easy run in to London via Richmond Park and along the Thames. Once past the Houses of Parliament and Buckingham Palace we reached our final destination – Creston HQ in Soho, after a total of 185 miles. Creston put on a fantastic rooftop BBQ for us and, after a whistle stop tour of the very impressive new TMW office, we all sat back and reflected on our achievements for a minute or two.

Nearly there!

It was an incredible effort all round, from those who just rode for an hour to those who completed the entire tour. I witnessed people of all abilities pushing themselves to the limit and smiling the whole way. It was also great to meet members of other Creston agencies and find out more about what they do. Special kudos to my fellow Real Adventurers, Amy and Enrique, who completed the full distance with me; Enrique’s effort was particularly impressive considering he rode the whole thing on a £60 bike from a supermarket! Proof that if you try, you can.

Enrique

A huge thank you to the organisers of the tour who put in so much effort and made it such a professionally organised event, Justin Moody (who selflessly gave up his place cycling to drive the support van at the last minute), the team at EMO for all their hard work, and the volunteers who drove the Jaguar Sportbrakes and saved many a cyclist in their hour of need…thanks so much! A heartfelt thank you also goes to Creston and our respective companies for allowing us to take some time out from work and for keeping us well fed and watered throughout the tour.

A real adventure indeed – bring on Tour de Creston 2014!

If you can, please donate to The MS Trust, the chosen charity of the Tour de Creston and a fantastic cause: http://www.justgiving.com/TourDeCreston2013

Posted in cycling, travel

The rise of the quantified self – eight examples of life logging

Here’s a post I wrote for The Real Adventure blog recently.

UX quantified self and life logging apps

One of the big trends this year is the concept of the ‘quantified self’; a movement where wearable technology and smartphone apps combine to give people detailed insight into their everyday lives. This ‘life logging’ creates a unique opportunity for brands to keep a constant presence in their customers’ lives, by providing them with real value and helping them to improve their wellbeing. This potential has been highlighted by the recent success of the Nike+ FuelBand, which has turned a loss-making division of Nike into a highly lucrative business. Read on to learn about the potential of the quantified self movement, its moral dilemmas, and how it is changing the world around us.

Allez Wiggo!

I recently bought a heart rate sensor. It was the latest step in a steady creep towards logging almost every aspect of my cycling performance.

Now when I go out for a ride I can see my speed, cadence, power output and heart rate to confirm what I knew already: I’m not a professional athlete.

Although it’s a shame that I’m not going to be the next Sir Bradley, by logging the details of my ride and analysing it with software like Strava, I can quantify my progress in increasingly detailed ways and feel motivated by my small successes and continued self-improvement. It’s completely changed my approach to exercise and encourages me to do more of it, which can only be a good thing.

This kind of personalised data collection and analysis was once the domain of people with niche needs and large budgets, like professional athletes (or The Terminator), but in recent years it’s gone mainstream. This is mainly thanks to the proliferation of smartphones and the emergence of cheap, wearable technology such as smart wristbands. This trend has been dubbed ‘The Quantified Self Movement’.

 

What is the quantified self?

Firstly it’s important to note that the concept of ‘the quantified self’ is not restricted to tracking and analysing your exercise regime (although this is an aspect that many of us will have come across). In fact the quantified self is a movement to collect any aspect of a person’s daily life using technology (‘life logging’), and then subsequently present that data back to the user in a useful way.

At this very moment people are logging calories consumed, cholesterol intake, steps walked, locations visited, people met, sleep patterns, heart rate, mood, tasks completed, and more. Once this data has been collected, it’s up to the software to analyse it and present it back in useful ways, for example, answering questions like: “Did I consume fewer calories this week than last week?” “Where was I last Sunday at midday?” and “Have I met my monthly exercise target?” Often an aspect of ‘Gamification’ is added so that you can compete with your peers – for some people bringing out their competitive side is the key to motivation.
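To make the idea concrete, here’s a toy sketch of the kind of question life-logging software answers: “Did I consume fewer calories this week than last week?” The log entries and their structure are invented for illustration; a real app would pull this from a database.

```python
# A toy life-logging query: total calories per ISO week, then compare weeks.
# The log entries are invented for the example.
from collections import defaultdict
from datetime import date

calorie_log = [  # (date, calories) entries as a tracking app might store them
    (date(2013, 5, 6), 2100), (date(2013, 5, 7), 2450), (date(2013, 5, 8), 1980),
    (date(2013, 5, 13), 1900), (date(2013, 5, 14), 2050), (date(2013, 5, 15), 1850),
]

weekly = defaultdict(int)
for day, calories in calorie_log:
    weekly[day.isocalendar()[1]] += calories  # group totals by ISO week number

this_week, last_week = weekly[20], weekly[19]
print("Fewer calories than last week?", this_week < last_week)
```

The hard part of these products isn’t the arithmetic – it’s collecting the data painlessly and presenting the answer in a way that motivates.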

By processing the finer details of our lives there’s an opportunity for technology to help us become healthier and happier people – to become more organised and feel more in control of our lives. In showing us the patterns of our lives, the quantified self is self-awareness through numbers.

 

There’s an app for that

Here are eight striking examples of services that contribute to the quantified self movement.

1. Memoto

Although a lot of the motivation to quantify your life is health related, you may equally be collecting data for different reasons – perhaps to collect memories and not miss what’s going on around you. For example the Kickstarter-funded Memoto is a tiny, wearable camera that automatically takes two photos per minute of your day, each GPS-tagged and time-stamped. Memoto’s tagline is ‘Remember every moment’: what you see – it sees, creating a continuous life log for the wearer. The accompanying Memoto app organises the photos into a timeline, grouped by date and searchable by location and time. It’s an interesting project that raises tricky new questions about privacy – something that Google Glass has got people talking about recently. But this clearly wasn’t a concern to its 2,871 Kickstarter backers, or to the thousands of people who have since pre-ordered it.

2. Lift

Lift is a smartphone app, backed by two Twitter co-founders, that encourages users to change their habits. According to its creators, ‘Lift is a simple way to achieve any goal, track your progress, and get the support of your friends’. Break those bad habits and adopt more positive behaviours instead: turn those pie-in-the-sky New Year resolutions into something trackable and quantifiable. Motivation comes in the form of positive feedback from the app – a big tick and green colours make it feel good when you complete a task. This is combined with social motivation; friends can give you ‘props’ along the way, a Facebook ‘Like’ style thumbs-up – an encouraging pat on the back.

What I like about Lift is that it allows you to track very small things that can make a big difference to your life, things like flossing your teeth, reading a book or drinking more water. Like most people, I’m guilty of having the best of intentions, but the busyness of life tends to get in the way – perhaps an app like this is the motivation I need to make sure it happens. Seeing so clearly that I only flossed twice this month, and not every day as I’d hoped to, might be the trigger for improving my habits.

3. HAPIfork

The surprise talking point of this year’s Consumer Electronics Show (CES) in Las Vegas was an intelligent piece of cutlery. HAPIfork contains sensors that enable it to track how you eat, and alert you if you are eating too fast. It measures how long it took to eat your meal, the number of ‘fork servings’ taken per minute, and the intervals between them, and comes with an app and coaching programme to help improve eating behaviour. It’s a strange concept that’s unlikely to catch on, but it shows how sensors can be incorporated into anything, enabling us to collect data on whatever we want.

4. Jawbone UP

Jawbone make a couple of interesting products including the rather nifty Jambox wireless speaker. Jawbone UP meanwhile, is their attempt at creating a holistic life logging and self-quantification solution, using a smart wristband and accompanying app.

At night the wristband tracks your sleep, with the app waking you up at the perfect point in your sleep cycle. By day, UP tracks your movement, distance walked, calories burned, mood, and food and drink consumed. An ‘idle alert’ reminds you to move around if you’ve been inactive too long, and the ‘insight engine’ visualises your information so you can understand the meaning behind the numbers and discover hidden connections in the way you live. UP gives you messages of encouragement when you meet your goals, positively reinforcing good behaviour. In the words of the manufacturers: “Over time, insights lead to new behaviors and new behaviors become new, healthier habits.”

So if you believe the hype, it’s a one-stop solution to improving your life. But as automated and intelligent as it is, there’s still a decent amount of data entry and admin involved. You need to tell it what you ate, set up the right alarms, and read and act upon its recommendations. It might improve your life, but it doesn’t simplify it – I wonder if, in reality, it creates another thing to manage and keep on top of: a new burden that can create stress and friction. Is this outweighed by the benefits of feeling in control?

5. Fitbit

Fitbit started as an exercise-tracking device, but seems to be evolving into a more holistic life logging solution similar to Jawbone UP. It currently enables you to monitor your exercise, diet and sleep, and manage your weight. The latter is achieved using an additional product – the Aria smart scale, a Wi-Fi-enabled scale that assesses your weight, body fat percentage and BMI. Like all Fitbit devices, it wirelessly syncs your stats with an online graph and mobile tools to show you how you are tracking. The software then allows you to mark your progress, set goals, earn badges and compete with your friends.

The Fitbit Zip, their original exercise-monitoring device, has a friendly, organic form factor, bright colours and a cheeky face. It doesn’t have the cold, masculine or futuristic feel that a lot of this technology suffers from – it might appeal to a middle-aged mum who does Weight Watchers – surely a smart move in helping the device go mainstream.

6. Nike+ Fuelband

The Nike+ FuelBand, by contrast, is a testosterone-fuelled design soaked overnight in a pit of endorphins, and feels much more ‘high performance’. If the Fitbit appeals to those giving a bit of gentle jogging a go, FuelBand may look like it came out of the props department of the Tron movie. The device is a wristband with a built-in accelerometer and beaming LED display, which tracks how much you move and gives you an ongoing numerical score for this activity. You can compete with friends and set goals to beat. When you do beat one, it sends a fireball whizzing around the interface, setting the screen on fire.

The ads read, ‘Life is a sport. Make it count’. So housework, walking and dancing in a nightclub all count towards your daily score. It feels a bit dumbed down compared to the other devices, and squarely aimed at the mass market. It’s a fashion item with youth appeal, and although lots of fun, I doubt it will appeal to those with a more serious interest in fitness or sport. In fact I wonder if it’s actually preventing some people from doing proper exercise, by rewarding them for doing the things they were doing anyway. Users have even reported that a bumpy car journey increases their Fuel Score.

On the other hand – if you were wondering when all this self quantification stuff was going to hit the mainstream – this is it! It’s simple enough for anyone to understand, fun, and not elitist about what it considers ‘exercise’. The sales figures speak volumes: it sold out four hours before it was launched, and Nike Inc.’s Equipment division saw an 18% rise in profits after the Nike+ FuelBand was introduced, compared to a 1% loss the previous year.

7. S Health

Further proof, if needed, that self-quantification is going mainstream was the recent announcement of the Samsung Galaxy S4, likely to become the biggest selling Android phone of 2013. The launch was the usual smartphone stat-off – ‘it’s x nanometers thinner…faster than a CERN supercomputer…has a pointlessly high screen pixel density’ and so on. What caught my eye though, was how Samsung were referring to the phone as a ‘life companion’, and that it was shipping with powerful life logging features via an app called S Health. It can track your mood, food intake, sleep patterns, exercise, and even your weight, blood pressure and blood glucose via some peripheral devices – which makes it sound a bit like having a GP in your pocket. It’s a surprisingly detailed self quantification app, and will no doubt get used by millions of people.

Perhaps a future version of iOS will feature something similar, which seems likely considering Apple CEO Tim Cook is on the board of directors at Nike, who make the FuelBand.

8. Reporter

Quite possibly the king of the ‘quantified self’ is info-addict Nicholas Felton, most famous as the designer of the Facebook Timeline. Felton has long been admired by graphic designers for his ‘Annual Report’ work, in which he attempts to log every tedious detail of his life and then present it as a highly detailed series of infographics. Each year he attempts to record more information about his life than the previous one, and last year he commissioned an app, ‘Reporter’, to help further regiment his data collection. The app prompts him for information every 90 minutes, automatically logging his GPS position and asking him to answer the same questions each time: ‘Where are you?’ ‘Who are you with?’ ‘What are you doing?’ ‘How productive were you today (on a scale of 1-5)?’. And so on.

The fruit of this obsessive life logging is his 2012 Annual Report, which shows almost every detail of his life, including the fact that he drank 1,484 glasses of water and one ginger beer last year. The point of all this? Apart from being a great piece of graphic design, it’s a social experiment, life as art, a visualisation of all those little things that we don’t notice happening. Its banal detail and honesty are an interesting counterpoint to the more carefully curated information that people add to Facebook.

 

What next? Brain implants?

We live in a brave new world. While the idea of recording every aspect of your life used to be considered an Orwellian nightmare, we are now signing up in droves just to be part of it. But what happens to all the data that you collect, where is it held? Is it sold to third parties? What if it got hacked? There’s talk of the ‘end of privacy‘, a new dawn where everyone is recording and sharing every aspect of their lives and there’s no longer a legitimate concept of privacy to hide behind. Google co-founder Larry Page has even mentioned a not so far-off age where we all have brain implants, to make it all that much easier. To the new generation, public sharing is the norm, privacy perhaps more of a worry to an older generation who remember a pre-Facebook world.

But is this really what we want? Each technological step we take forward seems to give us more, but at the price of sacrificing something else.

Self quantification is ultimately a trade-off between knowledge, empowerment and the perceived benefits, versus the time invested and the privacy given up. I really do believe it can have benefits, I’ve witnessed them myself with my cycling activities, and I’m thinking of trying out Lift to track macro-behaviours that I’d like to improve (note to self: remember to floss tonight). The big question I have though, is do these apps free us, or create more stress and burden in our lives?

They tell us more about ourselves, but could ignorance actually be bliss? In sweating the detail of our lives we unearth new problems to worry about, have to manage more, and ultimately become a slave to numbers. The key is motivation – if someone really has a desire to change their behaviour, then these apps can help them achieve it by showing their progress in a tangible manner.

The challenge for us software designers is in creating an experience where we get the trade-off right: make it quick and easy, reward good behaviour, don’t nag, and allow for erratic behaviour (skipping days and so on) – make the software fit into the user’s life, not the other way around.

Time will tell if this is really the future as, in the long run, people will only adopt technology if it’s actually of benefit to them.

 

Do you think life logging and the quantified self movement could be the future? Let us know in the comments below.

Posted in Futurology, mobile, social media, technology, UI, UX, wearable

Five steps to more usable links

Here’s a post I wrote for The Real Adventure blog recently.

Link usability and UX

How one man’s dream became a billion links to kittens

Have you ever had to describe the World Wide Web to someone? Probably not, or at least not since about 15 years ago, when a less technologically savvy family member asked you to.

It has become such a constant part of our lives that it’s easy to forget exactly what the Web is. We’ve come to relate to it in terms of the content we view on it – news, Facebook, funny videos on YouTube, cute pictures of kittens and so on – rather than the medium itself. This is in much the same way that we might think of television in terms of watching EastEnders, the X Factor final or an Adam Curtis documentary, rather than as a box for receiving moving images and sound.

So what is the Web then? Yes it’s Facebook and funny kitten videos, but what is that content fundamentally made up of?

Tim Berners-Lee proclaimed, back in 1991, that the ‘reader view’ of ‘The WWW consists of documents, and links’. Brilliantly simple – no wonder it caught on.

As well as connecting the documents of the Web together, links are its workhorses – they act as signposts, showing us where to go. They build our ranking on search engines, they persuade us to do things, help us learn, connect with each other, and add context and citation.

So, for such a key element, it’s surprising how often they under-perform in terms of usability.

The first rule of Link Club

So what are the rules for using links? How can we make them perform better? As a set of guidelines to help improve the usability of your links, I’ve dug through some research on the topic and added my own thoughts.

First a quick disclaimer – there are many types of links; navigation, calls to action, pagination, buttons and so on, but in this article I’m just talking about the use of links in body copy; the type of content we might all enter into a CMS or blog at some point or other.

1. Never use ‘click’ or ‘here’. Even worse: ‘click here’.

Not only is the phrase ‘click here’ device-dependent (it implies the use of a pointing device such as a mouse), it says nothing about what will be found if the link is followed. Instead of ‘click here’, link text should indicate the nature of the link target, as in ‘more pictures of funny cats’ or ‘text-only version of this page’. Just remember that while some of your readers may be using a mouse, they could equally be using a screen reader, voice control, keyboard or touch screen, or could have printed your web page on to paper.

A general rule is to avoid talking about the mechanics of the Web where possible – if you owned a shop, you’d write ‘Welcome’ on the door, not ‘Open this door to enter the shop.’ Using the word ‘here’ also conceals what the user is clicking – the link should ideally describe what it links to. More on that shortly.
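The ‘click here’ rule even lends itself to an automated check. Below is a rough, hypothetical sketch that flags vague link wording in a list of (text, URL) pairs; the word list, threshold and data structure are my own assumptions, not from any particular CMS or tool.

```python
# A rough heuristic audit for vague link wording, assuming links are held
# as (link text, URL) pairs. The BAD_WORDINGS list is illustrative only.
BAD_WORDINGS = {"click here", "here", "click", "read more", "more"}

def flag_vague_links(links):
    """Return the links whose text says nothing about their target."""
    return [(text, url) for text, url in links
            if text.strip().lower() in BAD_WORDINGS]

links = [
    ("click here", "/cats"),
    ("more pictures of funny cats", "/cats"),
    ("text-only version of this page", "/text"),
]
print(flag_vague_links(links))  # → [('click here', '/cats')]
```

A check like this could run as part of a content review, catching the worst offenders before they’re published.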

2. Be descriptive.

This follows on from Point One – links should describe what they link to, or do. If at all possible, the wording of a link should act as a page title for the content that you are linking to. The user should be in no doubt about what will happen if they click a link, avoiding dashed expectations. We shouldn’t rely on the text around a link to explain it either, as we need to optimise for scan reading (see Point Three).

Another reason to be descriptive is for search engine optimisation (SEO); a descriptive link will affect how that link performs in search engine rankings.

Sometimes we need some extra information in our description – is it a 5 KB or a 50 MB PDF? A 30-second or a 120-minute video? Help avoid nasty surprises.

3. Optimise for scan reading.

A common misconception held by authors is that their audience will read almost every word on the page, but sadly this is far from the truth. According to Jakob Nielsen’s research findings (‘How little do users read?‘) users can be expected to read just 20% of the words on a page. This is because people don’t read on the Web, they scan. Scanning visitors cast their eyes rapidly over a page, picking out pieces of information to build their own mental picture of what the content is about. To the scanning eye, links (being underlined and in a different colour) stand out on the page, which means that their wording will be used to help make those quick decisions on whether to stay or leave the page. So in short – make them count!

Remember that the words you link will give readers a feel for the subjects covered in the page, so be careful not to unintentionally skew this by linking lots from one topic or keyword and not another. Try to balance your links so that a visitor can scan read your page and get a genuine feel for what it is about.

It’s also a good idea to keep the number of links on a page down – too many will disrupt the user’s ability to scan read the page easily, making the copy harder to read, and ultimately losing you visitors.

4. Keep it simple, stupid!

Keeping things simple and familiar to the user is a good way of making them easy to use, and links are no exception. Links should look like links: underlined and clearly differentiated from other text through the use of colour (traditionally blue).

They should have appropriately sized hit areas; this has become more of an issue since the recent proliferation of touch screen devices. How often have you tried opening a link on a smartphone or tablet and accidentally got the one next to it? Links need appropriate spacing around them to avoid this, and hit areas should be large enough to click easily, as dictated by Fitts’s Law.
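Fitts’s Law can be made concrete in a couple of lines. It predicts pointing time from target distance and size as T = a + b·log₂(D/W + 1); the constants a and b below are purely illustrative, since in practice they’re fitted empirically per device and input method.

```python
# Fitts's Law: predicted movement time grows as targets get smaller
# or further away. The constants a and b here are illustrative only.
from math import log2

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds for a target of a given
    width (px) at a given distance (px) from the pointer."""
    return a + b * log2(distance / width + 1)

# Shrinking a link's hit area noticeably increases the predicted time to hit it.
print(f"{fitts_time(400, 60):.2f}s")  # generous hit area
print(f"{fitts_time(400, 15):.2f}s")  # cramped hit area
```

The practical takeaway is the one above: generous hit areas and sensible spacing make links measurably quicker (and less error-prone) to hit, especially on touch screens.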

Links should nearly always open in the same window. The user can choose to open a link in a new window or tab if they like, but you shouldn’t make that decision for them. Don’t remove choice, keep the user in control.

5. Don’t radically alter link behaviour.

As I said at the start, hyperlinks date back to the dawn of the Web, and from then until today we’ve grown used to how they work. So if you decide to alter that behaviour, you are going to confuse people by confounding their expectations. Adding things like adverts that appear on rollover ruins the user experience and gains you very little in return. Please promise me that you won’t use them.

Go forth and link to kittens

So there you go – five ways to make your links more user-friendly. There are exceptions to every rule, but I hope that you find these guidelines useful. If you’ve got any more tips for improving the usability of links, please add a reply below.

Posted in seo, strategy, UI, Usability, UX

Preparing for Birth app

A project that I’ve been involved with has just come to fruition: a new smartphone app that helps mums-to-be get ready for the rather daunting task of giving birth.

The app is designed to help expectant mums feel confident and prepared for labour by giving them practical tools and expert advice, with features such as to-do checklists, timely tips from medical experts and experienced mums, a contraction timer and useful articles. The app also includes a personalised birth announcer so mums can share the news of their new arrival with the world: upload a photo of the newborn, add a message, choose a visual theme and share it with family and friends.

Features and content for the app were identified through user research: talking to recent mums to discover what they would have found helpful during pregnancy, then discussing and validating our ideas with healthcare professionals such as midwives and pregnancy advisors. I ran the UX side of the project, so I was very much instrumental in shaping its features and functionality, as well as designing the UI. It’s been a great project to be involved with – kudos to the brilliant team at The Real Adventure. The app is currently ranked number 3 in the App Store’s Health & Fitness category.

Download the app from the App Store.

Posted in apps, mobile, UI, UX, work

Google Glass

Google have released more details of Glass, the wearable device that adds an augmented reality layer to your vision. The video above shows what the device will be capable of – and ‘how it feels’.

For me this is incredibly exciting: truly frictionless interaction must be the goal of anyone working in UX. The UI looks great too, using the kind of simple, elegant interfaces we’ve been seeing in Google Now and so on. A product as revolutionary as this raises some questions, of course – privacy being the big one. But for now, let’s enjoy the utopian possibilities; dystopia is for when the honeymoon period is over (and when we remember that voice control is terrible to use). Hats off to Google for this one – I’m sold. If I could get one now and it wasn’t crazily priced (they claim it will cost about the same as a smartphone), I would.

The full details (so far) can be found at www.google.com/glass/

Posted in Futurology, mobile, technology, UI, UX, wearable

DBA 2013 Design Effectiveness Awards

Bath Rugby website

Some great news recently: two websites I’ve been involved in have gone on to win DBA Design Effectiveness Awards!

Both websites were worked on during my time at Positive, and both were very much team efforts, but I’d like to hope that my small contribution to these projects helped clinch the wins!

The Bath Rugby suite of websites (main website, mobile, online shop), which I did the IA and UX for (as well as some of the front-end build), won a Gold award, which is fantastic news. It’s an especially good feeling as we had an incredibly tight deadline, so we pretty much killed ourselves to get it done on time! Glad it was worth it. There are some great stats to back up the win: an increase in newsletter subscriptions of 337%, from 8,000 to 35,000; ticket purchasing up 28% per match; and online shop sales up 35%.

Similarly, the Hope and Homes website I developed the IA and UX for scooped a Silver award. It’s great that this is doing so well, as it’s for a charity that changes the lives of children – a very worthy cause. Since launch, the website has returned a 122% increase in individual donations year-on-year, regular giving has increased by 162% and the charity was awarded a £240,000 grant. Visits to the website increased by 69% in the first five months.

The guys have written some great case studies for Bath Rugby and Hope and Homes where you can learn more.

Congratulations to the excellent team at Positive for this well deserved win!

Posted in awards, UI, UX, work

Cloudy with a chance of iPhones – the UX of weather

Here’s a post I wrote for The Real Adventure blog recently.


Although it can feel tediously familiar, our geographical position makes our weather hard to predict – at times it can feel like we experience all four seasons in one day. Deciding what to wear, and even what to do, is a decision best made after consulting your favourite weather source – something we’ve all been doing a lot more of during the recent cold snap.

For some people it’s watching the evening news, for others it’s a website or their trusty iPhone weather app. For me it’s a bit of an obsession – I think it goes back to my childhood in Cornwall when I was constantly checking weather reports to try and work out the latest surf conditions.

Nowadays I find myself glancing at weather reports throughout the day; dipping in and out quickly and often.

I’m not really looking for much detail during these visits, just answers to basic questions like: ‘Will I get soaked on the way home?’, ‘Do I need a warm coat?’ or ‘Should I go on that 60 mile cycle ride on Saturday?’.

Recently I’ve spotted some interesting apps that could help answer these basic questions, whilst describing our weather in new and often beautiful ways.

Continue reading

Posted in apps, Futurology, reviews, technology, UI, UX