Touchless interaction – why the future is about letting go

It seems like just the other day we were discussing the move away from the mouse to the touchscreen, but such is the current pace of technological change and innovation that the talk is now of a ‘post-touchscreen’ world. New technologies are pushing the boundaries of what is possible without touching or clicking an interface, giving us the opportunity to create more natural-feeling interactions than ever before, in scenarios that wouldn’t previously have been possible.

This blog post outlines some of the main technologies and interactions on offer, and how we can all benefit from them today.

From GUI to NUI

Since the dawn of computing we have had to adapt our natural human behaviour in order to interact with computers. Hunching over and poking at buttons on our glowing mobile and desktop computer screens, interacting with a graphical user interface (GUI) has become synonymous with modern life.

But the GUI is a workaround, albeit a very clever one, that enables us to control a computing device. It’s a compromise between how a computing system works and what a human can easily understand; by abstracting the inner workings of a system into a set of buttons and other controls, we can command it to do what we want it to do. As much as interaction designers like myself try to ease the pain by designing user-friendly GUIs, controlling a computing device using a GUI still feels less natural than other activities in our day.

But what if computers and their software were designed to fit in more with natural human behaviour, rather than the other way around? For years this hasn’t really been possible, largely because of the limitations of technology, but now a new generation of sensors and intelligent software means that natural interactions are more possible than ever, and we no longer need to rely solely on touching or clicking a GUI on a screen.

Such seamless interactions are often referred to as natural user interfaces (NUIs), and they could just sound the death knell for the GUI.

Can’t touch this

Touchless interaction doesn’t just open up the possibility of more natural interaction, however; it also provides a way of interacting at those times when you can’t touch a GUI – perhaps you have messy hands in the kitchen, are driving a car, are holding a baby, or are a surgeon carrying out an operation. With touchless interfaces we can now control a device and enjoy its benefits in nearly any scenario, many of which would not have been practical before. It also means that we can interact with devices that have very small screens (where GUI interaction is particularly problematic), or even devices that have no screen at all. From talking to smart watches to making hand gestures at your TV, it’s the dawn of a new age of computing, where connected devices are even more integrated into our lives.

So how can we control a device without touching or clicking it? Witchcraft? Smoke and mirrors?

Most of the methods available involve sensors that are already built into modern smartphones, such as the camera, microphone, accelerometer, gyroscope, and wireless communication technologies such as Bluetooth or NFC. Below is a rundown of four key opportunities for interacting with a device without touching or clicking it, and some examples of their implementation.

1. Voice

Talking is one of the main ways we communicate with each other, so interacting with a device using your voice seems a natural step for software designers. Human speech and language are highly complex and hard for a machine to interpret, so although voice control has existed in rudimentary form for a long time, it’s only relatively recently that the technology has become a serious contender for touchless interaction.

When Apple acquired Siri and built it into the iPhone 4S, voice interaction came of age – for the first time, people seriously considered talking to their computing devices instead of touching them. What’s particularly interesting about Siri is that it uses a natural language user interface, allowing the user to have a conversation with the device. Apple gives the following example: ‘When you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighbourhood.’ It allows for a more natural, conversational interaction. Although Siri isn’t perfect, it has made huge inroads into that tricky beast: intelligent software capable of understanding human language.

Siri and other natural language interfaces such as Google Now move our relationship with computers away from typing long search queries and endless button clicking, and replace it with the kind of conversations that we might have with each other.

Voice control can be particularly useful in certain scenarios: for example, while driving a car (Apple is working with various car manufacturers to integrate Siri into cars), or in the kitchen when your hands are mucky (the Nigella Lawson recipe iPad app lets you move through ingredients and recipe instructions with voice commands). As anyone who has used a touchscreen keyboard knows, it’s hard work and it’s easy to make mistakes, so voice input can offer welcome relief from typing – providing that it works flawlessly.

On the web, the Web Speech API gives modern browsers the capability of speech recognition. Combined with its speech synthesis counterpart (which Safari supports from iOS 7), it’s now possible for a user to have a conversation with a website. The possibilities are endless, particularly on mobile, and uses like language translation and dictation are just the start.
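To give a flavour of how little code this takes, here’s a minimal sketch of browser speech recognition. It assumes Chrome’s prefixed `webkitSpeechRecognition` constructor; the `bestTranscript` helper and the `'en-GB'` language choice are my own illustrative additions, not part of the API.

```javascript
// Pick the highest-confidence transcript from a list of alternatives,
// where each alternative looks like { transcript, confidence }.
function bestTranscript(alternatives) {
  return alternatives.reduce(function (best, alt) {
    return alt.confidence > best.confidence ? alt : best;
  }).transcript;
}

// Browser wiring (Chrome exposes the API with a webkit prefix).
if (typeof webkitSpeechRecognition !== 'undefined') {
  var recognition = new webkitSpeechRecognition();
  recognition.lang = 'en-GB';
  recognition.onresult = function (event) {
    // The newest result holds one or more alternative interpretations.
    var result = event.results[event.results.length - 1];
    var alternatives = [];
    for (var i = 0; i < result.length; i++) alternatives.push(result[i]);
    console.log('Heard: ' + bestTranscript(alternatives));
  };
  recognition.start();
}
```

Speaking back to the user is just as brief where synthesis is supported: `speechSynthesis.speak(new SpeechSynthesisUtterance('Hello'))`.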

Voice control is also growing in importance for a more extreme reason. As technology gets ever smaller and more disseminated around our homes, there’s a need to control it in new ways. A device may be too small to have a screen or keyboard, and buttons may be impractical. For instance, Google Glass can be controlled through voice commands, which is, in theory, easier than fiddling around with the tiny buttons on the side of the device. With the voice command ‘OK Glass’, the device starts listening, and the user can then use natural language voice interactions to control it. Google have also employed this ‘always listening’ feature in a recent phone, the Moto X, which can be woken from sleep and used freely with voice commands, without the user ever going near the device, as demonstrated in this advert. The growing trend for smart watches such as the Samsung Galaxy Gear and the long-rumoured Apple iWatch further underlines how technology is getting ever smaller, and requires ever more sophisticated voice controls.

Voice-based interaction does have one downside that I feel is a barrier: using it in public. Personally, I feel very self-conscious telling my phone what to do, but I’m sure this is something that will change as it becomes more common.

2. Accelerometer and gyroscope

Two sensors that are built into nearly all smartphones and tablets are the three-axis accelerometer and the gyroscope, which can detect the rotation and movement of the device. Usually used to switch between portrait and landscape mode, they also have potential for touchless interaction. One example is the Google Maps mobile app: if you angrily shake your phone whilst using the app, it opens a dialogue box asking if you would like to give feedback, presuming that something went wrong. iOS has several accelerometer-based interactions built into the operating system, including ‘shake to undo’, where a shake movement offers to undo the last thing you typed.

Good gestures feel natural, as they relate to things that we do already, such as shaking your phone in frustration when it’s not working, or resetting something with a shake. More subtle uses of the accelerometer and gyroscope can also feel like a natural way of controlling a device, e.g. tilting and rotating it to steer in a driving game. Parallax.js reacts to the orientation of your device, allowing for tilt-based interactions in the web browser.

3. Cameras and motion sensors

Another opportunity for touchless interaction is by using imaging sensors such as a camera to interpret the world around the device. If a device can ‘see’, then it can offer new modes of interaction, such as physical gestures, motion tracking and facial recognition.

Facial recognition can be used as a security feature, eliminating the need for passwords and making unlocking a phone faster and more natural – just look at your phone and you can use it, no passcode required (although both this and Touch ID fingerprint scanners have been shown to be vulnerable to hackers). Samsung released a myriad of (albeit gimmicky) hands-free interactions with the Galaxy S4, including ‘Eye Scroll’ (eye-tracking technology that lets the user scroll through a page just by moving their eyes), ‘Tilt Scroll’ (scroll a page by tilting the device), and ‘Air Gestures’ (control the phone using hand gestures). While a video is playing on the device, looking away from the screen automatically pauses it, then resumes it when you look back.

In the living room, smart TVs are getting smarter by offering a whole host of touchless interactions. No longer do you need to hunt for the remote: instead, the built-in camera recognises your face and logs you in to your profile (giving you instant access to your favourite content), hand gestures then control TV functions (swiping to navigate, grabbing to select), and finally a voice command turns the TV off when you are done. It might sound like something from Tomorrow’s World, but amazingly it’s all possible with a £500 TV you can buy on the high street today.

Much of this technology was first brought into our homes by games consoles and their peripherals, such as the Nintendo Wii and Microsoft’s Kinect for the Xbox 360. The Kinect combines 3D depth sensors, an RGB camera and a multi-array microphone to provide full-body 3D motion capture, facial recognition and voice recognition. These allow the user to control and interact with the console without touching a games controller. It was a game-changer that has brought touchless interaction into 24 million living rooms.

The Kinect hasn’t just been great for the living room, however: its software development kit (SDK) has allowed programmers to use the Kinect’s sensors in a wide variety of scenarios outside home entertainment. ‘Kinect Hacks’ are posted almost daily on the internet, ranging from art installations, music videos and live musical performance to controlling a robot and even vascular surgery. A thriving community like this helps showcase the possibilities of touchless interaction.

PrimeSense, the makers of the Kinect’s motion capture sensors, released an interesting video (warning: very cheesy) to showcase Capri, their next generation of 3D sensors, which they claim will bring precise, fast motion tracking to everything – from laptops and TVs to elevators, robots and appliances everywhere. It looks like ‘Kinect Hacks’ are going mainstream.

Leap Motion is a 3D motion control device that purports to have 200 times greater sensitivity than Kinect. It’s a tiny device that can be plugged into any computer with a USB connection and enables touchless gesture interactions on the desktop, similar to the Kinect. We recently got one at The Real Adventure – it’s amazingly accurate and intuitive, and has real applications (beyond the initial sensation that you are Harry Potter or in Minority Report) and its own ‘app store’ full of tools and games to play around with. Autodesk have created a plugin so that you can use the Leap Motion to control Maya, a piece of industry-standard 3D graphics and animation software. By using the Leap’s sophisticated 3D sensors, you can manipulate 3D shapes on the screen using your hands, in a way reminiscent of a traditional model maker or sculptor – a natural interaction with all the benefits of using computer software (the ‘undo’ button being one).
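For the curious, driving an on-screen cursor from the Leap is surprisingly simple. This sketch assumes the leapjs JavaScript client (`Leap.loop`, `frame.hands`, `hand.palmPosition` are leapjs names); the millimetre-to-pixel mapping ranges are illustrative values I’ve chosen, not figures from the SDK.

```javascript
// Map a palm position (millimetres, origin at the controller) to screen
// coordinates. The interaction ranges below are illustrative guesses.
function palmToScreen(palm, screenW, screenH) {
  var nx = (palm[0] + 150) / 300;       // x: roughly -150..150 mm
  var ny = 1 - (palm[1] - 100) / 300;   // y: roughly 100..400 mm, inverted
  return {
    x: Math.min(screenW, Math.max(0, nx * screenW)),
    y: Math.min(screenH, Math.max(0, ny * screenH))
  };
}

// Wiring with the leapjs client library (assumes `Leap` is loaded in-page).
if (typeof Leap !== 'undefined') {
  Leap.loop(function (frame) {
    if (frame.hands.length > 0) {
      var pos = palmToScreen(frame.hands[0].palmPosition, 1920, 1080);
      console.log('Cursor at', pos.x, pos.y);
    }
  });
}
```

Hovering your hand above the centre of the device would park the cursor mid-screen; the clamping keeps it on-screen when your hand strays outside the comfortable range.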

Other motion controllers include the MYO armband, which lets you use the electrical activity in your muscles to wirelessly control your computer and other devices, and Haptix, a sensor that turns any flat surface into a 3D multitouch surface. Although the latter is not technically a touchless interaction, it’s another example of how we are moving away from traditional screens.

By giving a computer ‘eyes’, cameras and other motion sensors offer new types of interaction, from face recognition to gesture control. They open up the opportunity of interaction in new spaces, and provide a truly personalised experience. The next step will likely be a better understanding of context: by working out who you are, where you are, what you are doing, and maybe even the emotions you are experiencing, computers will be able to adapt automatically to our needs.

4. Bluetooth and NFC

Wireless communication technologies such as Bluetooth and Near Field Communication (NFC) allow for interactions based on proximity and location. For a long time now, we’ve been promised both jetpacks and contactless payments through our mobile phones. Like the personal jetpack, it’s never quite happened – but it’s close. Whereas countries like Japan have embraced NFC-based payments for years, they’ve never quite caught on in the West, partly because Apple has never incorporated NFC into the iPhone. Now, though, Bluetooth low energy (BLE) is being touted as the next big thing for contactless payments. By using BLE beacon transmitters and geo-fencing, consumers can be identified, along with their precise location. Apple markets its implementation as iBeacon, and BLE chips ship in most modern phones, so support should be much wider than for NFC – it might just change the way we do commerce on the high street.

Instead of queuing at the till to pay, the future may see our accounts automatically debited as we leave the store, and loyalty rewards offered based on our location history. Many see commerce without tills as the ultimate touchless interaction: walk into a shop, order a sandwich, eat, walk out. No fiddling around with card machines and no long queues for the checkout, as PayPal’s new Beacon solution demonstrates.
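Under the hood, beacon proximity comes down to signal strength. A common approximation is the log-distance path-loss model, which converts a received signal strength (RSSI) into a rough distance; the proximity zone boundaries below are illustrative – Apple doesn’t publish exact thresholds for iBeacon’s immediate/near/far buckets.

```javascript
// Estimate distance (metres) from a BLE beacon using the log-distance
// path-loss model. txPower is the expected RSSI at 1 metre (advertised
// by the beacon); n is the environmental attenuation factor (~2 in free
// space, higher indoors).
function estimateDistance(rssi, txPower, n) {
  n = n || 2;
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// Bucket distances into iBeacon-style proximity zones.
// Boundary values here are illustrative guesses.
function proximityZone(distanceMetres) {
  if (distanceMetres < 0.5) return 'immediate';
  if (distanceMetres < 4) return 'near';
  return 'far';
}
```

So a reading equal to the beacon’s calibrated 1-metre power puts you about a metre away – ‘near’ – while a reading 20 dB weaker suggests roughly ten metres, i.e. ‘far’. A shop app could trigger a loyalty offer only in the ‘immediate’ zone, right at the shelf.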

Bump, recently acquired by Google, is an app that allows two smartphone users to physically bump their phones together to transfer contact information, photos and files to each other over the internet. When two phones are bumped together, the software sends a variety of sensor data – the location of the phone, accelerometer readings, IP address and other readings – to an algorithm running on Bump’s servers. The algorithm works out which two phones felt the same physical bump, and then transfers the information between them. Bump makes transfers entirely through software, whereas devices with NFC chips transfer data through a combination of software and hardware. It’s an impressively complex use of various technologies, all working together in sequence to deliver a really simple, touchless interaction for the user.
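The heart of that server-side matching can be sketched in a few lines: pair up bump events that happened at nearly the same time, in nearly the same place. This is my own simplified reconstruction of the idea described above – the time and distance thresholds are illustrative, not Bump’s actual values, and the real service weighs many more signals.

```javascript
// Pair up "bump" events: two phones that felt a bump at nearly the same
// time, in nearly the same place, are assumed to have bumped each other.
// Thresholds are illustrative, not Bump's real parameters.
var MAX_TIME_DIFF_MS = 250;
var MAX_DISTANCE_M = 10;

function matchBumps(events) {
  var pairs = [];
  var used = {};
  for (var i = 0; i < events.length; i++) {
    if (used[i]) continue;
    for (var j = i + 1; j < events.length; j++) {
      if (used[j]) continue;
      var a = events[i], b = events[j];
      if (Math.abs(a.time - b.time) <= MAX_TIME_DIFF_MS &&
          distanceMetres(a, b) <= MAX_DISTANCE_M) {
        pairs.push([a.phoneId, b.phoneId]);
        used[i] = used[j] = true;
        break;                      // each event matches at most once
      }
    }
  }
  return pairs;
}

// Equirectangular approximation -- plenty accurate at bump-sized distances.
function distanceMetres(a, b) {
  var R = 6371000;                  // Earth radius in metres
  var dLat = (b.lat - a.lat) * Math.PI / 180;
  var dLng = (b.lng - a.lng) * Math.PI / 180 *
             Math.cos(a.lat * Math.PI / 180);
  return R * Math.sqrt(dLat * dLat + dLng * dLng);
}
```

Two phones reporting a jolt within a quarter of a second of each other in the same room get paired; a phone that happened to jolt at the same moment in another city does not.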

Wireless communication technologies are touchless by nature, but often activating them at the right time is the clumsy bit. Technologies like Bluetooth LE and NFC allow for data to be securely exchanged and managed behind the scenes, leaving people to get on with their lives. Exchanging files can become quick and seamless, and in retail, long queues can become a thing of the past with loyalty properly rewarded, improving customer retention.

Wrapping up

So there you have it: four methods we can use to create touchless interactions – voice; accelerometer and gyroscope; camera; and wireless communication. By replacing or enhancing our graphical user interfaces with touchless interactions, we can make our products and services easier to use, feel more natural, and become available in new environments, such as the living room or the car. As with anything new, there are mistakes to avoid: Samsung’s touchless innovations on the Galaxy S4, for instance, feel more like gimmicks than solutions to any real user need. A touchless interaction needs to solve a problem and enhance a product or service, rather than complicate it. But where they are employed well, as in the Bump app, touchless interactions make a product or service more pleasant to use, and therefore make the consumer more likely to use it.

Now, why not use your Leap Motion to interact with the sharing buttons below, or leave a comment using your browser’s voice input feature?

