The Do’s and Don’ts of Designing Voice User Interfaces

Speech is a fundamental part of the way humans interact with each other. From tone of voice to the words we choose, voice is central to having a ‘human’ experience.

This also applies to AI voice. We’ve all been there: on the phone, talking to an AI that has been programmed with certain phrases and sentences, knowing full well that we are talking to a machine. So the question is, how do we design voice interfaces that sound like a real human being?

We Don’t Type the Way We Speak.

When designing voice interfaces, one of the key things to remember is that we don’t talk the same way we type. Say we wanted to find the nearest music shop. If we opened Google in a browser, we might type something like “nearest music shop to me” and be served pages linking us to the nearest one.

However, if we were to ask someone the same question face-to-face, we would say something along the lines of “Do you know where the nearest music shop is?”. The same applies when designing a voice interface for an AI assistant like Alexa or Google Assistant. We have to take the way humans speak into consideration. Even though we know we are talking to a machine, the experience needs to be as fluid and natural as possible.

When presented with anything familiar, we automatically feel more comfortable and respond better. The way we speak to each other is constantly evolving, so it only makes sense that voice interfaces should accommodate that.
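To make the idea concrete, here is a minimal sketch of how a voice interface might map both typed, keyword-style queries and conversational spoken phrasings to the same underlying request. This is a hypothetical illustration, not the API of any real assistant platform:

```python
# Hypothetical intent matcher: several natural phrasings map to one intent,
# so the assistant understands spoken questions as well as typed keywords.

FIND_SHOP_PHRASES = [
    "nearest music shop to me",                     # typed, keyword style
    "do you know where the nearest music shop is",  # spoken, conversational
    "where can i find a music shop near here",
]

def matches_find_shop(utterance: str) -> bool:
    """Return True if the utterance looks like a 'find music shop' request."""
    text = utterance.lower().strip(" ?!.")
    # Accept any known phrasing, plus a loose keyword fallback.
    return any(phrase in text or text in phrase for phrase in FIND_SHOP_PHRASES) or (
        "music shop" in text and ("near" in text or "nearest" in text)
    )
```

Real platforms generalise this with training phrases and language models, but the design principle is the same: collect the ways people actually say something, not just the way they would type it.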

Remembering Key Details.

Another important factor when designing voice interfaces is that personalization and memory are key.

We are able to change our ringtones, save our bank details, have certain apps and sites remember our login credentials and much, much more. When it comes to AI, these sorts of elements should still apply. Just like the way we speak, the things we remember are familiar. Amazon Alexa allows this by asking for your details once when you order something on Amazon, then offering to remember them for next time.

This allows humans to feel like the machine isn’t just code and wires, but rather a complex compilation of those things and more. It gives the AI more of a ‘human’ feel. Familiarity and memory factor into the way we converse.

We also don’t like to be overloaded with information or to have to remember lots of commands and details, so memory helps in that sense as well.

Accessibility.

With any product, accessibility is one of the first things people look for, and voice interface design is no exception. As these AIs integrate further into society, they become ever more commonplace.

So, when designing a voice interface, we have to take into consideration that humans are different. Some may have a speech impediment or a hearing impairment, for example, and our voice interfaces and AI should be able to work with this. Even accents can change the way an AI registers the speech it hears.

When designing, think about these things and how to work around them. When humans feel their own needs have been met, they respond more positively and feel a sense of care from the product manufacturers.

Avoid Bias and Personal Opinion.

As the designer, you have to remember that these AIs are not designed personally for you, so avoid programming in your personal opinions. For example: a customer asks Alexa or Google which trainers are best to buy, and you’ve programmed in Nike’s Jordans as the answer. There is no official confirmation from anyone that Jordans are the best trainers; that’s just the designer’s opinion. Think globally, not just for the individual.

When designing and programming the voice, remember that every human being has a different opinion and viewpoint, and this should be reflected in the way the AI responds to questions like the one in this example.

The power of speech and the voice have been constants in humanity. Slight changes in tone get noticed, and one word can mean two different things depending on how it is said. The overall aim when designing AI and voice interfaces is for the experience to sound and feel like talking to another human being, even when you know it’s a machine. Be mindful of the differences between humans when designing; when it works, it works right and engages people positively.
