If you have watched Marvel's Iron Man films, you must have been fascinated by the voice-enabled AI of the protagonist Tony Stark, aka Iron Man. His AI assistant, Jarvis, can do almost everything for him.
You may be surprised to know that we are already halfway there, not on the reel but in real life. Today, voice-enabled AI assistants have become ubiquitous. They are available on our smartphones, as smart speakers, and in many other devices. Some popular voice assistants are Google Assistant, Amazon Alexa, and Apple Siri.
If you are an inspired mobile app developer, or an entrepreneur who wants to build an app with a voice user interface, continue reading this article, in which we discuss all the important aspects of intuitive voice app search design.
What is a Voice User Interface?
The voice user interface has a very simple definition: it is an interface that lets users interact with a system, consisting of both hardware and software, through human voice commands. The most prominent advantage of a VUI is that you don't need to see or touch the device to interact with it.
Just like any mobile application running on a smartphone or another device, a VUI consists of three layers that must work together for voice interaction to function properly. Each layer uses the layer below it while supporting the one above. The upper two layers, where the voice interface itself lies, reside in the cloud rather than on the device.
Why is Voice Search getting popular?
Here are the important reasons why voice search is getting popular:
Speaking is more intuitive than typing:
Do you know that every fifth search made on Google is a voice search? People prefer searching by voice because it takes far less effort than typing. With voice search, you offer an additional facility instead of restricting users to typing alone. This is why voice search is getting so popular and why almost every new website and mobile app ships with a voice user interface.
Helps technology be prevalent:
Voice technology is no longer limited to typing messages, calling a person, or opening an app. Advanced research in this field has opened the gates to turning your lights on and off, managing the devices in your home, shopping online, or even starting your car, just by using your voice.
Moore’s law:
Put simply, Moore's law observes that the number of transistors on a chip doubles roughly every two years, so computing capability grows exponentially while the cost of that capability falls. Extending this trend, we can see that voice technology, which started as just an experiment, has now become part of everyday life.
More practical for specially-abled:
Voice technology is a boon for specially-abled people who can't easily type their queries into a search box. They can now use devices and smartphones, overcoming the obstacles they faced earlier with visual user interfaces.
Apart from the VUI, there are many other forms of UI, so it is worth comparing them to understand the VUI in depth.
Speed:
Speed is the main difference between voice user interfaces and graphical user interfaces. Voice search is about 3.7x faster than the traditional method of searching by typing: an average person types 30–35 words per minute but speaks 100–130 words per minute.
Text vs. natural language vs. gestures:
The visual interface is no longer king. Though it remains the primary interface, our generation is slowly moving to other, better interfaces, such as voice-based and gesture-based ones.
In a voice-based interface, as we discussed above, voice commands trigger actions, while in a gesture-based interface you perform certain gestures, as on gaming consoles such as the Sony PlayStation. There is one more player, the experimental thought-based interface, for which lie detectors are so far the only prominent use case.
Privacy:
Another big concern, and a major difference between these interfaces, is privacy. Text-based, gesture-based, and touch-based interfaces can be made private, but privacy remains a major challenge for voice user interface design.
How to design a Voice User Interface (Steps)?
Identify your Audience:
Just like designing the interface of any other digital product, you should focus on user-first design when designing a voice user interface. The aim is to understand user behavior and gather data about users' needs. Your main focus should be to:
· Identify users' pain points along with their experience, so you can analyze where users can benefit.
· Gather information about the language users speak, how they talk, and what phrases they often use. This will help you design the UI for different utterances.
Define:
In the second stage, you have to define the capabilities and shape the product. This will include:
Creating different key scenarios:
Key scenarios are the specific use cases you must identify to turn the interface into a conversational dialogue flow. They are a way to think about why someone would need to use a VUI, so you should design scenarios that have high value for your users.
Make sure the scenarios work with voice:
The main priority here is that users must be able to resolve a query more efficiently with voice than with the alternatives. The aim of this step is to find the common and specific cases where users will benefit.
The two main cases are:
a. When users are preoccupied and can't use the visual user interface
b. When they want to do something very quickly, such as commanding the VUI to "play some music".
3 Important Factors – Intent, Utterance, Slot
Let’s understand these factors with the above-mentioned example of “Play some music”.
Intent: It shows the broader objective of the voice command. There are two main types of intents:
a. High utility intents: These intents are very specific, direct commands, e.g. "Turn on the lights in the room."
b. Low utility intents: In contrast to high utility intents, these are vague and less specific, and there can be multiple ways of phrasing them. For example, "Play some music" can mean "I want to hear a song" or "Can you play a song?". Designers have to consider all these variations.
Utterance: An utterance is the actual phrase the user speaks to express an intent. "Play some music", "I want to hear a song", and "Can you play a song" are all utterances of the same intent.
Slots: When an intent alone is not enough to perform an action, slots come into play. Slots are the extra pieces of information required to deliver results for a query, such as the genre in "Play some jazz music". They can be optional or mandatory.
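To make these three factors concrete, here is a minimal sketch of resolving an utterance into an intent plus slots. The intent names, patterns, and slot names below are hypothetical illustrations, not the API of any real assistant platform:

```python
import re

# Hypothetical intents, each with patterns for the utterances that trigger it.
# Named groups such as "genre" and "room" act as slots.
INTENT_PATTERNS = {
    "PlayMusic": [
        re.compile(r"play (?:some )?(?P<genre>\w+)?\s*music"),
        re.compile(r"i want to hear (?:a )?song"),
        re.compile(r"can you play (?:a )?song"),
    ],
    "LightsOn": [
        re.compile(r"turn on the lights(?: (?:of|in) the (?P<room>\w+))?"),
    ],
}

def resolve(utterance: str):
    """Match an utterance to an intent and collect any filled slots."""
    text = utterance.lower().strip()
    for intent, patterns in INTENT_PATTERNS.items():
        for pattern in patterns:
            match = pattern.search(text)
            if match:
                # Keep only slots the user actually filled in.
                slots = {k: v for k, v in match.groupdict().items() if v}
                return intent, slots
    return None, {}

print(resolve("Play some jazz music"))            # high utility: slot filled
print(resolve("Can you play a song"))             # low utility: vague, no slot
print(resolve("Turn on the lights in the kitchen"))
```

A real assistant would use a trained NLU model rather than regular expressions, but the structure — many utterances mapping to one intent, with slots carrying the extra details — is the same.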
Create a prototype:
In a voice user interface, interaction happens through a dialogue flow, which answers the question "How do we create voice interaction between the user and the technology?". The key points a dialogue flow should cover are the main keywords of the interaction, the branches where the conversation could move forward, and example dialogues for users and assistants.
Here, the dialogue flow serves as a prototype that illustrates the back-and-forth conversation between voice assistants and users. There are several prototyping tools you can use for your VUI; the most popular ones are Amazon Alexa Skill Builder, Sayspring, and Google's SDK.
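A branching dialogue flow can be sketched as a small state machine before you ever touch a prototyping tool. The states, prompts, and keywords below are hypothetical examples built around the shoe-store dialogue discussed later in this article:

```python
# Minimal sketch of a branching dialogue flow as a state machine.
# States, prompts, and branch keywords are hypothetical examples.
DIALOGUE_FLOW = {
    "start": {
        "prompt": "There are several shoe stores in the area, would you walk or drive?",
        "branches": {"walk": "nearest_store", "drive": "best_rated_store"},
    },
    "nearest_store": {
        "prompt": "The nearest store is 150 meters away. Want directions?",
        "branches": {},
    },
    "best_rated_store": {
        "prompt": "The best-rated store is a short drive away. Want directions?",
        "branches": {},
    },
}

def next_state(state: str, user_reply: str) -> str:
    """Follow a branch if the reply mentions its keyword; otherwise stay put."""
    branches = DIALOGUE_FLOW[state]["branches"]
    for keyword, target in branches.items():
        if keyword in user_reply.lower():
            return target
    return state

state = "start"
print(DIALOGUE_FLOW[state]["prompt"])
state = next_state(state, "I'd rather walk")
print(DIALOGUE_FLOW[state]["prompt"])
```

Mapping your flow out this way forces you to enumerate every branch and write an example prompt for each, which is exactly what the prototyping tools above will ask of you.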
Writing Dialogues:
The building block of the voice user flow is a compiled set of dialogues. Here are a few tips for creating engaging and conversational dialogue:
· The process shouldn't be tedious or long. Keep the number of steps to a minimum.
· Don't make users learn commands. Voice should flow naturally.
· Keep the questions and responses brief. Here are a do and a don't:
Don’t:
User: Tell me a store where I can buy Nike shoes.
System: There are 5 different Nike stores in your location. One is "Shoppers Stop", which is 150 meters away; the second is "Shoe Corner", which is not currently open; the third is "Shoe Stores", which is an hour away; and so on.
Do:
User: Tell me a store where I can buy Nike shoes.
System: There are several shoe stores in the area. Would you walk or drive?
Identify errors:
When searching by voice, users will not always use exact words, so the system should recover the intended meaning despite errors. Here are some ways of handling errors:
a. Ambiguity: As mentioned, multiple words can carry a single meaning. For example, "Good" can also mean "Okay". The AI must be aware of these commonly occurring ambiguities for optimum performance.
b. Misspellings or mispronunciation: In writing there is no issue of pronunciation, but in speech there can be multiple accents and different pronunciations of words, which can hamper the conversational flow between the user and the NLP system.
c. Not providing relevant options: Make sure users get appropriate answers that are valuable and relevant.
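One simple way to handle the ambiguity case above is to normalize replies to a canonical form before matching them. A minimal sketch, with a hypothetical synonym table:

```python
# Hypothetical synonym table: words that carry a single meaning, as in the
# "Good" / "Okay" ambiguity example above.
SYNONYMS = {
    "good": "yes",
    "okay": "yes",
    "sure": "yes",
    "nope": "no",
    "nah": "no",
}

def normalize(reply: str) -> str:
    """Map each word to its canonical form so 'Good' and 'Okay' both read as 'yes'."""
    words = (w.strip(".,!?") for w in reply.lower().split())
    return " ".join(SYNONYMS.get(w, w) for w in words)

print(normalize("Okay"))          # -> yes
print(normalize("Nah, not now"))  # -> no not now
```

Production systems rely on NLU models rather than fixed tables, but explicitly listing the ambiguities you expect is a useful design exercise either way.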
Test the product:
When the product is on the verge of completion, it is time to test it. You have to be sure that the developed VUI ticks every mark on your checklist. There are two ways of testing your prototype:
By Target users:
Create groups of your target audience and run testing sessions to see how users interact with your creation. You can use these sessions to track task completion rates and customer satisfaction scores.
By Test simulators:
Just as with other simulators in mobile application development, Google and Amazon provide tools for testing the designed product: test your product's Alexa Skill or Google Action.
Design Guidelines For VUI:
It is normal for a visual user interface to have issues, and those issues are usually addressed immediately. However, the frustration caused by issues in a visual user interface is nothing compared to a VUI. If your voice assistant fails to function well, it will be dropped like a hot potato.
Here are some design guidelines that you can adopt while designing the VUI:
Don’t wait for the users to ask first:
Unlike with a conventional visual interface, users may not even know how to interact with a VUI. In that case, you must take the first step. For example, a voice-enabled weather app can tell users, "You can ask for today's weather."
Keep the list of actions short:
If you don't want to overwhelm users right at the beginning, provide only the most appropriate and essential features.
The verbal content must be concise, meaningful, and easy to follow. As Amazon recommends when designing voice user interfaces for Alexa, never list more than three options for interaction.
Keep it short & simple:
To make your voice search app stand out, it should understand voice commands easily. If you are designing voice search to start a shop-floor machine, the best way to interact with the machines is with numbers first, then commands.
Users should know that they are being heard:
It is crucial to let users know when they need to speak and when the voice assistant is listening. Google Assistant is a perfect reference for this: its animated dots wave to indicate that your voice is being heard.
Give the user confirmation:
Just as you expect confirmation after any transaction, the same is true with a VUI. For example, once the user gives the command "switch off the kitchen lights", your assistant must respond with something like "Kitchen lights turned off", so the user doesn't need to check the task in person.
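The confirmation pattern above can be sketched as a handler that performs the action and echoes back what it did. The device names and the handler itself are hypothetical:

```python
# Hypothetical smart-home state: which lights are currently on.
lights = {"kitchen": True, "bedroom": True}

def switch_off(room: str) -> str:
    """Turn off a light and return a spoken confirmation of what was done."""
    if room not in lights:
        return f"I couldn't find a light called {room}."
    lights[room] = False
    # Confirm the completed action so the user doesn't have to check in person.
    return f"{room.capitalize()} lights turned off"

print(switch_off("kitchen"))  # -> Kitchen lights turned off
```

Note that the confirmation is generated after the state change, so the assistant only ever confirms actions that actually happened.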
Wrapping Up:
In this article, we have given you a technical guide on how to design a voice user interface. At Amplework, the best mobile application development company, we have expertise in developing custom functionality in mobile apps. Please let us know your requirements.