You’re trying to take that perfect group selfie. You’re holding the camera in one hand, trying to get the shot composed. You’re also fumbling to find that option to enable the 3-second timer. Where is that damned option? Why are the most important features hidden away several menus deep?
How many times have you taught your father to use that banking app to check his account balance? How many times has he gotten confused, navigating through the various options in the app, trying to complete that transaction? We have all been there.
Unfortunately, mobile apps have limited screen real estate, so they create layers of user interface that are not always obvious to navigate. Hunting through menus with your fingers to find every feature can be frustrating.
"Speech is 3 times faster than typing." (Stanford research)
You know exactly what you want. Why can’t you simply tell your app what that is? Take a selfie with a 3-second timer. Add 2 kgs of apples to my shopping cart. Show me my transactions for the last 6 months.
Better still, you should be able to do all of this in the language you’re most comfortable with. Like asking your banking app मुझे छह महीने का स्टेटमेंट दिखाओ (Hindi for “show me my six-month statement”) or ஆறு மாச ஸ்டேட்மென்ட் காட்டு (the same request in Tamil) and getting your transaction statement for the past 6 months.
Here’s the clincher. What if we told you that app developers can now add a voice interface to their apps that enables all of these possibilities, in just a few minutes?
Our story began a year ago when we noticed how easily our kids took to voice assistants like Alexa and Google Home, and our parents to voice search with Google. Voice interfaces are natural. They do not intimidate. Users don’t need to learn something new. Voice interfaces are here to stay and will only become more pervasive. However, mobile apps remain touch-centric. Developers today lack the right tools to make voice interfaces available in their apps. They are limited to building skills or actions that hide behind voice assistants like Alexa, Google Assistant, or Siri. Those assistants act as the primary interface, and the apps merely fulfill the requests. The apps themselves cannot be controlled by voice.
"Advancements in voice recognition and translation, and the advent of a language-agnostic internet, will drive more Indian-language users and accessible content online." (KPMG/Google study)
Vernacular-language users are expected to account for roughly 75% of India's internet user base by 2021. Research by KPMG and Google points out that by 2021, nine out of ten new internet users in the country will be native-language speakers, and Google search trends show a significant move in this direction as well. This presents an exciting opportunity for voice search to create disruption, especially in a country like India, where people prefer to interact in their own language.
This is what we hope to change at Slang Labs, where we are building Slang — a platform that lets new and existing mobile apps easily add a smart, multilingual voice assistant inside them. It enables the users of an app to interact with it via voice, in addition to the existing touch-based navigation.
The advantages for app developers are two-fold: their users get a voice interface alongside the existing touch-based one, and that interface works across multiple languages without the developers having to build language support themselves.
If Slang sounds interesting to you and you’d like to explore the possibility of adding this support to your application, please sign up below.