What were the key takeaways from Google I/O '19?

Being at I/O is an immensely richer experience than watching remotely. Here, Giridhar shares his experience from the ground.

A couple of us from Slang Labs got to attend Google I/O 2019 (thanks, Google Launchpad, for the passes :)), and it was an exciting event. This is a summary of our top takeaways.

At its heart, Google I/O is an annual conference aimed at developers building software with Google's technologies, such as app developers building Android apps, AI developers using TensorFlow for deep learning, and so on. Developers get to learn about new technologies Google is building and updates to existing products, as well as meet some of the engineers working on these products face-to-face and interact with them. The energy and buzz surrounding the event were unreal and are something to be experienced in person.

From the perspective of Slang Labs, the most interesting discussions at I/O were related to the Google Assistant, the focus on voice as a first-class interface and Google’s work on the Next Billion Users.

Assistant and Voice

80% of users say they would like to engage with brands or third parties via digital assistants.

This year, much of the focus of the keynote was on Google Assistant and the many updates it is getting. The Assistant is now available on 1 billion devices and is leading the charge in bringing voice interfaces to people around the world. Most interestingly, Google showed off the next generation of the Assistant, which will be up to 10x faster thanks to on-device speech processing. Google seems to believe that fast and responsive voice assistants will move people away from the world of touching and tapping keys on their devices. The next-generation Assistant, however, will initially be available only on the next generation of Pixel phones, with support for US English.

Next Billion Users

The new Internet users are going to be mobile-first

The new internet users are different — 32% want to give direct commands to execute an action

Another area that received attention at Google I/O was Next Billion Users, an umbrella initiative aimed at bringing people from developing countries onto the Internet. Google has built a number of products specifically for these users, such as the Google Lens app on Go smartphones, which can read English text, translate it in real time into other languages, and overlay the translation right on the screen. There were also sessions dedicated to helping developers build software specialized for the Next Billion Users.

What was clear from these sessions was that most of the Next Billion Users coming online over the next few years will be mobile-first, own a smartphone, and prefer to speak to their devices in their native tongue rather than click and tap through apps built for English-literate audiences.

Voice and Apps

Digital assistants across devices: 57% of digital assistant usage is on smartphones.

Given the focus on digital assistants, as well as on the Next Billion Users, who are predominantly users of Android apps, it would have been surprising not to see a special focus on voice interfaces for apps.

And sure enough, Google announced that it is doubling down on "App Actions", a category of Assistant actions optimized for apps, with built-in support for four categories: Finance, Health & Fitness, Food Ordering and Ride Sharing. For apps in any of these categories, developers can build the Google Assistant integration fairly easily, after which users can talk to the Assistant and be directed into the app via deep links.
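To give a concrete sense of the developer side of that flow, here is a minimal sketch of how an Android app might handle the deep link the Assistant sends when an App Action is invoked. The `myfoodapp://order` URI and the `item` parameter are hypothetical, made up for illustration; real apps declare their own URL template in their App Actions configuration and parse whatever parameters it defines.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical deep-link handler for an App Action in a food-ordering app.
// The "myfoodapp://order?item=..." shape is an assumption for illustration.
class OrderActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // When the Assistant fulfils an App Action, it launches the app with a
        // deep-link Intent whose data URI carries the parameters extracted
        // from the user's utterance.
        intent?.data?.let { uri ->
            val item = uri.getQueryParameter("item")
            if (item != null) {
                startOrderFlow(item) // jump straight to the relevant screen
            }
        }
    }

    private fun startOrderFlow(item: String) {
        // App-specific logic: pre-fill the cart with the requested item, etc.
    }
}
```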

What was missing?

All of the product announcements at I/O 2019 were exciting, and the developer sessions, as well as interactions with the Google team via Sandboxes and Office Hours, were very useful. However, one area where we did not find a compelling story was voice interfaces inside Android apps. While the Assistant is great at getting information without opening apps, and App Actions can be used to jump into apps via voice, there is no way for users to get any work done via voice once they are inside an app. Invoking the Assistant again through "OK, Google" and speaking voice commands feels unnatural, and having to switch back to touching, tapping and typing once inside an app is undesirable in most cases.
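For context on why this gap matters to developers: hand-rolling even basic in-app voice with Android's stock SpeechRecognizer API gives you raw transcripts and nothing more. The rough sketch below shows the wiring involved; the handleCommand() step, where the transcript has to be mapped to an in-app action, is a placeholder and is entirely left to the developer.

```kotlin
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer
import androidx.appcompat.app.AppCompatActivity

// Rough sketch of hand-rolled in-app voice with Android's SpeechRecognizer.
// Requires the RECORD_AUDIO permission to be declared and granted.
class VoiceActivity : AppCompatActivity() {

    private lateinit var recognizer: SpeechRecognizer

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        recognizer = SpeechRecognizer.createSpeechRecognizer(this)
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle) {
                val text = results
                    .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull() ?: return
                handleCommand(text) // developer-defined: map text to an in-app action
            }
            // The remaining callbacks must still be implemented, even if unused.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onError(error: Int) {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
    }

    fun startListening() {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
        }
        recognizer.startListening(intent)
    }

    private fun handleCommand(text: String) {
        // Placeholder: interpret the transcript and drive the app's UI.
    }

    override fun onDestroy() {
        recognizer.destroy()
        super.onDestroy()
    }
}
```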

Fortunately, this is an area where Slang Labs can help :) Our platform is optimized for building voice-augmented experiences in mobile apps, and with a simple integration it plugs this hole so that users can continue to use voice to get their work done even inside apps. To know more, please visit https://slanglabs.in

Conclusion

Overall, we had a very rewarding experience visiting Google I/O 2019. Being at the center of the Google world for 3 days, watching new product launches, meeting Google engineers in person and interacting with other developers from across the world was an incredible experience and we would love to be back again next year. If you are a developer and haven’t visited Google I/O before, we highly recommend going next year!