Google I/O 2016
The latest and greatest from Google I/O
Google released a ton of new products and feature updates at the I/O conference, and although we can’t go through all of them, here are the major highlights.
Android N
This much-anticipated release doesn’t have a name yet, but Google has been showing it off in several public releases. There are a lot of new features, with multi-tasking mode being the major one. The notifications shade has also been updated with direct reply within the notifications bar and bundled notifications to prevent cluttering from multiple applications. The Settings menu and layout have also been updated to display at-a-glance information without having to go into a particular option. There are new themes available for Google’s stock keyboard, with customization options similar to those offered by SwiftKey.
Under the hood, there are two major performance updates: the adoption of Vulkan as the graphics API, which will deliver more efficient visual performance for mobile gaming, and a new JIT compiler, which will reduce install times for new apps. We should start to see more features or updates coming out in the summer. In terms of naming, Google is opening this up to the public and you can contribute a name for Android N too!
Daydream
This is the new Android N-powered VR platform that doesn’t require an expensive headset or a gaming PC, just your smartphone. It seems to be the successor of Google Cardboard, but where Cardboard worked with just about any device, Daydream will require the next generation of Android N devices built with special sensors and screens. These new devices will provide the display and computational power needed for Daydream, but users will also need to get Google’s new VR headset and a controller.
In addition, the Unreal Engine will natively support Daydream. This will really help accelerate the development of VR-based games for smartphones. Unity engine integrations are also coming later in the summer.
Google Home
A competitor to the Amazon Echo, Home is a voice-activated home product that allows you and your family to get answers from Google. It will integrate most of Google’s services, such as YouTube for media and Google Play Music for personal libraries.
Allo and Duo
Two new messaging apps debuted at I/O, and both integrate the next generation of smart messaging with in-context understanding and search capabilities. Allo is a messaging application that lets you use Google Search directly in-app, get smart replies with simple AI-powered suggestions, send end-to-end encrypted messages, and use the voice-based Assistant that we’ll talk about shortly.
Duo is a simple one-to-one video calling app and you can call anyone from your phonebook. It’s optimized for dealing with slow connections and seamlessly transitions from WiFi to cellular data as necessary. Both Duo and Allo work based on your phone number.
Assistant and Chat bots
Google Assistant is a conversational successor to Google Now, but it is embedded within applications. For instance, in the Allo app that we covered above, Assistant takes the role of a chat-bot that answers your questions in-context and provides you with information as you need it. This is what Google calls a two-way conversation. In Google Home, it takes on the form of taking instructions from the user and executing them, a one-way conversation. Assistant will also power other features on smartphones such as Now on Tap and Search.
Google is also planning on launching chatbot APIs for various messaging services by giving them access to Google Now functionality. We’ll see in the upcoming year how Assistant takes on the role of powering new chatbots across various applications.
Eventually, my guess is that most stock Android applications will be powered by one-way or two-way conversations with advanced AI, like Assistant. This makes a lot of sense for a company that’s making deep investments in machine learning to create an inherently smart and user-friendly operating system. Sundar Pichai put this in his own words:
We think of it as building each user their own individual Google
Android Wear 2.0
There are several new updates to the Wear app and the watch interface. The new update is designed to make watches largely independent of the phone while still being functional. The smart message replies from Inbox are now available with Wear to make conversations easier on the watch. There’s a new launcher for applications and a new overall dark interface that can be used with Wear. Google has also expanded the Fit platform to the watches with automatic activity detection for strength training exercises, such as bicep curls and deadlifts.
Instant Apps
To me, this was perhaps the most exciting feature that Google released. Instant Apps are pieces of a larger app that handle a very particular function. The best example of an instant app is just the shopping cart function that allows you to purchase an item on eBay without having to download the whole eBay app. This new feature allows developers to split existing apps into several smaller modules, each suited to a particular task.
Instant Apps are essentially functions of the services they provide. The technology underlying Instant Apps takes advantage of deep linking to inter-connect applications and make the transition from one app to another seamless. In the shopping cart example, the user can transition from Chrome, where she searched for a particular item, to an instant app that lets her purchase it through eBay or Amazon.
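Conceptually, deep linking means a URL resolves to the one module that handles that function instead of the whole app. Here’s a minimal, purely illustrative sketch of that idea in Python; the URL patterns and module names are invented (the real Instant Apps runtime resolves links through the Play Store, not like this):

```python
# Hypothetical sketch: routing a deep link to the instant-app module
# that handles just that function. All names here are made up.
from urllib.parse import urlparse

# Map URL path prefixes to the app module that handles that function.
MODULE_ROUTES = {
    "/cart": "checkout-module",      # just the shopping-cart slice of the app
    "/item": "product-view-module",  # just the product-page slice
}

def resolve_instant_module(url):
    """Return the module that should handle this deep link, if any."""
    path = urlparse(url).path
    for prefix, module in MODULE_ROUTES.items():
        if path.startswith(prefix):
            return module
    return "full-app"  # fall back to offering the full install

# A link shared from Chrome lands directly in the checkout slice:
print(resolve_instant_module("https://shop.example.com/cart/123"))
```

The point of the sketch is that the user never sees an install step: the link itself carries enough information to fetch only the module needed.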
Tensor Processing Unit (TPU) for advanced machine learning
These are custom chips by Google that power TensorFlow machine learning tasks. The hardware is specialized for the tensor operations at the heart of TensorFlow workloads.
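The workloads a TPU accelerates mostly boil down to large batched tensor operations, above all the matrix multiplies inside neural-network layers. As a rough illustration of that core operation (plain NumPy here, not TPU or TensorFlow code):

```python
import numpy as np

# The kind of computation a TPU is built to accelerate: a dense
# neural-network layer, i.e. one big matrix multiply plus an activation.
# Sizes are tiny and arbitrary, for illustration only.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 128))    # a small batch of input vectors
w = rng.standard_normal((128, 64))   # layer weights
b = np.zeros(64)                     # layer bias

# y = max(0, x @ w + b): one matmul, one bias add, one ReLU.
y = np.maximum(0.0, x @ w + b)
print(y.shape)
```

A TPU’s advantage is doing millions of these multiply-accumulates per cycle in dedicated silicon rather than on a general-purpose CPU or GPU.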
Google Family Library
Share Google Play Purchases amongst 6 family members through the Family Library.
Android apps running on Chromebooks
Chromebooks are largely powered by applications from the Chrome Web Store, which hasn’t been doing that well: there are plenty of apps, but that ecosystem has grown stale. Thanks to Android, the Play Store is where major development is happening, so it makes a lot of sense to run Android apps on Chromebooks. This will also let Chromebooks close the gap with cheap laptops that can run Office.
Google Ara’s modular phones
The modular phones may become available later this year for developers. Each device supports up to six hot-swap modules that extend functionality, such as an extended battery, a high-end camera, or a blood glucose monitor.
Google Soli, Project Jacquard and more
There are a few divisions in Google that incubate experimental technology into commercial products. One of them, called Advanced Technologies and Projects (ATAP), is working on two prominent projects: Soli and Jacquard.
The first, Google Soli, is a new effort to create radar-enabled consumer electronics, and the team building it has a new target: putting the radar in gesture-enabled smartwatches.
Project Jacquard is another initiative by ATAP to create technology-enabled clothing. This time around, Google’s ATAP is partnering with Levi Strauss to create a commuter jacket that will use ATAP’s technology. It isn’t clear yet exactly what the jacket’s technology will do, but the jackets will be available in spring 2017 and a beta launch is coming this fall.
There’s a third project to come out of ATAP, called Project Abacus, which won’t be available until later this year. This project aims to replace passwords by using several personalized metrics (such as your typing speed, walking patterns and so on) to create a trust score, which can then be used instead of regular passwords. Right now, Google is beta testing this, and an API to create trust scores and use them throughout Android applications will be available shortly.
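The idea of combining several behavioral metrics into a single trust score can be sketched very simply. Abacus’s actual signals, weights, and thresholds are unpublished, so everything below is invented for illustration:

```python
# Hypothetical sketch of an Abacus-style trust score. The signals,
# weights, and threshold are all made up; the real system's internals
# have not been published.

def trust_score(signals, weights):
    """Combine per-signal confidences (each 0..1) into a weighted
    average in 0..1. Missing signals count as zero confidence."""
    total = sum(weights.values())
    return sum(weights[k] * signals.get(k, 0.0) for k in weights) / total

# Invented behavioral signals: how closely current behavior matches
# the enrolled user's patterns.
weights = {"typing_rhythm": 0.4, "gait": 0.3, "location": 0.3}
signals = {"typing_rhythm": 0.9, "gait": 0.8, "location": 1.0}

score = trust_score(signals, weights)
print(round(score, 2))
```

An app would then compare the score against its own threshold: a banking app might demand a higher score than a game before letting the user skip the password.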
That was already a very long post and I still didn’t get to cover other minor APIs that were released such as the Awareness API, the Google Play Awards and the rich search cards that are rolling out on mobile.
So what can we expect next year? I imagine a host of improvements on VR, and we should start to see Assistant take a better hold of Android stock applications. Instant apps will be a hit, but there are only so many shopping-cart apps that devs can make, so they might hit a plateau; only a few will be actually useful. I’m sure Google will put out more regarding those TPUs, maybe show us an implementation of something close to a cluster farm built from TPUs. I’m very excited about the new chatbots that will be coming out based on the Now functionality. I think the idea of an integrated mobile system, where a user’s focus can seamlessly transfer from one window to another, is becoming more and more a reality with Android N.