Google made a host of announcements this year covering Android, virtual reality and artificial intelligence. Led by Google Lens, artificial intelligence was clearly the star of the show.
Google kicked off this year’s Google I/O conference with a series of product and platform announcements covering the Android operating system, the Google Play app ecosystem, a standalone virtual reality headset, and new features and services enabled by artificial intelligence and machine learning. While artificial intelligence was clearly the star of the show, let’s begin with an update on the world’s largest smartphone platform.
The State of Android and Google Play
Google announced that there are now 2 billion active Android phones and tablets in the world, and emerging markets account for a significant share of this installed base. However, many of the devices sold in these markets are resource-constrained, a problem compounded by the bandwidth constraints and inconsistent connectivity prevalent in these regions. To mitigate these factors, Google announced Android Go, a lighter variant of the next version of the Android OS that can run on devices with 1GB of memory or less. While this minimizes the OS load on device resources, third-party apps could still have higher demands. To address that problem, Google Play will highlight apps that are optimized to run on these devices and in low-bandwidth conditions. In addition, Google announced Treble, a modular base for Android that could make it dramatically easier for OEMs to bring the latest Android updates to their devices.
Instant Apps: Now Available to All Developers
Next, Google announced the general availability of Instant Apps. Instant Apps were originally unveiled at last year’s Google I/O, where we theorized that they were Google’s attempt to make app usage more efficient and evolve the app store value chain.
Average app sessions per user on Android phones grew 9% year over year in March 2017.
Average sessions per user on Android phones have increased by 9% year over year, showing that apps are becoming more central to consumers’ lives across the world. However, with millions of apps now available, discovery remains a high priority for both users and developers. Instant Apps should ease some of the friction of app discovery and encourage consumers to use more apps more frequently by bridging the gap between initial discovery (via Instant Apps and the web) and native apps. However, as we discussed last year, developers will need to modularize their existing code to support Instant Apps. This is a time-intensive process and, as a result, we expect developer uptake to be gradual.
Artificial Intelligence: The New Battleground
Google clearly views artificial intelligence as the next major paradigm shift in computing and is working to position itself to capitalize on it. However, artificial intelligence is simply an enabling technology, much like the internet (a network of networks) was in the mid-1990s. Then, it was the world wide web, built on top of the internet, that was truly transformational, giving users instant access to information. Today, Google is running two parallel efforts competing to become the world wide web equivalent for artificial intelligence: voice assistants (Google Assistant) and computer vision (Google Lens).
Launched late last year, Google Assistant was initially exclusive to the Google Pixel. Since then, Google has gradually expanded its availability to numerous Android devices and now to the iPhone as well.
At launch, the voice assistant was described as Google’s vision for the next wave of computing. However, we pointed out that paradigm shifts in computing have followed a tick-tock pattern: the first shift introduces a new device platform, which is then followed by a dramatic expansion of capabilities. There is one caveat: once a device platform reaches critical mass, existing user habits are too well entrenched for a new interaction model to become dominant.
That said, many of Google Assistant’s capabilities can be used with a multitouch interaction model as well. We believe there is immense potential here, particularly when combined with Google’s next announcement.
In many ways, Google Lens was the most intriguing announcement at the event. Using Google’s machine learning and artificial intelligence capabilities, Google Lens can recognize objects in the camera’s view in real time and provide helpful information or take action. For example, users can point their camera at a Wi-Fi router’s login details to connect to a network, or at a real-world product or location to get more information. Since the actions are on-screen and smartphone cameras are already used everywhere (as Pokémon GO proved), this does not require entirely new behavior from consumers. In addition, it clearly reduces the friction of searching for information about the real world. In fact, it could accurately be described as visual search.
The rollout of Google Lens will be gradual over the course of the year, starting within Google Photos and Google Assistant. However, at some point, it is likely to become part of the camera app itself, with APIs for third-party developers as well. We believe vision capabilities will lead to a new wave of developer innovation and multi-billion-dollar startups, just as location capabilities did. But the potential of computer vision extends well beyond the current computing era. Much like the internet and the world wide web paved the way for the next era of computing devices (mobile), we believe artificial intelligence and computer vision could pave the way for an era of “field of vision” devices (augmented reality).