During its recent event, Google stated its goal of building a “personal Google” for everyone.
Last week, Google announced a portfolio of “Made by Google” hardware that included Pixel smartphones, a Daydream viewer for mobile virtual reality, upgraded Chromecasts and the Google Home voice-activated speaker. However, the underlying theme of the event had little to do with hardware and everything to do with Google’s vision for an artificial intelligence-centric computing world. Google Assistant, embedded in both Pixel smartphones and Google Home, is the first manifestation of this vision.
Google’s CEO, Sundar Pichai, kicked off the event by explaining this vision. He stated that the computing world has seen paradigm shifts roughly once every decade, starting with the PC becoming mainstream in the early-to-mid ‘80s. This was followed by the web in the mid-’90s and the current smartphone revolution that kicked off in earnest around 2007. This puts us at around the right timeframe for the next paradigm shift, in which Google believes voice-activated virtual assistants will become the primary way to interact with online services. Google’s vision is bold and ambitious — to say the least — and will be challenging to realize. This becomes clear when we look back at the elements that drove previous computing paradigm shifts.
Tick-Tock Pattern in Computing Paradigm Shifts
If we look at the last computing revolution, it was characterized by a tick-tock pattern. In the “tick cycle”, we saw the introduction of a new computing device platform (i.e., the PC). With a point-and-click interface, this device had an entirely different interaction model compared to previous computing platforms. It also brought computing to new contexts, embodying Bill Gates’ vision (“a computer in every home and on every desk”). These elements catalyzed software innovation, which kicked off the personal computing era. If we look closely, these elements also explain app innovation on smartphones (multitouch interface and mobile context) and the lack of “killer apps” on smartwatches (similar interaction model and context as smartphones).
Looking at the last “tock cycle”, we saw a dramatic expansion of PC capabilities as the web enabled instant communication and remote software distribution. However, this paradigm shift was built on top of the existing interaction model and computing context. Web browsers mostly behaved like PC applications, as users interacted with them using the existing point-and-click interface and computing remained indoors. This was no accident, as it is extremely difficult to change established user habits once usage patterns around a product have taken hold. For instance, consider the uphill battle of mobile payments against “good enough” debit and credit cards.
Voice Interaction Versus Established Patterns
There is no doubt that artificial intelligence and contextual computing have the potential to reduce friction and ease discovery. However, shifting to a voice-dominant interaction model requires consumers to change long-established habits around multitouch interaction.
This does not mean that voice interaction does not exist today. Google has previously stated that 20% of mobile queries are now voice queries. If we assume that each query can be equated to a session on the Google app, 20% of sessions within the app would be voice activated. Even including all sessions of voice assistants like Cortana, Dragon Mobile Assistant and Hound, voice interaction would constitute only 1.2% of monthly sessions on Android smartphones in August 2016. Optimistically, even if we add 20% of all sessions on Google Chrome, voice interaction would comprise just 2% of monthly sessions. Of course, this percentage is likely to grow as voice recognition and the capabilities of Google Assistant improve. But it is difficult to see voice becoming the primary interaction model for a device that is used everywhere and often around other people. This argument changes, of course, if voice-enabled speakers like Google Home and Amazon Echo are viewed as the next “tick cycle”, but the scale of these products is unlikely to approach smartphones for some time.
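The back-of-the-envelope estimate above can be sketched as follows. Only the 20% voice-query share comes from the text; the session counts below are illustrative placeholders, not the underlying data behind the 1.2% and 2% figures.

```python
# Sketch of the voice-share estimate. The session counts passed in are
# hypothetical placeholders, NOT the article's underlying measurements.

def voice_share(google_app_sessions, other_assistant_sessions,
                total_sessions, voice_query_fraction=0.20):
    """Estimate the fraction of all monthly sessions that are voice-activated,
    assuming voice_query_fraction of Google app sessions are voice queries."""
    voice_sessions = (google_app_sessions * voice_query_fraction
                      + other_assistant_sessions)
    return voice_sessions / total_sessions

# With these placeholder inputs, the estimate works out to 1.2%:
share = voice_share(google_app_sessions=50, other_assistant_sessions=2,
                    total_sessions=1000)
print(f"{share:.1%}")  # prints "1.2%"
```

The key modeling assumption, as in the article, is that one voice query equals one voice-activated session.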
While Google has positioned voice as the dominant interaction paradigm, Google Assistant’s capabilities extend beyond that. The company’s stated goal is to build a “personal Google” for all users. This makes Google Assistant a repository for personal preferences and habits, which can be leveraged to reduce friction and ease discovery of apps and online services. Pulling up the assistant in most contexts also brings up suggested apps to use next, which achieves similar goals without relying on voice interaction. Upcoming integration of third-party apps within Google Assistant can only make this more powerful.
Of course, many mobile services will still require an active user interface. In these cases, artificial intelligence can only go so far to reduce friction if a link redirects users to an app store download page. Instant Apps, another major initiative announced at this year’s Google I/O, aims to solve this problem.
October 10, 2016
Mobile App Strategy