9 things developers and businesses need to know about Google Home

Feb 9, 2017

During ArcTouch’s recent third-annual Hackathon, we learned a lot about some emerging IoT platforms. From chatbots and wearables to home assistants and augmented reality, we dove into the IoT world around us in an effort to understand how businesses, developers and users can benefit from these new platforms.

Google Home was the focus of a few of our projects, for good reason. It’s among the newest of IoT platforms, having been released late last year. And when you start to think about what it will take to make Google Home, or any home assistant, a mass commercial success — advanced A.I., machine learning and contextual awareness — Google is well positioned to make it happen.

If you’re heading up a business or product line thinking about how to engage with consumers in the home, the time is now to start learning about Google Home (along with its rival home assistant, Amazon Echo). And that’s exactly why we got our hands dirty and hacked up some proof-of-concept apps (or “actions” as Google refers to them). Here are nine things we learned in the process:

1. You can learn a lot about the universe while you wash your dishes

Personal chatbots may not be a reality for everyone in 2017, but as ArcTouch founder and CXO Adam Fingerman wrote in his recent Law and Order in a Bot-to-Bot World article, they are coming in a “big, pervasive, omnipresent, life-changing sort of way.” We think those personal bots will extend to voice assistants. But adding a tube-shaped, internet-connected speaker to your living room probably won’t be the only change.

Homes will be designed with built-in voice assistants, from smart appliances (“OK Google, start the coffee maker”) to entertainment (“OK Google, find me the top kids movie from last year and play it”) and beyond. A home assistant lets users engage with content without seeing or touching any screen. This frees up your hands and eyes for other things, like cooking, where a voice assistant could guide you through a recipe, a concept one of our teams explored in our food-themed hackathon. Or, while you wash your dishes, you can learn about the universe through podcasts or any other information your personal assistant can find in the cloud.

2. It is just a passive assistant, at least for now

Google Home is, at least for now, a simple question-and-answer device, meaning it won't proactively make suggestions based on what it knows about your “context.” The user always starts the conversation. It would be nice if developers could set triggers that let the voice assistant ask the user whether now is a good time to take an action, but that's not possible today. Unlike phones and tablets, Google Home (and Amazon Echo) can't send reminders or notifications. There is at least an alarm feature developed by Google, but its infrastructure is not yet documented or available to developers. It's also limited: you can ask Google Home to “wake me up at 6 a.m.,” but you can't add a condition to the alarm, e.g. “Wake me up at 8 a.m., but if it's raining wake me up at 6 a.m.”

3. Google Home is always listening, sort of

Google Home is always listening for the “OK, Google” command, ready to be activated. During the recent Super Bowl, an ad for Google Home triggered devices that were within earshot of the TV. This always-on state has generated buzz in industry circles about privacy and whether Google is storing every nearby conversation that takes place. And if it is, how might Google use that information? According to the company's privacy policy, Google Home “listens in short (a few seconds) snippets for the hotword. Those snippets are deleted if the hotword is not detected.” After you engage Google Home with “OK, Google,” your privacy settings dictate what information Google collects. That information may be used to build your online profile, and that profile, of course, becomes highly valuable for services trying to reach you with targeted advertising offers.

4. Google Home actions can be continuously listening, too

One of the most interesting things we learned about Google Home during the hackathon was that individual actions can remain in an active listening state — indefinitely.

For example, one team in our hackathon built an action called “Give Me a Bottle,” which recommends a wine or beer that pairs well with a specific food. A user engages the action by saying, “OK, Google. Give Me a Bottle.” After a welcome message, Give Me a Bottle asks the user a series of questions about the food being paired, and the user responds. Even after this back-and-forth dialogue is complete, the action may remain in an active listening state, if the developer chooses to leave it on. Almost like a traditional computer program, the action remains open, listening for keywords that would prompt a response. It only turns off when the user commands the action to close with a phrase like, “Close Give Me a Bottle.”

Alternatively, developers can choose to set actions to only engage with the keyword, “OK, Google.” And after the initial question and answer, the action would no longer be active until the user says “OK, Google” again.

There are user experience trade-offs for each approach. If developers code the action to remain in a listening state, users may forget to close the application. And nearby conversations that take place later may accidentally trigger a response.

However, if the action has to be repeatedly engaged with “OK, Google,” it makes a longer sequence of dialog much more cumbersome.
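To make the trade-off concrete, here's a minimal sketch of what that choice looks like in a fulfillment webhook. It's written in TypeScript against the 2017-era API.AI webhook format; the expect_user_response field reflects that format, and the close_action intent name is our own placeholder, not an official one.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Build a fulfillment response; `keepListening` controls whether the action
// stays in its active listening state after this turn.
function reply(speech: string, keepListening: boolean) {
  return {
    speech,
    displayText: speech,
    data: { google: { expect_user_response: keepListening } },
  };
}

app.post("/webhook", (req, res) => {
  // API.AI resolves the user's phrase to an intent before calling the webhook.
  const intent = req.body?.result?.metadata?.intentName;

  if (intent === "close_action") {
    // "Close Give Me a Bottle": say goodbye and release the microphone.
    res.json(reply("Enjoy your bottle. Goodbye!", false));
  } else {
    // Any other intent: answer, then keep the conversation open.
    res.json(reply("What dish are you pairing tonight?", true));
  }
});

app.listen(8080);
```

Setting the flag to false is what ends the session; leaving it true keeps the microphone open for the user's next phrase, which is exactly the design decision described above.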

5. Google Home has issues with accents

ArcTouch has a large team in Florianópolis, Brazil. During the hackathon, Google Home had a hard time understanding the accent of our Brazilian team members, despite the fact that they are fluent English speakers. This isn't a new issue: voice apps on phones and tablets have a history of poor performance when it comes to understanding different accents. But this known issue becomes even more challenging with Google Home, Amazon Alexa and other voice-only interfaces, where there is no touchscreen available as a backup. One of our hackathon projects designed for Google Home could help staff organize and manage dish orders in a professional restaurant. Can you think of a place with more ethnic diversity than a restaurant kitchen? We think Google is poised to solve this problem, given its history and expertise with data and machine learning.

6. The backend is pretty much ready for developers to use

This has been a nice surprise for developers getting an early start on Google Home: API.AI, the Google-owned service for building and customizing speech-to-text and other conversational AI systems, is really powerful. All the important information is well-documented, and deploying apps is smooth. The whole action can be built through API.AI, and you don't need to train the agent using complex algorithms (Google has already implemented that for you). You don't even need to configure your own server, as long as your action doesn't make too many requests.
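As a quick illustration of the no-server point, you can exercise an agent directly from any client by POSTing a phrase to API.AI's query endpoint and letting API.AI's hosted responses do the rest. A sketch in TypeScript, assuming the 2017-era REST API; CLIENT_ACCESS_TOKEN and the version parameter are placeholders to adapt to your own agent:

```typescript
// Send one user phrase to an API.AI agent and print what the agent resolved.
async function askAgent(query: string): Promise<void> {
  const res = await fetch("https://api.api.ai/v1/query?v=20150910", {
    method: "POST",
    headers: {
      Authorization: "Bearer CLIENT_ACCESS_TOKEN", // placeholder token
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, lang: "en", sessionId: "test-session-1" }),
  });
  const data = await res.json();
  // The matched intent and the agent's spoken reply, per the v1 response shape.
  console.log(data.result?.metadata?.intentName, data.result?.fulfillment?.speech);
}

askAgent("Give me a bottle for grilled salmon");
```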

7. But you might need to know more than just coding

A big part of building any voice app is training an agent to understand natural language as it applies to specific subjects. For example, how do you get Google Home to understand key terms about topics ranging from food to football? Google has some domains mapped and available for anyone to use, but the selection is fairly limited right now, as you might expect. Eventually, AI-driven bots might be able to go off and educate themselves on different topics. Until then, it won't be enough to know how to develop the bot; you'll need to be an expert on the subject your action addresses, too.
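Much of that subject expertise ends up encoded as entities: the domain terms and synonyms your agent has to recognize. Here's a small sketch for a wine-pairing action, shaped like API.AI's entity export format (an assumption; check the current schema):

```typescript
// Domain vocabulary for a wine-pairing agent. Writing good entries requires
// knowing the subject, not just the platform.
const wineEntity = {
  name: "wine_style",
  entries: [
    { value: "cabernet sauvignon", synonyms: ["cabernet", "cab"] },
    { value: "pinot noir", synonyms: ["pinot", "red burgundy"] },
    { value: "riesling", synonyms: ["rhine wine"] },
  ],
};
```

Choosing the right values and synonyms is pure domain knowledge; the code is the easy part.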

8. Is Google Home better than Amazon’s Alexa?

Maybe. Google Home has the advantage when it comes to user context. The integration with API.AI makes some things easier, for example, leveraging synonyms for keywords, which makes a huge difference in the final user experience. Both Alexa and Google adopt W3C's SSML (Speech Synthesis Markup Language), which is to voice apps what HTML is to the web: its tags make conversations sound more natural. Right now, Alexa seems to work better with different accents, but Google Home's voice sounds more human and less robotic. Google Home also seems more sensitive at picking up voices, hearing from farther away than Amazon Echo. And as a developer, what if you want to support both platforms? Unfortunately, it's not that simple. You have to create a thin layer for each platform and keep all the business logic on a server, as sketched below.
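Here's a minimal sketch of that thin-layer approach in TypeScript: the shared pairing logic is written once, and each platform gets only a small adapter. The response shapes are simplified assumptions rather than the exact Alexa or API.AI schemas, and the SSML shows the kind of tags both platforms understand:

```typescript
// Shared business logic, identical for both assistants.
function pairingFor(food: string): string {
  return food.includes("steak") ? "a cabernet sauvignon" : "a pinot noir";
}

// SSML works on both platforms; the <break> tag adds a natural pause.
function toSsml(text: string): string {
  return `<speak>Let me think.<break time="500ms"/>I suggest ${text}.</speak>`;
}

// Thin adapter for Alexa (simplified response shape).
function alexaResponse(food: string) {
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "SSML", ssml: toSsml(pairingFor(food)) },
      shouldEndSession: true,
    },
  };
}

// Thin adapter for Google Home via API.AI (simplified response shape).
function googleResponse(food: string) {
  return {
    speech: toSsml(pairingFor(food)),
    data: { google: { expect_user_response: false } },
  };
}
```

The payoff of this design is that improving the pairing logic improves both assistants at once; only the wrappers are platform-specific.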

9. The future is smart and you may not need to set your alarm

As we mentioned before, Google Home is today an interface that responds to the user and to other devices; it doesn't make suggestions or proactively engage the user. In the near future, we envision Google Home being able to start and hold complex conversations. Google Home will achieve its purpose once it becomes a “home concierge,” knowing your likes and dislikes and customizing itself based on your behavior and context.

How cool would it be if Google Home, powered by the Google Assistant, could make smart decisions on your behalf? Instead of you setting the alarm for a specific time, for example, Google Home could understand your morning routine and calculate how much sleep you can get based on that day's traffic conditions, waking you just in time so you won't be late for work. That type of smart assistant might be coming sooner than we think.
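As a thought experiment, the reasoning behind such a smart alarm is easy to sketch: work backward from when you need to arrive, subtracting the commute reported by a traffic service and your usual morning routine. Everything here is hypothetical, including the stubbed commuteMinutesNow helper, which a real action would replace with a call to a maps or traffic API:

```typescript
interface MorningPlan {
  arriveBy: Date;         // e.g. first meeting at 9:00
  routineMinutes: number; // shower, coffee, breakfast
}

// Hypothetical helper; stubbed for the sketch.
async function commuteMinutesNow(): Promise<number> {
  return 35;
}

// Wake time = arrival deadline - commute - morning routine.
async function wakeTime(plan: MorningPlan): Promise<Date> {
  const commute = await commuteMinutesNow();
  const bufferMs = (commute + plan.routineMinutes) * 60 * 1000;
  return new Date(plan.arriveBy.getTime() - bufferMs);
}

wakeTime({ arriveBy: new Date("2017-02-10T09:00:00"), routineMinutes: 45 })
  .then((t) => console.log(`Wake me at ${t.toLocaleTimeString()}`));
```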