Google Extends Third-Party Actions to iOS, Android. Here’s How to Build One
One of the loudest cheers from developers at last week’s Google I/O keynote came from the announcement of Kotlin becoming a fully supported language for Android (when we at ArcTouch compared Kotlin vs. Java, we got pretty excited, too). But there was a lot of other news that will have a bigger impact on our industry and our clients. One bit in particular caught my attention: Google Actions can now be developed for iOS and Android.
Just as Amazon has extended the reach of Alexa by licensing it for devices besides Echo, Google’s move gives developers the ability to create voice-based third-party applications for Google Assistant on billions of mobile devices. And as we wrote recently about Alexa, unleashing Google Actions from Google Home means the potential use cases for brands and businesses considering voice apps grow dramatically.
At I/O, Google also launched the Actions Console, its new development and documentation hub for Actions on Google — a much-needed resource that was unavailable to our teams when they built some proof-of-concept apps for Google Home during our hackathon in January. I had previously experimented with developing a skill for Alexa, so I was curious to see how the experience compared.
The Actions on Google landing page provides a high-level overview of what it takes to build a Google Action. It suggests you can build an app in 30 minutes. I, of course, took that as a challenge. So, I began building a Google Action for my Android phone.
Getting started with Google Actions
As with the Getting Started guide for Alexa, Google’s guide involves building a “Facts about Google” app. This exercise is the “Hello World” of voice apps: it has very basic functionality and only a limited amount of static data. Still, this fact-based app gave me a simple way to try things out, and even do some basic customization, so I could at least imagine some element of creativity.
I began following the step-by-step guide, but I soon decided to ditch the 30-minute challenge. In retrospect, if I’d blindly followed each instruction, there’s no doubt I would have completed it with a few minutes to spare. But I wouldn’t have learned much.
Instead, I decided to take each step slowly, to tinker with the logic and see different outcomes as I went, and to better understand the impact of what I was actually doing.
Creating an API.AI Agent
Once an Actions on Google project has been created, and its metadata is established, the first significant task is to create an API.AI agent for it. This is what defines the conversational user flow through the app — i.e. what phrases trigger each route through the app, and how the app should respond to those triggers.
Those triggers are known as intents. For the sample app, they cover things like asking for a fact and choosing a fact category.
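Conceptually, an intent ties example user phrases to an action and its parameters, which the agent then uses to route the conversation. A simplified sketch of that idea (illustrative only; this is not API.AI's actual export schema, and all names here are hypothetical):

```json
{
  "intent": "tell_fact",
  "userSays": [
    "tell me a fact about Google",
    "give me a history fact"
  ],
  "parameters": {
    "fact-category": "@fact-category"
  },
  "fulfillment": "webhook"
}
```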
Defining functionality in an Action
The next significant step is referred to by Google as “deploying fulfillment,” which is a fancy way of saying you’re defining the app’s functionality. This logic is called by the API.AI agent when it is triggered by an intent. It’s very similar to the lambda function that we covered in our popular post about how to build an Alexa Skill.
Node.js is Google’s preferred development language for Actions. Although it’s not a language I have much experience with, it was fairly easy to take the sample app’s code and adapt it to my own set of facts, themes and image URLs.
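The sample’s fulfillment is built on Google’s `actions-on-google` Node.js client library, which handles the request/response plumbing. The core fact-picking logic you customize can be sketched in plain JavaScript like this (the fact text and function names here are my own illustration, not the sample’s):

```javascript
// A hypothetical fact store: categories mapped to static facts.
const FACTS = {
  history: [
    'Google was founded in 1998 by Larry Page and Sergey Brin.',
  ],
  headquarters: [
    "Google's headquarters is nicknamed the Googleplex.",
  ],
};

// Pick a random fact for a category and wrap it in a spoken response.
// In the real app, this string would be passed to the client library's
// response method rather than returned directly.
function tellFact(category) {
  const facts = FACTS[category];
  if (!facts || facts.length === 0) {
    return "Sorry, I don't have any facts about that yet.";
  }
  const fact = facts[Math.floor(Math.random() * facts.length)];
  return `Okay, here is a fact: ${fact}`;
}
```

Swapping in your own facts, themes and image URLs mostly means editing a data structure like `FACTS` rather than touching the conversation plumbing.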
The deployment of the functionality is where things started to diverge from my experiences with Alexa. I had to install a couple of packages on my local computer and perform command-line operations to complete the configuration and upload the functionality. It wasn’t necessarily more complicated than Alexa, just different. I could have used a local deployment mechanism, which would have made it much quicker to deploy and test.
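For reference, the sample hosts its fulfillment on Cloud Functions for Firebase, so the command-line steps look roughly like this (the exact flow depends on your project setup):

```shell
# One-time setup: install the Firebase CLI and sign in
npm install -g firebase-tools
firebase login

# Initialize Cloud Functions in the project directory
firebase init functions

# Upload the fulfillment code
firebase deploy --only functions
```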
Previewing and testing the Action
Once the app is deployed, it’s time to preview and test it. This part is similar to what I experienced with Alexa Skills in that Google also uses a web-based simulator. This allows you to interact with the app either through voice or text input, and to read or hear a response. The simulator includes the request and response, in JSON form, to aid with debugging.
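The response JSON the simulator shows follows API.AI’s webhook format, which includes fields like `speech` (what is read aloud) and `displayText` (what appears on screen). A minimal example of the shape, with made-up content:

```json
{
  "speech": "Okay, here is a fact: Google was founded in 1998.",
  "displayText": "Okay, here is a fact: Google was founded in 1998."
}
```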
This all takes place without any production deployment — but it is still possible (and recommended) to test on an actual device. You can test on Google Home or on a phone, using a test wrapper for the app. I was excited to test it on my Android phone, since building Actions for mobile devices wasn’t possible until last week.
During my testing and investigations, I realized that the API.AI agent isn’t specific to Actions on Google. It’s platform-agnostic, and the Google integrations page shows where else it can be used.
And so the era of Actions anywhere begins
So, I tested my app, it worked, and my mission was complete. I even got the ROI out of this project that I was after. No, my little test app isn’t worth a penny to anyone outside of ArcTouch. But I learned a lot. Most importantly, I verified that it won’t be hard for experienced developers to build Actions on Google.
Of course, just because it’s relatively simple to build a basic Action, that doesn’t mean it will be easy to create a great one. Nor will it be easy to build one that both solves a user problem and offers up business value. That’s where great app strategy and user experience design come into play.
But in the wake of Google I/O, we expect many companies that have been in wait-and-see mode on voice apps will now jump in headfirst. The opportunity to reach billions of mobile users with Actions on Google and Google Assistant makes for a pretty compelling business case.
OK, Google, build me a voice app
If your business is thinking about creating a Google Action or an Alexa Skill, we’re happy to help. Even if you’d simply like feedback on your idea — or want some ideas from us — let’s set up a time to talk!