What Our Clients Need to Know about I/O and WWDC

Jun 12, 2015

It’s been a whirlwind three weeks for the mobile industry. The two most important developer conferences put on by the two most important companies have come to an end. And if you are responsible for mobile at your company and left trying to make sense of what happened at Google I/O and Apple’s WWDC, there’s a good chance you’re overwhelmed by the mass of available information and analysis.

So, let me try to distill it for you: here are the four most important things we'll be sharing with our clients, and what any business leader thinking about mobile needs to know.

1. Google Now and Siri will enable entirely new app experiences

Google and Apple both announced improvements in artificial intelligence to present more relevant information when mobile device users need it most, based on their context. Through upgrades to Siri and Google Now, this "glanceable" information will come from apps on your device and from the web, based on where you are, what you are doing, what you ask for, or who you are communicating with. The idea is to make devices feel like a personal assistant that really knows us. It's not a new idea; it's just that the artificial intelligence powering these experiences is now good enough to make futuristic visions like Apple's 1987 Knowledge Navigator a reality.


The example Google showed at I/O: a friend sent product manager Aparna Chennapragada a text about the movie Tomorrowland. She held down the home button, launching the new Now on Tap feature, which immediately brought up a link to the YouTube trailer along with icons for the Flixster (showtimes) and IMDb (cast information) apps. Pretty compelling stuff.

This is an opportunity for businesses to define entirely new app experiences. Mobile apps have long been the most effective way to offer a deeply engaging experience. But now, as the next evolution of notifications, apps can expose layers of information that tie into the Google Now and Siri APIs, putting the information within apps into an "on-call" state for when the user needs it most. An app that offers up this glanceable information not only promises a better experience for its users; like traditional notifications, it is also likely to increase engagement with the app.
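On the iOS 9 side, the concrete way to put an app's content into that "on-call" state is the new search APIs around NSUserActivity and Core Spotlight, which feed Spotlight results and Siri suggestions. Here's a minimal sketch under stated assumptions: the recipe app, its view controller, and the activity type string are all hypothetical, invented for illustration.

```swift
import UIKit
import CoreSpotlight
import MobileCoreServices

// Hypothetical view controller in a recipe-browsing app.
class RecipeViewController: UIViewController {

    // Describe the content the user is currently viewing so the
    // system can surface it in on-device search and suggestions.
    func indexCurrentRecipe(title: String, summary: String) {
        let activity = NSUserActivity(activityType: "com.example.recipes.view")
        activity.title = title
        activity.isEligibleForSearch = true        // new in iOS 9
        activity.isEligibleForPublicIndexing = false

        // Optional richer metadata via Core Spotlight attributes.
        let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
        attributes.contentDescription = summary
        activity.contentAttributeSet = attributes

        // Mark this as the current activity; iOS indexes it for search.
        activity.becomeCurrent()
        self.userActivity = activity
    }
}
```

Tapping the search result deep-links the user back into the app at that piece of content, which is exactly the "glanceable, on-call" behavior described above.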

2. Home automation is happening (slowly) in three ways

Home automation is advancing, but if you’re looking for a clear leading standard or platform to get behind, you’ll be sorely disappointed.


We have a little more clarity now on where Google is headed after its I/O unveiling of Brillo, a lightweight Android-based OS for building into IoT devices, and Weave, a communications layer with a common language and semantic definitions for various device types. Apple's HomeKit was announced last year, and while a few HomeKit-compatible products have reached the market, you can't yet go out and buy a complete multi-product HomeKit solution.

From a whole-home point of view, the only market-ready solutions right now come from high-end AV manufacturers and installers, who have been building their own proprietary systems for years. Google, Apple and their ecosystems of supporting device manufacturers may soon see some traction with early adopters willing to cobble together their own mix of devices and systems, but even by next year's WWDC and Google I/O, I wouldn't expect huge movement.

So businesses looking to break into home automation devices and systems should treat a proprietary option as the best market opportunity right now, while keeping HomeKit and Weave/Brillo in their roadmaps for their future promise.

3. We can now make watch apps — for real

Until now, developers couldn’t build native apps for Apple Watch — we could only extend some information to a watch display from a companion app that lived on the iPhone.

At WWDC, Apple released watchOS 2, a major upgrade that allows developers to create native apps that run entirely on the watch. Finally, the era of building real app experiences for smart watches is beginning. Apple’s new SDK gives access to watch hardware including the accelerometer, speakers, digital crown and haptic engine.
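A native watchOS 2 app is built from WatchKit interface controllers that now run on the watch itself rather than on the phone. A minimal sketch, with the controller name, label outlet, and button action all assumed for illustration:

```swift
import WatchKit

// Hypothetical interface controller for a workout app, running
// entirely on the watch under watchOS 2.
class WorkoutInterfaceController: WKInterfaceController {

    @IBOutlet var statusLabel: WKInterfaceLabel!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)
        // No round trip to the iPhone needed to update the UI.
        statusLabel.setText("Ready")
    }

    @IBAction func startTapped() {
        // watchOS 2 exposes the Taptic Engine directly to apps.
        WKInterfaceDevice.current().play(.start)
    }
}
```

That direct hardware access (haptics here, plus the accelerometer, microphone, speaker and Digital Crown via other WatchKit classes) is what separates these native apps from the first generation of phone-hosted extensions.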

Google didn’t focus much on Android Wear at I/O; developers have been able to write native apps for Android watches since the OS was released in 2014. But for most businesses, the market hasn’t been large enough to justify developing for watches. With Apple Watch finally ripe for development, now’s the time to think about new experiences for the wrist, beyond just notifications.

4. Multitasking will force apps to follow UI guidelines — and it divides a user’s attention

Apple ushered in another new era this week when it announced multitasking capabilities for newer iPad models in iOS 9. This includes a Slide Over mode that lets users pull a second app into view over the one they’re using.


Even more dramatic from a design point of view is the new split-screen mode, where multiple apps share the screen, along with a picture-in-picture mode that allows users to view, resize and move a video window while a different app runs full frame underneath.

Sound like a familiar story? It could be because we’ve been through this evolution before — with computers. In 1985, Apple introduced a program called Switcher for the Mac 512K, an early attempt to allow users to gracefully switch between applications. And from there, we now have the ability to have windows of all shapes and sizes open — and several different ways to switch efficiently between them.

Up until now, using mobile apps has been a much more focused experience — where the application takes control of the device and screen. But that’s quickly changing, and you can bet more changes like this are on the way for Android. Which means as app builders, we now need to consider how a user’s attention will be divided.

There is a micro and a macro impact to app development:

  • Micro: For apps to play nicely in this multi-window environment, they need to be designed following Apple’s human interface guidelines so they render properly across Apple’s universe of devices — especially in split-screen mode. If your app already follows the guidelines, enabling multitasking is a fairly simple update. However, many apps don’t meet these guidelines, usually because time and budget pressures drive businesses and developers to a fixed design that works fine for iPhone but won’t scale correctly in other formats. The companies that manage these apps will need to decide whether to do a bigger UI update. Supporting multitasking isn’t a requirement, but as Apple puts it: “From a customer’s perspective, an iOS 9 app that doesn’t adopt Slide Over and Split View feels out of place.”
  • Macro: I’m concerned about split screens and how they fragment a user’s attention — especially if we slide down the slippery slope and after next year’s WWDC we’re talking about how iOS 10 lets iPhones show split screens. But the reality is that, at least for iPad, split screen and multitasking are here. And that means in the early stages of defining an app, we need to think about information hierarchy and UX design that makes sense when a user’s attention is divided.

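On the "micro" point, apps built with Auto Layout and size classes adapt to Slide Over and Split View largely for free; the manual hook for everything else is the size-change callback on UIViewController. A sketch under stated assumptions — the controller, the sidebar-collapsing helper, and the 500-point threshold are all hypothetical:

```swift
import UIKit

// Sketch: reacting to multitasking size changes on iPad.
class ArticleViewController: UIViewController {

    override func viewWillTransition(to size: CGSize,
                                     with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        // In Split View the app may occupy a third, half, or
        // two-thirds of the screen; adapt alongside the animation.
        coordinator.animate(alongsideTransition: { _ in
            let compact = size.width < 500  // threshold is an assumption
            self.adjustLayout(forCompactWidth: compact)
        }, completion: nil)
    }

    private func adjustLayout(forCompactWidth compact: Bool) {
        // Hypothetical: hide secondary panes when space is limited.
    }
}
```

The design takeaway matches the "macro" point above: layout decisions can no longer assume the app owns the whole screen.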
What now?

It probably still feels like a lot to digest — we understand, and our developers, designers and strategists are busy diving into the details of all the new announcements, learning as we go like everyone else. If you’re thinking about what it all means for your next mobile project, or how your current app might need an update, we’re always happy to help.