10 Lessons We Learned from Our 2017 Hackathon

From voice assistants and chatbots to wearables and the IoT, the ArcTouch design and development team shares lessons from its two-day 2017 hackathon.

5 min. read - January 18, 2017

By Troy Petersen

One goal for any successful hackathon is to learn. And that was our mission last week, as our entire team assembled in our San Francisco and Florianópolis, Brazil offices for our third annual hackathon. And learn we did. Sure, we consumed gallons of coffee, ate from a seemingly endless supply of food provided by local eateries, and enjoyed some adult beverages along the way. But we unequivocally gorged ourselves on technology and geeked out on solving user and business problems.

Unlike past years, this year’s ArcTouch hackathon had a theme and clear parameters. Our 11 project teams were challenged to build digital products around the theme of food — for emerging platforms such as home assistants, AR, wearables, and other IoT devices, beyond traditional “mobile apps.”

So, we hacked. We built. We solved. And along the way, here are 10 of the many lessons we learned (or were reminded of):

1. Building cross-platform voice-based apps is a familiar challenge

A few of our teams focused on the emerging voice assistant opportunity, and for good reason. Hands-free voice interaction can solve real problems for certain types of apps, like when you’re preparing food in the kitchen and interacting with a touch screen doesn’t work because your hands are covered in the ingredients for your next meal. But when it comes to building cross-platform apps for the two major home assistant platforms, Amazon’s Alexa and Google Home, it’s a familiar conundrum. On the surface, the platforms seem similar. But the deeper you get into the code, the more the applications diverge, much like the iOS and Android dilemma on mobile. It points to an opportunity for a third party to build a bridge between the two platforms, much as Xamarin did for mobile apps.
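To make the divergence concrete, here is a minimal sketch (an illustration, not our hackathon code) of the same hypothetical “suggest a recipe” intent answered for both platforms. The envelope shapes follow the Alexa custom skill JSON response and the API.AI (now Dialogflow) v1 webhook format as they existed around early 2017; field names and SDKs have evolved since, so treat this as an illustration of the split rather than a reference implementation.

```typescript
// Shared "business logic" -- the only part that is truly cross-platform.
function suggestRecipe(ingredient: string): string {
  return `With ${ingredient}, you could make a simple stir-fry tonight.`;
}

// Alexa: the skill receives an IntentRequest and must answer with a
// version / response / outputSpeech envelope.
function handleAlexaRequest(event: any) {
  const ingredient =
    event.request?.intent?.slots?.Ingredient?.value ?? "whatever you have";
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text: suggestRecipe(ingredient) },
      shouldEndSession: true,
    },
  };
}

// Google Home (via an API.AI webhook): same idea, completely different envelope.
function handleApiAiRequest(body: any) {
  const ingredient = body.result?.parameters?.ingredient ?? "whatever you have";
  const text = suggestRecipe(ingredient);
  return { speech: text, displayText: text };
}
```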

2. Creating an integrated experience across text and voice-based platforms requires duplicate effort

Creating cross-platform apps for different voice assistants — e.g. Google Home and Alexa — is challenging. Building an integrated experience that includes voice and text-based applications in a two-day hackathon is nearly impossible. As one of our teams found out, extending a voice-based app from Alexa to a text-based chatbot on Facebook Messenger basically meant building two entirely different apps. There was very little code or design that could be shared — though there are services like Amazon Lex, currently in beta, that may help developers extend their apps to both voice and text platforms in the future. 
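A hedged sketch of what “two entirely different apps” looks like in practice: extending the same hypothetical recipe helper to Facebook Messenger. Only the one-line reply is shared; the webhook plumbing, the natural-language parsing that the voice platform gave us for free, and the session handling all have to be rebuilt. The Graph API version, token handling, and regex below are placeholders, not a recommendation.

```typescript
// The only reusable piece: the reply text itself.
const suggestRecipe = (ingredient: string) =>
  `With ${ingredient}, you could make a simple stir-fry tonight.`;

// Messenger hands you raw text, not parsed intents, so you need your own NLP
// (or a service like wit.ai or Amazon Lex) just to recover the "ingredient"
// slot. This regex stands in for that step and is purely illustrative.
function extractIngredient(text: string): string {
  const match = text.match(/cook (?:with )?([a-z]+)/i);
  return match ? match[1] : "whatever you have";
}

// Reply via the Messenger Send API (Graph API; v2.6 was current in early 2017).
async function handleMessengerMessage(senderId: string, text: string) {
  const reply = suggestRecipe(extractIngredient(text));
  await fetch(
    `https://graph.facebook.com/v2.6/me/messages?access_token=${process.env.PAGE_ACCESS_TOKEN}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        recipient: { id: senderId },
        message: { text: reply },
      }),
    }
  );
}
```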

3. Designing voice/text interfaces is more complex than designing graphical interfaces

There’s a perception that chatbots and voice assistants don’t need design, that they are just code and a lot of scripted dialog. This couldn’t be further from the truth. True, there are no screens to design visually. But when it comes to mapping the user experience of a conversational app, the number of paths a conversation can take is daunting. Think of it this way: there are only so many touch-based interactions you can have with your smartphone, but there’s almost no limit to what you can say to a voice assistant. It may well take more skill to carefully design and manage an engaging conversational flow than a graphical user interface; a rough sketch of why follows.
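This is a minimal, purely illustrative sketch of why conversational flows balloon. The state names and prompts are hypothetical, not from any particular SDK; the point is that every state needs branches for yes, no, a new answer, “repeat that,” “help,” and plain non-sequiturs, and that is before brand voice and error recovery come into play.

```typescript
// Hypothetical three-state flow for a "what can I cook?" conversation.
type DialogState = "askIngredient" | "confirmRecipe" | "done";

interface Turn {
  prompt: string;   // what the assistant says when entering the state
  reprompt: string; // what it says when the user goes silent or off-script
}

const flow: Record<DialogState, Turn> = {
  askIngredient: {
    prompt: "What ingredient do you have on hand?",
    reprompt: "You can name any ingredient, like chicken or zucchini.",
  },
  confirmRecipe: {
    prompt: "I found a stir-fry recipe. Want to hear the steps?",
    reprompt: "Should I read the steps, or suggest something else?",
  },
  done: { prompt: "Enjoy your meal!", reprompt: "" },
};

// Even this toy transition function has to decide what "anything else" means
// in every state; multiply that by a realistic number of states and intents
// and the design surface grows very quickly.
function nextState(state: DialogState, utterance: string): DialogState {
  if (state === "askIngredient" && utterance.trim().length > 0) {
    return "confirmRecipe";
  }
  if (state === "confirmRecipe" && /\b(yes|sure|ok)\b/i.test(utterance)) {
    return "done";
  }
  return state; // fall back to a reprompt of the current state
}
```

To that end…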

4. Understanding voice design will be an increasingly important skill for developers

The title of “voice user interface designer” may soon start appearing atop job descriptions and LinkedIn profiles as building applications for voice-driven interfaces becomes increasingly popular. Amazon, as you might expect, is leading the charge here: it has published a “Voice Design Best Practices” guide for developers of voice-based conversational apps. Conforming to a platform’s best practices will be important, of course, but it’s equally important for companies building Alexa skills and Google Home apps to make sure their brand voice comes through, too. One of our hackathon teams attempted to build a product in which Alexa was more casual with her language, to make her feel less stodgy and more human. The results were mixed. Moving forward, negotiating the intersection between brand voice and platform best practices will be a valuable skill.

5. Lack of notifications on Alexa and Google Home is limiting

Imagine having a relationship with someone who only talks to you when you talk to them first. That’s not much of a relationship. Alexa and Google Home, at this point, can’t proactively send reminders or notifications, or otherwise start a conversation. Mobile phones and tablets give developers the ability to send visual and audible notifications (a.k.a. push). Developers, of course, often overuse notifications on phones, much to the chagrin of their users, and there’s even more potential for misuse of voice notifications in the home. But to become truly useful assistants, platforms like Alexa and Google Home will eventually need to be able to initiate a conversation. Think: “I noticed it’s 80 degrees in your upstairs bedroom. Would you like me to turn on the air conditioner?”

6. Google Home might become a great platform … but it’s early

Google Home just launched in November, so the hackathon was a good chance for some of our teams to flex their dev muscles on a new platform. However, Google is taking a very measured approach to building its developer ecosystem, and there is very little documentation available (though Google has promised it’s coming soon). Despite the lack of information, our teams were up to the challenge, and we managed to learn a lot about Google Home. Stay tuned for more on that soon.

7. Even in a hackathon, starting with a user problem is a must

As I mentioned earlier, the mission of our hackathon is to learn, and specifically to learn about new technologies we can use for our clients’ projects. Even though technology headlined our hackathon, addressing user problems was at the core of the products we built. More so than in our past two hackathons, our product teams delivered focused products that we think have real potential. And our teams, made up of developers, designers and business people, really rallied around those problems. When you have two days to build something, that focus is what keeps you from getting distracted.

8. Chatbots are only as smart as their back-end

The potential for truly smart chatbots is enormous — but these digital denizens need a brain, and more specifically a memory. One of the limitations of current chatbot platforms is that a bot’s memory is limited to what’s in the chat thread. But what if a bot could live in multiple places and collect information across threads? The potential becomes so much greater. That data — the brain/memory — has to live somewhere. This also reminded us that user context — all the information about us that might help make for a smarter assistant — lives outside the application. There’s no doubt that tomorrow’s chatbots must break through the walls of a single application to deliver the kinds of intelligent experiences we’re expecting. This challenge isn’t exclusive to bots — many phone apps run in isolation, unable to communicate with other apps or take advantage of system-level user information that could make these experiences smarter.
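One way to picture that shared brain, sketched under our own assumptions rather than any specific platform’s API: per-channel user IDs (an Alexa userId, a Messenger PSID, and so on) map to a single profile in your own back-end, so context learned in one thread is available in the next. A real version would sit behind account linking and a database; the in-memory maps and names below are hypothetical.

```typescript
// What the bot remembers about a person, independent of any one chat thread.
interface UserProfile {
  dietaryRestrictions: string[];
  recentIngredients: string[];
}

// "alexa:abc123" or "messenger:456" -> canonical user ID (via account linking).
const channelIdToCanonicalId = new Map<string, string>();
// Canonical user ID -> the shared profile.
const profilesByCanonicalId = new Map<string, UserProfile>();

function rememberIngredient(channel: string, channelUserId: string, ingredient: string): void {
  const canonicalId = channelIdToCanonicalId.get(`${channel}:${channelUserId}`);
  if (!canonicalId) return; // user hasn't linked this channel to an account yet
  const profile = profilesByCanonicalId.get(canonicalId) ?? {
    dietaryRestrictions: [],
    recentIngredients: [],
  };
  profile.recentIngredients.push(ingredient);
  profilesByCanonicalId.set(canonicalId, profile);
}

// Any channel can now ask the same question and get the same answer.
function lastIngredient(channel: string, channelUserId: string): string | undefined {
  const canonicalId = channelIdToCanonicalId.get(`${channel}:${channelUserId}`);
  const profile = canonicalId ? profilesByCanonicalId.get(canonicalId) : undefined;
  return profile?.recentIngredients.at(-1);
}
```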

9. Cucumbers and zucchini squash kind of look the same

One of our teams built an augmented reality app that allows you to scan different ingredients and then suggests meals you could cook based on what you have on hand. Part of the technical solution required using a real-time image recognition API, and we tried Google Cloud Vision, IBM Watson and Amazon Rekognition, among others. For the most part, they worked great. That is, until we scanned a cucumber and the app thought it was zucchini squash.
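For context, the kind of call involved looks roughly like the sketch below, which uses the Google Cloud Vision REST API’s LABEL_DETECTION feature (the endpoint and field names follow Google’s public documentation; the API key, confidence threshold, and error handling are placeholders of ours). The practical lesson is in the last few lines: when the top two labels are a near-tie, as cucumber and zucchini often are, it’s better to ask the user than to guess.

```typescript
// Ask Cloud Vision for the most likely labels in a photo of an ingredient.
async function labelIngredient(imageBase64: string): Promise<string | null> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${process.env.VISION_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [
          {
            image: { content: imageBase64 },
            features: [{ type: "LABEL_DETECTION", maxResults: 5 }],
          },
        ],
      }),
    }
  );
  const data = await res.json();
  const labels: { description: string; score: number }[] =
    data.responses?.[0]?.labelAnnotations ?? [];

  // Only trust a label if it clearly beats the runner-up; otherwise bail out
  // and let the app ask the user ("Is this a cucumber or a zucchini?").
  const [best, second] = labels;
  if (!best || (second && best.score - second.score < 0.1)) return null;
  return best.description;
}
```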

10. Don’t reinvent the wheel

Two days goes fast. As much as our engineers love to write original code, what makes them great at their jobs is that they know which tools can help them get things done fast under pressure. Specifically, using existing platforms, back-ends and APIs is a great path to building a proof of concept in two days. Just one example: one team found a video platform called GoAnimate that made it easy to design a fully animated tutorial to help kids learn how to cook eggs — in two days!
