Multi-modal design with Google's Daniel Padgett
Podcast | VUX World
Publisher | Kane Simms
Media Type | Audio
Categories Via RSS | Arts, Business, Design, Marketing, Technology
Publication Date | Jun 22, 2020
Episode Duration | 00:50:07

Sponsored by Project Voice Catalyst

Project Voice Catalyst is for companies working with voice and AI, whether heavily involved or just starting out. It uses an extensive network to connect companies working with voice tech and conversational AI to new customers, partners, media, or investors, depending on their needs and business objectives.

It can accelerate your business months ahead of where you would otherwise be.

No matter what industry - publishing, healthcare, automotive, banking, gaming, hospitality - Project Voice Catalyst is helping others and can help you.

Contact Ray Kyle, Score Publishing's Director of Business Development, at Ray@ScorePublishing.us or (781) 929 1098 if you're interested in learning more.

Multi-modal design for Google Assistant

We first spoke about multi-modal design with Jan König of Jovo on one of the very first episodes of the VUX World podcast. Back then, Jan described Jovo's vision for a multi-modal future, where the best interface is the closest one to hand, whether that's your watch, your headphones, your speaker or your phone, and where the experience you have with your assistant depends on the device you're using. Context should be carried across devices and modalities so that your experience remains personalised, yet tuned to each device.

In 2018, this was merely a vision. Google Assistant existed on Android and in a smart speaker, and almost all design was confined to the audible conversation.

Since then, Google Assistant has exploded. It's on over 1 billion devices of all shapes and sizes. Yes, it still runs on Android and on Google's line of Nest smart speakers. But it's also now on iOS, on Nest Hub smart displays, car head units, headphones, smart home devices, watches and TVs, all in over 30 languages. And it's expanding into new environments, with new languages, seemingly every couple of months.

Jan's vision has been brought to life by Google.

How, then, does Google make sure that the experience of using Google Assistant is consistent across device types? How does a screen change the dynamics of the interaction? How does the context of someone being outside wearing headphones impact design choices? And how should the experience differ and grow over time?

Then there's the fact that Google doesn't control where Google Assistant lives. Any manufacturer can put Google Assistant into any device, potentially creating new contextual environments and new multi-modal dynamics. How do you manage that?

Daniel Padgett, Head of Conversation Design at Google, joins us on the show this week to explain.

Links

Conversation design guidance from Google

Google's design principles at design.google

Books

Wired for Speech by Clifford Nass and Scott Brave

The Man Who Lied to His Laptop by Clifford Nass

Designing Voice User Interfaces by Cathy Pearl

Voice User Interface Design by Michael Cohen, James Giangola and Jennifer Balogh


