How UX Makes Us More Human: ConveyUX 2017

(This post was originally published on L4Digital.com in March of 2017)

My ConveyUX adventure began as I elbowed my way into a crowded hotel meeting room, listening to Marianne Sweeny present on “The IA of AI,” a deep dive into both the history of AI and its future, rife with suggestions for future reading. The next three days were jam-packed with sessions, networking, and great conversations with designers, researchers, and others affiliated with or interested in UX. Topics leaned heavily toward new(er) technologies and interfaces, including augmented reality (AR), virtual reality (VR), and conversational user interfaces (like Apple’s Siri), but also covered data/AI, healthcare, and tactical explorations of design artifacts and processes.

As I moved from session to session, I noticed certain concepts kept popping up: presenters repeated words like “play” and “context”; there were discussions of the intermingling of the physical and the digital; and everywhere speakers rallied for deeper collaboration between designers and technologists/developers.

UI & Our Future

While listening to a researcher talk about designing conversational UI for Amazon’s Alexa, it struck me — as we’re discovering how to make machines behave in ways that are more human, we’re starting to uncover and explore more deeply for ourselves what it means to BE human.

When talking about artificial intelligence, and what it means even to be intelligent as a human, much less artificially, Marianne Sweeny asked the question: “Is physical embodiment necessary to our experience?” As with any really good question, there’s no one true answer, but I would say, “Yes.”

Sitting at a computer screen, moving only our fingers and our eyes as we type and click, it can be easy to separate the “mind” from the “body” and assume that one has no bearing on the other. However, studies have shown that physical movement actually helps the brain function, hinting at a deeper interconnection between thought and experience. How, then, will we be able to “teach” our algorithms to be humanly intelligent without that type of physical input/stimuli?

Where Digital & Physical Worlds Connect

The most visible interconnection of the digital and physical worlds happens with augmented reality (AR) and virtual reality (VR). Dr. Annemarie Lesage introduced an alternative to these types of spaces: mixed reality (MR). She landed on the term through her experience working at Hybridlab (Université de Montréal) with the Hyve-3D, a mixed-reality design tool that allows designers to collaborate on design concepts using 3D projection and tablets.

MR differs from AR in that the user has direct physical control of the digital portion of the experience (albeit, in this case, through a tablet interface), and it differs from VR in that people using the technology remain aware of their own bodies in the space.

What’s fascinating is that this technology, developed at a university and purchased by corporations, is also used in Montreal for group experiences like concerts and dance parties. This highlights one of the biggest insights from working with MR: immersion in an experience is not about being physically immersed or surrounded, but is rather a consequence of the quality of your engagement with the experience. And the experience is much more engaging when it is connective (social and physical) rather than separate and isolating.

And finally, conversational UI is driving us to think more deeply about how we as humans use and understand language, because the capability to create realistic dialogue is crucial to building a conversational UI that is effective, delightful, and not creepy. Phillip Hunter, UX Design Manager for Alexa Skills at Amazon, actively encouraged designers and developers working on these products to learn playwriting and scriptwriting techniques to hone their ability to make better voice experiences (that roar of approval you heard was all the theatre and film nerds like me who can finally tell our parents, “Look! I’m using my theatre training AND making money at the same time!”).

Building a Better Future Together

Amber Case is a cyborg anthropologist, and also my new BFF. Seriously, I have proof. We first met when I asked her to sign my yearbook (okay, a copy of her newest book, “Calm Technology”), partially because I thought she shared some great insights in her talk “Designing for Privacy in Mobile and Web Apps,” and partially because “Cyborg Anthropologist” ties with “Special Agent in Charge” as the second coolest title ever (the first, obviously, being Supreme Allied Commander).

We’re at a crucial place in time: we’re developing technologies that previously existed only in science fiction, technologies that start to meld what is human with what is machine. And we’re developing them fast, perhaps faster than our limited brains can actually process them. As Case says, “Humans have a metabolism rate for features”; swap out “features” for “anything new” and this still rings true.

The insights and observations shared at conferences like ConveyUX remind us that we can’t let our excitement for the machines blind us to the capabilities and amazing qualities of humans, and that we need all voices at the table to build a future that brings out the best in both.