Voca

UX design   ·   UI design   ·   accessibility   ·   futures

 

HCI Research Project (Fall 2017)
Instructor: Jeffrey Bigham
Team: Kailin Dong, Tab Chao, Marie Shaw, Lucy Yu

·   ·   ·  

"How can we assist the deaf/hard-of-hearing to use voice command devices?"
— 

Today, people are becoming increasingly reliant on technology to make life more convenient, efficient, and enjoyable. Consequently, devices such as Amazon's Echo, Google Home, and Apple's HomePod are proliferating in homes and other shared spaces. We envision that these purely speech-based technologies will only continue to spread, making speech and hearing, the most direct means of communication, all the more necessary. However, because these devices are controlled entirely through speech, they fail to serve the deaf/hard-of-hearing population, who require alternative methods of communication.

 


 

PROBLEM SPACE
— 

Working with Professor Jeffrey Bigham, my team (consisting of designers and engineers) sought to develop accessible technology that could help the deaf/hard-of-hearing population access speech-enabled devices. Together, we came up with a solution in the form of a phone app that translates the user's text into speech, and the device's spoken response into text, bridging the communication gap between the two.
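To make the bridge concrete: modern browsers expose both halves of this translation through the Web Speech API. The sketch below is illustrative rather than our prototype's actual code, and the function names are my own.

```typescript
// Illustrative sketch of Voca's two-way bridge using the browser's Web Speech
// API. Function names are hypothetical, not our prototype's actual code.

// Text-to-speech: speak the user's typed command aloud for the device to hear.
function speakCommand(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 0.9; // slightly slower speech may help device recognition
  window.speechSynthesis.speak(utterance);
}

// Speech-to-text: transcribe the device's spoken reply for the user to read.
function listenForResponse(onTranscript: (text: string) => void): void {
  // SpeechRecognition is still vendor-prefixed in Chromium-based browsers.
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.lang = "en-US";
  recognition.interimResults = false;
  recognition.onresult = (event: any) => {
    onTranscript(event.results[0][0].transcript);
  };
  recognition.start();
}
```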

 


 

The idea is not only to help users communicate with home assistants, but to provide a universal design solution that enables users to interact with any voice-based interface, including future home appliances (microwave, fridge) and digital wearables (smart fabrics) that may eventually replace screen interfaces as they become cheaper to manufacture.

 

 

LITERATURE REVIEW
— 

Before we dove into the creation process, my group decided to take a look at resources on designing for accessibility. We looked at academic research papers and industry guides, including our own Professor Bigham's On How Deaf People Might Use Speech to Control Devices, Microsoft's Inclusive Design toolkit, Nick Babich's Accessible Interface Design, and Alexa's Accessibility Features guide. It was really great to see how many resources are out there, and how many people and companies are emphasizing the importance of incorporating accessibility early into the design process!

On an additional note, I was also taking Professor Bigham's Accessibility class at CMU at the same time, out of personal interest in inclusive design, and sometimes referenced my own notes and projects from that class as well.

 

From Microsoft's inclusive design kit.

·   ·   ·  

 
 

Some insights distilled from our research include:

  • Current speech recognition technology is limited. Because of deaf accents and the wide variety of deaf speech (speech produced by deaf individuals), most current speech recognition technology cannot interpret the speech of deaf and hard-of-hearing people. This makes it hard for them to interact with voice-based interfaces directly.
     
  • Everyone can have a situational disability. Disability can be temporary or circumstantial, and as designers we should learn how people adapt to the world around them. To prevent mismatched human interaction, we need to design a variety of ways for people to participate.
     
  • Inclusive design recognizes exclusion. Even though we are explicitly designing for one accessibility problem, it is important to keep in mind other potential accessibility problems we might be overlooking (such as ensuring that the colour contrast of the interface is high enough for users with low vision). As Microsoft beautifully puts it, "Solve for one, extend to many."

·   ·   ·  

 

 

 

We also brainstormed some initial contexts in which our design could be used:

[Diagram: brainstormed usage contexts]

 

 

USER FLOW
— 

In order to learn more about the nature of voice-based interaction, we needed a subject of study. Our team decided to base our studies on Amazon's Alexa, running on an Echo Dot. As we simulated communication with Alexa by converting text to speech (initially with Google Translate), we identified areas where pain points might occur in the process of sending and retrieving information. This helped us determine some of the different scenarios that may occur (a rough sketch of how an app might tell these apart follows the list):

  • Alexa receives speech from Voca and gives the right response
  • Alexa doesn't pick up the speech from Voca (no response)
  • Alexa does not understand the question ("Sorry, I don't know that.") 
  • Alexa gives the wrong response
  • Alexa does not have the skill ("The skill _____ can help you with that. Would you like to enable it?")
  • Alexa gives an answer but Voca can't pick up the speech
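
As a rough illustration (hypothetical code, not our prototype's), an app like Voca could sort a transcribed reply into these scenarios with simple phrase matching. Notably, the "wrong response" case is the one no program can catch, since only the user knows what answer they expected.

```typescript
// Hypothetical classifier mapping a transcribed Alexa reply onto the scenarios
// above. Phrase matching is illustrative; real replies vary by skill and locale.
type Scenario =
  | "answered"        // may still be the wrong answer; only the user can judge
  | "no-response"     // Alexa never spoke
  | "unintelligible"  // Alexa spoke, but Voca couldn't transcribe the speech
  | "not-understood"  // "Sorry, I don't know that."
  | "missing-skill";  // "The skill ___ can help you with that..."

function classifyReply(transcript: string | null): Scenario {
  if (transcript === null) return "no-response";
  if (transcript.trim() === "") return "unintelligible";
  if (transcript.startsWith("Sorry, I don't know")) return "not-understood";
  if (transcript.includes("Would you like to enable it?")) return "missing-skill";
  return "answered";
}
```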

 

These scenarios are explored in our user flow:

 
[User flow diagram]
 

 

 

SCENARIO WALKTHROUGHS
— 

Below are some of the early screens, showing detailed views of specific scenarios.

 

Speaking in progress:


Listening in progress:


Favourite command / reply command:

 
 
 

UI FEATURES + ASSETS
— 

After playing with Alexa and examining the user flow, we decided on some key features to include in Voca.


 

·   ·   ·  

These features include:

  • Conversion of text and speech: Voca acts as an intermediary between the voice-command device and the user. Voca provides text-to-speech when the user wants to command the device, and speech-to-text output when the user receives information from the device.
     
  • Convenient access to commands: To make interacting with Voca more efficient and frictionless, Voca provides suggested commands as tags and default commands in the text box, and allows users to favourite commands.
     
  • Clear visual feedback: Because users are deaf or hard of hearing, we built in rich visual feedback to help them understand the context. Explicit icons and words are used whenever the user is communicating with Voca and the voice-controlled device.
     
  • Universal design for all voice-controlled devices: Voca allows users to interact with different voice-controlled devices by customizing command templates for each device. Voca can serve as a universal tool for deaf users to interact with any speech-controlled interface.

·   ·   ·  

 

 


 

We also went through several iterations of the visual cues indicating to the user that Voca is speaking. We wanted to ensure that users are able to see their full textual input (to verify that it's correct), and that they can follow the progress status while Voca is speaking. It was important to keep in mind that the user depends entirely on visual cues to track Voca's speaking progress, meaning we had to convey the status as explicitly as possible.
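
Conveniently, standard speech synthesis emits enough events to drive exactly this kind of explicit cue. Below is a minimal sketch using SpeechSynthesisUtterance's onstart, onboundary, and onend events; the element id is made up for illustration.

```typescript
// Sketch: driving an explicit visual progress cue from speech-synthesis events.
// The element id is hypothetical; hook this into whatever view shows the status.
function speakWithProgress(text: string): void {
  const display = document.getElementById("speaking-progress")!;
  const utterance = new SpeechSynthesisUtterance(text);

  utterance.onstart = () => {
    display.textContent = "Voca is speaking…";
  };
  // onboundary fires at each word boundary with the character index reached,
  // so the UI can highlight exactly how much of the input has been spoken.
  utterance.onboundary = (event: SpeechSynthesisEvent) => {
    display.textContent = `Spoken so far: "${text.slice(0, event.charIndex)}"`;
  };
  utterance.onend = () => {
    display.textContent = "Done speaking.";
  };

  window.speechSynthesis.speak(utterance);
}
```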

 


 

 

PROTOTYPING
— 

Thanks to our awesome engineer/developer Marie Shaw, we were able to get a first working prototype online! The code for the preliminary prototype is hosted at https://github.com/mnshaw/voca.

 
 

 

 

VISUAL STYLING
— 

Having consolidated the UX flow, we decided it was time to start giving Voca a visual voice. Here are some of the early logo iterations that we played with. We wanted to convey that this tool is: friendly, accessible, and vocal.


We decided a cleaner, simpler wordmark would be most appropriate, as it promotes universality.


After various subtle changes in form, we decided on a geometric wordmark that hints at the "a" as a speech bubble. It combines both audacity and playfulness.

 

We chose a vibrant blue for the UI styling because it retains high contrast even in greyscale, ensuring that low-vision users also have an easier time using Voca.

Color Safe creates accessible color palettes for your site or app based on Web Content Accessibility Guidelines (WCAG).
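
The check behind tools like Color Safe is simple enough to sketch out: WCAG defines contrast as a ratio of relative luminances, and AA-level body text needs at least 4.5:1. The code below implements that formula; the particular blue is illustrative, not our exact palette value.

```typescript
// Sketch: checking a colour pair against the WCAG contrast threshold.

// Relative luminance of an sRGB colour (channels 0–255), per WCAG 2.x.
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255; // linearise each sRGB channel
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colours: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}

// Illustrative check: white text on a vibrant blue (not our exact palette value).
const ratio = contrastRatio([255, 255, 255], [0, 102, 255]);
console.log(`${ratio.toFixed(2)}:1 -> ${ratio >= 4.5 ? "passes" : "fails"} WCAG AA`);
```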

 
 

 

 

HI-FI MOCKUP
— 

[Hi-fi mockup screens]

 

 

 

AN ALTERNATIVE MENTAL MODEL
— 

After testing our final prototypes with two people who were not familiar with the concept of our project, we noticed that they were confused by the "switch devices" button, because there is no visual indicator that the device has changed. In the speculative future where voice-command interfaces are omnipresent, it might be more interesting to borrow Facebook Messenger's framework and characterize each device (i.e. TV, microwave, refrigerator, etc.) as an individual "contact," saving each dialogue within that device's own chat. This is a new mental model I would like to explore in the future (a quick sketch of the idea follows).
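
In data-model terms, this would mean keeping one persistent message thread per device, exactly like a messaging app's contact list. A hypothetical sketch of what that might look like:

```typescript
// Hypothetical data model for "devices as contacts": every voice-controlled
// device is a contact with its own persistent dialogue thread.
interface DeviceContact {
  id: string;
  name: string; // e.g. "Living-room TV", "Microwave"
  kind: "assistant" | "tv" | "microwave" | "refrigerator" | "other";
}

interface Message {
  from: "user" | "device";
  text: string;
  timestamp: Date;
}

// One thread per device, keyed by contact id, so context is never mixed
// across devices the way it was behind a single "switch devices" button.
const threads = new Map<string, Message[]>();

function send(contact: DeviceContact, text: string): void {
  const thread = threads.get(contact.id) ?? [];
  thread.push({ from: "user", text, timestamp: new Date() });
  threads.set(contact.id, thread);
  // ...then speak `text` aloud toward that device, as in the earlier sketches.
}
```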


 

 

FUTURE DIRECTION
— 

Upon reflecting on what we've built, it might be interesting to broaden our scope to a wider range of day-to-day contexts for the deaf/hard-of-hearing, including communicating with other people and not just devices. Considering more contexts (like the ones we brainstormed during the early research phase) could take this app beyond purely practical functions, allowing it to become a tool for creating emotionally meaningful experiences.

Looking forward, I would love to explore how this app could afford interactions with the user's family and friends.

 
