IDEO CoLab Makeathon 2017

Exploring Artificial Intelligence for Accessibility

Emerging Tech, physical & mobile prototyping

50 designers, entrepreneurs, researchers, and makers, selected from a pool of 700+ applicants, came together in San Francisco to explore the intersection of emerging technologies and six core industries: Financial Services, Energy, Healthcare, Retail, Insurance, and Automotive.

IDEO CoLab Makeathon Team photo

Team 6 at IDEO CoLab Makeathon

Discovery & Ideation

Understanding the User

Our team was assigned "Artificial Intelligence" and "Accessibility" as our themes. To understand the current needs and pain points of our target users, we conducted background research on visual impairment. We found that there are 285 million visually impaired people worldwide. Of that population:

  • 39 million are fully blind
  • 246 million have low vision
  • About 90% of the world's visually impaired live in developing countries.

Given that 90% of the world's visually impaired population live in developing countries, we felt it was important that our prototype:

  • Is financially accessible to less-wealthy demographics
  • Serves the needs of users in markets where internet access may not be consistent

Since the fully blind make up only about 13% of the visually impaired population, we shifted to designing an experience that caters not only to those who are completely blind but also to those who are partially sighted. As such, we considered leveraging sensory experiences to "augment" how visually impaired people access the physical world:

  • Tactile: Touch-based cues
  • Auditory: Image- and text-to-speech output and other audio cues
  • Visual: Basic, minimal visual object cues for users with partial vision

Defining the Problem

Visually impaired internet users, just like anyone else, benefit tremendously from the information and services the web has to offer. However, as web pages have grown more complex, modern screen readers still fall short in helping users skim websites efficiently by skipping over content they are not interested in. We therefore wanted to find an AI-powered solution that improves the efficiency with which screen readers help visually impaired users navigate long, text-heavy websites with complicated structures.

After we defined our problem space, we drew journey maps, which helped us identify the emotional responses and pain points a visually impaired user experiences while navigating websites and physical documents. One particular pain point we recognized was that a visually impaired user cannot visually scan a page and skip over irrelevant content. We then used these pain points to further frame our problem with “how might we” (HMW) questions, whittling them down to the two we wanted to explore with our prototype.

  • How might we let visually impaired users navigate websites in a non-linear fashion?
  • How might we help visually impaired users scan and navigate physical documents more efficiently?

Brainstorming & Ideation

Using the HMW questions, we brainstormed solutions that leveraged the capabilities of modern screen readers and combined them with the ability to skip over content such as ads or irrelevant information. The AI component comes in as the prototype learns the user's habits for scanning web pages and documents over time.
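The "learning" idea above can be sketched very simply. This is an illustrative model, not the team's actual implementation: all names here are hypothetical, and the learner is just frequency counting — the prototype records which content categories the user skips and, once it is confident enough, auto-skips them on future pages.

```typescript
// Hypothetical content categories a screen reader might label blocks with.
type Category = "ad" | "navigation" | "heading" | "body" | "footer";

class SkipPreferences {
  private skips = new Map<Category, number>();
  private reads = new Map<Category, number>();

  // Record one user action on a content block of the given category.
  record(category: Category, skipped: boolean): void {
    const counts = skipped ? this.skips : this.reads;
    counts.set(category, (counts.get(category) ?? 0) + 1);
  }

  // Estimated probability that the user will skip this category.
  skipRate(category: Category): number {
    const s = this.skips.get(category) ?? 0;
    const r = this.reads.get(category) ?? 0;
    return s + r === 0 ? 0 : s / (s + r);
  }

  // Auto-skip a category once the observed skip rate passes a threshold.
  shouldSkip(category: Category, threshold = 0.8): boolean {
    return this.skipRate(category) >= threshold;
  }
}
```

For example, after a user has skipped ads four times out of five encounters, `shouldSkip("ad")` becomes true at the default threshold, and the reader can jump past ad blocks automatically.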

For a quick, testable prototype, we proposed repurposing old or recycled touchscreen phones from developed markets and using image and text recognition to give users auditory cues and tactile cues via haptic feedback.

Notes from brainstorming and ideation

Digital & Physical Prototyping

As our prototype relies heavily on touch and voice commands, visual interfaces play only a secondary role. After sketching out the interactions and gathering feedback from my teammates, I used Sketch to mock up a conceptual user interface, while my teammates used React Native to mock up the functionality of an actual application. We used text recognition to capture the text on physical documents and haptic feedback to let users navigate them through tactile cues.
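The core interaction described above can be sketched in a few lines. This is a simplified illustration under assumed names and data shapes, not the team's React Native code: after OCR returns text blocks with bounding boxes, the app maps a touch position to the block under the finger so it can read that block aloud, and fires a haptic pulse whenever the finger crosses a block boundary.

```typescript
// A recognized text block with a normalized bounding box (0..1 in both axes).
interface TextBlock {
  text: string;
  x: number;
  y: number;
  width: number;
  height: number;
}

// Find the recognized block under a normalized touch point, if any.
function blockAt(blocks: TextBlock[], tx: number, ty: number): TextBlock | null {
  for (const b of blocks) {
    if (tx >= b.x && tx <= b.x + b.width && ty >= b.y && ty <= b.y + b.height) {
      return b;
    }
  }
  return null;
}

// A haptic pulse fires when the finger moves into a different block
// (or off the text entirely) since the previous touch event.
function crossedBoundary(prev: TextBlock | null, next: TextBlock | null): boolean {
  return prev !== next;
}
```

In the prototype flow, `blockAt` would feed a text-to-speech call and `crossedBoundary` would trigger the platform's vibration API, giving the tactile "edges" that let a user feel the layout of a document.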

Screen prototypes


In a short span of 8 hours, I learned (among other things) how to facilitate discussion with teammates who are developers and bring a vastly different mindset to approaching a problem. Rather than insisting that my user-centric approach works best, it is much more productive to leverage everyone's strengths and find the right moments to bring the discussion back to the user experience.

This makeathon made me realize the value UX designers add in the age of emerging technologies such as AI, blockchain, and AR/VR, where visual interfaces are no longer the focus. As a UX designer, I bring to the table the toolbox and process to help create sustainable, meaningful ecosystems and narratives.