50 designers, entrepreneurs, researchers, and makers, selected from a pool of 700+ applicants, came together in San Francisco to examine the intersection between emerging technologies and six core industries: Financial Services, Energy, Healthcare, Retail, Insurance, and Automotive.
Team 6 at IDEO CoLab Makeathon
Our team was assigned "Artificial Intelligence" and "Accessibility" as our themes. To understand the current needs and pain points of our target users, we conducted background research into the visually impaired community. We found that there are 285 million visually impaired people worldwide. Of that population:
Given that 90% of the world's visually impaired population live in developing countries, we felt it was important that our prototype:
With the fully blind making up only 13% of the visually impaired population, we shifted toward designing an experience that would cater not only to those who are completely blind but also to those who are partially sighted. As such, we thought of leveraging sensory experiences to "augment" how visually impaired people access the physical world:
Visually impaired internet users, just like anyone else, benefit tremendously from the information and services the web has to offer. However, as web pages have grown progressively more complex, modern screen readers still fall short in helping users skim websites efficiently by skipping over content they are not interested in. We therefore wanted to find an AI-powered solution that improves the efficiency of screen readers in helping visually impaired users navigate long websites with complicated structures and large amounts of text.
After we defined our problem space, we drew journey maps, which helped us identify the emotional responses and pain points a visually impaired user experiences while navigating websites and physical documents. One particular pain point we recognized was that a visually impaired user is unable to visually scan and skip over irrelevant content. We then used these pain points to further frame our problem using "how might we" (HMW) questions, whittling them down to two that we wanted to explore with our prototype.
Using the HMW questions, we brainstormed various solutions that leveraged the capabilities of modern screen readers and combined them with the ability to skip over content (such as ads or irrelevant information). The AI component comes in as the prototype learns, over time, the user's habits when scanning web pages and documents.
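The learning behavior described above could be as simple as tracking how often a user skips each kind of content and auto-skipping once a pattern is clear. The sketch below is a hypothetical illustration, not our actual prototype code; the category names, the 5-encounter minimum, and the 80% threshold are all assumptions for the example.

```typescript
// Hypothetical sketch: learn which content categories a user habitually
// skips, then auto-skip them once there is enough evidence.
type Category = string;

class SkipPreferenceModel {
  private skips = new Map<Category, number>();
  private views = new Map<Category, number>();

  // Record one encounter with a content block and whether the user skipped it.
  record(category: Category, skipped: boolean): void {
    this.views.set(category, (this.views.get(category) ?? 0) + 1);
    if (skipped) {
      this.skips.set(category, (this.skips.get(category) ?? 0) + 1);
    }
  }

  // Auto-skip after at least 5 encounters where the user skipped >= 80% of the time.
  // Both thresholds are illustrative assumptions.
  shouldAutoSkip(category: Category): boolean {
    const v = this.views.get(category) ?? 0;
    const s = this.skips.get(category) ?? 0;
    return v >= 5 && s / v >= 0.8;
  }
}

const model = new SkipPreferenceModel();
for (let i = 0; i < 5; i++) model.record("ad", true);       // user always skips ads
for (let i = 0; i < 5; i++) model.record("article", false); // user reads articles
console.log(model.shouldAutoSkip("ad"));      // true
console.log(model.shouldAutoSkip("article")); // false
```

A real implementation would also need to decay old observations so the model adapts when a user's habits change.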
For a quick, testable prototype, we repurposed old or recycled touchscreen phones from developed markets and used image- and text-based recognition to provide users with auditory and tactile cues through haptic feedback.
Notes from brainstorming and ideation
As our prototype relies heavily on touch and voice commands, visual interfaces play only a secondary role. After sketching out the interactions and gathering feedback from my teammates, I used Sketch to mock up a conceptual user interface, while my teammates used React Native to mock up the functionality of an actual application. We used text recognition to capture the text on physical documents, and haptic feedback to let users navigate that text through tactile cues.
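The core interaction can be sketched as two steps: segment the recognized text into sections, then fire a haptic pulse as the user's finger crosses a section boundary. This is a hypothetical illustration of the idea, not our team's code; `segmentOcrLines`, `makeTouchNavigator`, the all-caps heading heuristic, and the `vibrate` callback (standing in for a platform haptics API such as React Native's `Vibration` module) are all assumptions for the example.

```typescript
// Hypothetical sketch: turn OCR output into sections, then map a vertical
// touch position to a section, buzzing when the finger enters a new one.
interface Section { title: string; body: string[] }

// Crude heading heuristic (an assumption): an all-caps line starts a section.
function segmentOcrLines(lines: string[]): Section[] {
  const sections: Section[] = [];
  for (const line of lines) {
    const isHeading = line.length > 0 && line === line.toUpperCase() && /[A-Z]/.test(line);
    if (isHeading || sections.length === 0) {
      sections.push({ title: isHeading ? line : "(untitled)", body: [] });
      if (!isHeading) sections[sections.length - 1].body.push(line);
    } else {
      sections[sections.length - 1].body.push(line);
    }
  }
  return sections;
}

// Map a vertical touch position (0..1) to a section; emit a short haptic
// pulse whenever the section under the finger changes.
function makeTouchNavigator(sections: Section[], vibrate: (ms: number) => void) {
  let lastIndex = -1;
  return (position: number): Section => {
    const index = Math.min(sections.length - 1, Math.floor(position * sections.length));
    if (index !== lastIndex) {
      vibrate(50); // short pulse marks a section boundary
      lastIndex = index;
    }
    return sections[index];
  };
}

const sections = segmentOcrLines(["INVOICE", "Total: $40", "TERMS", "Net 30"]);
const onTouch = makeTouchNavigator(sections, ms => console.log(`buzz ${ms}ms`));
console.log(onTouch(0.1).title); // "INVOICE"
console.log(onTouch(0.9).title); // "TERMS"
```

Pairing each boundary pulse with a spoken section title would then let a user "scan" a document by sliding a finger down the screen.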
In a short span of 8 hours, I learned (amongst other things) how to facilitate discussion with developer teammates who bring a vastly different mindset to approaching a problem. Rather than insisting that my user-centric approach works best, it is far more productive to leverage everyone's strengths and find the right moments to bring the discussion back to the user experience.
This makeathon made me realize the value UX designers add in the age of emerging technologies such as AI, blockchain, and AR/VR, where visual interfaces are no longer the focus. As a UX designer, I bring to the table the toolbox and process to help create sustainable and meaningful ecosystems and narratives.