OMNI - Improving Communications for the Hearing Impaired


OMNI is a digital product and wearable that improves communication for the hearing impaired in the workplace. Using AI, OMNI creates custom voices influenced by biofeedback and uses machine learning to translate sign language. OMNI syncs with AR glasses to track gestures, display text and provide environmental feedback.




Digital product, UI/UX, brand identity and concept glasses



Research, discovery and design thinking drive the concepts, responding to the challenge, the user and their pain points to arrive at a solution.

Research included identifying the challenges users face and understanding current and emerging technologies.


Identify unique pain points for hearing impaired workers.

Evaluate existing solutions and devices, including voice creation with AI, machine learning for sound classification, and available technologies to track and translate sign language.

Consider the parameters that define a voice, conversation types, environmental sounds and coworker interactions.

Product Development

Using data and resources from the discovery phase, the user experience was explored, defined and developed. Developing user personas, feature hierarchies, process flows, a site map and wireframes was critical to OMNI.


Create a custom voice just for you

Personality parameters

Voice samples

Community feedback

Haptic feedback, color and visual representation to sample voices


Conversations without a hitch

Track and translate sign language to text or voice

Display text with AR glasses

Follow along in real time with group conversations
