Our Mission

Helping guide people through the unknown


According to the World Health Organization (WHO), roughly 2.2 billion people worldwide live with some form of visual impairment. We want to empower those individuals through our application, Eye Robot, which provides a quick and easy way for users to navigate indoor environments via real-time object detection and corresponding audio cues.

Our Product

The Eye Robot is a visual impairment assistant that utilizes object detection, distance estimation, and text-to-speech (TTS) to help its users navigate through indoor settings.

Two versions of the application are available for download:
Eye Robot Pro and Eye Robot Lite.

You can click the link below to view a demonstration of both products. A QR code to download the app is provided at the end of the video.

View Demo

Our Team

Mamesa

Mamesa El

Data Scientist

mamesa.el@berkeley.edu

LinkedIn Page

Sam

Sam Gupta

ML Engineer

sambhav.gupta@berkeley.edu

LinkedIn Page

Sneha

Sneha Narain

Systems Engineer

sn3ae@berkeley.edu

LinkedIn Page

Credits

Our project was made possible with the support of the following people and groups.

Our Capstone Professors

Cornelia Ilin and Zona Kostic

Our Peers in the UC Berkeley MIDS Program

Class of 2023

AWS Expert

Robert Wang

Developers behind Ultralytics

GitHub Page

Developers behind the FiftyOne Toolkit

Main Page

Developers behind LiDAR and Text-to-Speech (TTS)

Main Page

Base object detection and LiDAR code reference

GitHub Page

Developers behind the Grayscale Jekyll template

GitHub Page

Contact

If you are interested in learning more about our application or have any general questions, feel free to send an email with your inquiry.

sn3ae@berkeley.edu