Around 70 million people worldwide use sign language to communicate. Communication becomes an issue, however, when we realize that most of us do not know any kind of sign language. This is a big problem in public spaces, and it affects the daily lives of millions of people.

My ASL Translator takes one of five ASL signs (Hello, Thank You, I Love You, Yes, and No) and translates it into text in real time. The project uses the TensorFlow Object Detection API and TensorFlow's various modules. I used LabelImg to label the images I collected with my webcam, and Jupyter Notebook to run the code that collects pictures, trains the model, and tests it. Finally, Visual Studio was used to open my webcam and detect the five signs I created labels for. The percentage next to the sign name shows how confident the model is in the detected gesture, based on the 15 training images uploaded for each sign.
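To give a feel for that last part, here is a hypothetical sketch of how the on-screen label could be built: the detector returns class IDs and confidence scores for each frame, and the overlay shows the best match as the sign name plus a percentage. The names here (`SIGNS`, `format_detection`, the threshold value) are illustrative assumptions, not the project's actual code.

```python
# Illustrative sketch: turn raw detector output into the "Sign: NN%" label
# shown next to the bounding box. SIGNS and the 0.5 threshold are assumptions.

SIGNS = ["Hello", "Thank You", "I Love You", "Yes", "No"]

def format_detection(class_ids, scores, threshold=0.5):
    """Pick the highest-scoring detection above the threshold and
    format it the way it appears on screen."""
    best = None
    for cid, score in zip(class_ids, scores):
        if score >= threshold and (best is None or score > best[1]):
            best = (cid, score)
    if best is None:
        return "No sign detected"
    cid, score = best
    return f"{SIGNS[cid]}: {score:.0%}"

print(format_detection([0, 3], [0.92, 0.40]))  # Hello: 92%
```

In a real detection loop this function would run once per webcam frame, so a low-confidence frame simply shows nothing rather than a wrong guess.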
Get to know more about how my ASL translator can bridge communication gaps in public spaces!
Get to know how I captured and labeled images, then trained and tested my model.
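The capture step boils down to simple bookkeeping: one folder per sign, 15 numbered images each. Here is a minimal sketch of that layout, assuming the folder structure commonly used with the TensorFlow Object Detection API; the folder and file names are my own illustrative choices, and the actual webcam capture call is left out so the sketch runs anywhere.

```python
# Sketch of the image-collection layout: 5 signs x 15 images each,
# saved as collected_images/<sign>/<sign>.<index>.jpg (assumed naming).
from pathlib import Path

SIGNS = ["hello", "thank_you", "i_love_you", "yes", "no"]
IMAGES_PER_SIGN = 15

def collection_paths(root="collected_images"):
    """Return the file path each captured webcam frame would be saved to."""
    paths = []
    for sign in SIGNS:
        for i in range(IMAGES_PER_SIGN):
            paths.append(Path(root) / sign / f"{sign}.{i}.jpg")
    return paths

paths = collection_paths()
print(len(paths))  # 75 images total: 5 signs x 15 each
```

Each saved image is then opened in LabelImg, where drawing a box around the hand and naming it produces the XML annotation the training step consumes.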
Here are a few images of my labeling and training process, since that was the step I had the most fun with!