American Sign Language Recognition System

Computer Vision OpenCV Keras cvzone Python

American Sign Language (ASL) is a complex and nuanced language used by the Deaf and Hard of Hearing community. Because automatic sign language recognition systems are not yet widespread, communication barriers persist between the Deaf community and those who do not know ASL. Recent advances in machine learning and computer vision make it possible to build ASL recognition systems that translate sign language into text in real time.

ASL Sign Symbols Chart

How It Works

The system is implemented using OpenCV, a hand tracking module, and a classification module. It captures real-time video using cv2.VideoCapture(), then uses HandDetector from the cvzone library to detect hands in the feed.
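A minimal sketch of that capture-and-detect loop is below. The margin value, window name, and `expand_bbox` helper are illustrative additions, not taken from the project; the `HandDetector` calls follow cvzone's documented API.

```python
def expand_bbox(x, y, w, h, margin, frame_w, frame_h):
    """Grow the detector's bounding box by `margin` px per side, clamped to the frame."""
    x1 = max(0, x - margin)
    y1 = max(0, y - margin)
    x2 = min(frame_w, x + w + margin)
    y2 = min(frame_h, y + h + margin)
    return x1, y1, x2, y2

if __name__ == "__main__":
    import cv2
    from cvzone.HandTrackingModule import HandDetector

    cap = cv2.VideoCapture(0)            # default webcam
    detector = HandDetector(maxHands=1)  # track a single hand
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hands, frame = detector.findHands(frame)  # draws landmarks on the frame
        if hands:
            x, y, w, h = hands[0]["bbox"]
            x1, y1, x2, y2 = expand_bbox(x, y, w, h, 20,
                                         frame.shape[1], frame.shape[0])
            cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 255), 2)
        cv2.imshow("ASL", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Expanding the box slightly before cropping keeps fingertips from being cut off at the crop boundary.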

Once a hand is detected, the system crops the video around the hand region using bounding box coordinates, resizes it to 300×300 pixels, and passes it to a Keras classification model. The model is trained to classify 7 different signs. Predictions are displayed on screen alongside a bounding box drawn around the detected hand.

The system also includes a data collection script (data_collection.py) that lets users capture and save hand sign images for training and testing.
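The collection loop might look roughly like this; the folder layout, key bindings, and `next_filename` helper are illustrative and not taken from data_collection.py.

```python
import time

def next_filename(folder, prefix="Image"):
    """Timestamp-based filename so repeated captures never collide (illustrative)."""
    return f"{folder}/{prefix}_{time.time():.3f}.jpg"

if __name__ == "__main__":
    import cv2
    from cvzone.HandTrackingModule import HandDetector

    folder = "Data/A"  # one folder per sign -- assumed layout
    cap = cv2.VideoCapture(0)
    detector = HandDetector(maxHands=1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hands, frame = detector.findHands(frame)
        cv2.imshow("Data Collection", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s") and hands:  # press s to save the current hand crop
            x, y, w, h = hands[0]["bbox"]
            crop = frame[max(0, y - 20):y + h + 20, max(0, x - 20):x + w + 20]
            cv2.imwrite(next_filename(folder), crop)
        elif key == ord("q"):          # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Saving crops with the same margin and preprocessing used at inference time keeps the training data consistent with what the classifier sees live.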

Demo Video

Future Directions

Summary

The ASL recognition system provides a simple and effective way to recognize hand signs in real-time video feeds. Further improvements could include more advanced deep learning classifiers, better image pre-processing, and handling complex sign gestures or occlusion.

ASL Recognition in action

View on GitHub