Real-time American Sign Language to speech conversion using OpenCV and TensorFlow/Keras for hand gesture recognition. Features custom hand tracking, an image preprocessing pipeline, and gesture classification that translates American Sign Language into text output (speech output is planned). Built with accessibility in mind.
⚠️ Note: This project is currently under development.
- Real-time hand gesture detection and tracking
- Image preprocessing pipeline
- Machine learning-based gesture classification
- Support for basic ASL gestures (letters only); a minimal end-to-end sketch of the pipeline follows this list
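The flow sketched below is: detect the hand with cvzone's `HandDetector`, crop and normalize the region, then classify it with the Keras model via cvzone's `Classifier`. This is a minimal sketch, not the project's exact code; the 20 px crop padding, the 300x300 white canvas, and the Teachable-Machine-style `labels.txt` format are assumptions about the preprocessing.

```python
import cv2
import numpy as np
from cvzone.ClassificationModule import Classifier
from cvzone.HandTrackingModule import HandDetector

cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1)
classifier = Classifier("model/keras_model.h5", "model/labels.txt")

# Assumes Teachable-Machine-style labels.txt lines such as "0 A".
labels = [line.strip().split(" ", 1)[-1] for line in open("model/labels.txt")]

OFFSET, SIZE = 20, 300  # crop padding (px) and canvas size (assumed values)

while True:
    ok, img = cap.read()
    if not ok:
        break
    hands, img = detector.findHands(img)
    if hands:
        x, y, w, h = hands[0]["bbox"]
        crop = img[max(0, y - OFFSET):y + h + OFFSET,
                   max(0, x - OFFSET):x + w + OFFSET]
        if crop.size:
            # Center the crop on a white square so the classifier always
            # sees a fixed-size input regardless of hand size.
            canvas = np.full((SIZE, SIZE, 3), 255, np.uint8)
            scale = SIZE / max(crop.shape[:2])
            resized = cv2.resize(crop, (max(1, int(crop.shape[1] * scale)),
                                        max(1, int(crop.shape[0] * scale))))
            dy = (SIZE - resized.shape[0]) // 2
            dx = (SIZE - resized.shape[1]) // 2
            canvas[dy:dy + resized.shape[0], dx:dx + resized.shape[1]] = resized
            _, index = classifier.getPrediction(canvas, draw=False)
            cv2.putText(img, labels[index], (x, y - 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 0, 255), 2)
    cv2.imshow("ASL Recognition", img)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```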
- Python 3.9+
- OpenCV (Computer Vision)
- TensorFlow/Keras (Machine Learning)
- cvzone (Hand Tracking)
- NumPy (Numerical Processing)
```
project/
│
├── Application.py
├── dataCollection.py
├── data/
├── model/
│   ├── keras_model.h5
│   └── labels.txt
├── README.md
└── requirements.txt
```
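`dataCollection.py` is the dataset-capture side of the project. Below is a minimal sketch of such a script, assuming cropped hand images are saved under per-letter folders; the `data/A` target folder and the keybindings are illustrative assumptions, not the project's actual layout.

```python
import os
import time

import cv2
from cvzone.HandTrackingModule import HandDetector

FOLDER = "data/A"   # hypothetical target; change per letter being collected
OFFSET = 20         # crop padding in pixels (assumed value)

os.makedirs(FOLDER, exist_ok=True)
cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1)

while True:
    ok, img = cap.read()
    if not ok:
        break
    hands, img = detector.findHands(img)
    cv2.imshow("Data Collection", img)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s") and hands:
        # Save the padded hand crop as one training sample.
        x, y, w, h = hands[0]["bbox"]
        crop = img[max(0, y - OFFSET):y + h + OFFSET,
                   max(0, x - OFFSET):x + w + OFFSET]
        if crop.size:
            cv2.imwrite(f"{FOLDER}/{time.time()}.jpg", crop)
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```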
- Text-to-speech functionality (one possible approach is sketched after this list)
- Expanded gesture vocabulary
- Improved model accuracy
- Support for word prediction
- Support for continuous sentence formation
- GUI interface
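Text-to-speech is not implemented yet; one possible approach, shown purely as an assumption, is to wire an offline engine such as pyttsx3 to the classifier's output:

```python
# Hypothetical text-to-speech hookup using pyttsx3 (an offline engine).
# Not part of the current codebase; shown as one possible approach.
import pyttsx3

engine = pyttsx3.init()

def speak(text: str) -> None:
    """Voice a recognized letter or word."""
    engine.say(text)
    engine.runAndWait()

speak("A")  # e.g., voice the classifier's latest prediction
```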
This project is in active development. Current focus areas:
- Improving hand tracking accuracy
- Expanding the gesture recognition dataset
- Enhancing real-time performance
- Python 3.9 or higher
- Webcam or other camera device (a built-in laptop camera is sufficient)
- Key packages: opencv-python, tensorflow, numpy, cvzone, mediapipe
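Typical setup, assuming `requirements.txt` pins the packages above and `Application.py` is the entry point:

```bash
# install dependencies, then start the live recognizer
pip install -r requirements.txt
python Application.py
```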
More documentation will be added as the project develops.