An interactive desktop application that recognizes American Sign Language (ASL) gestures in real time using computer vision and machine learning.
The app enables users to communicate through ASL by:
- Capturing live video from a webcam
- Recognizing ASL letters or words with a trained neural network (a minimal sketch of the capture-and-inference loop follows this list)
- Displaying recognized text dynamically in a user-friendly GUI
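As a rough illustration of that pipeline, the sketch below grabs frames with OpenCV and classifies each one with a Keras model. The model filename (`asl_model.h5`), the 64x64 RGB input size, and the A-Z label set are assumptions for illustration, not details taken from this project.

```python
# Minimal sketch: webcam capture + model inference.
# Assumptions (not from this repo): the model is saved as "asl_model.h5",
# expects 64x64 RGB input scaled to [0, 1], and outputs one probability
# per letter A-Z.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(ord("A") + i) for i in range(26)]  # assumed letter classes
model = load_model("asl_model.h5")               # assumed model path

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to match the assumed model input shape.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (64, 64)).astype("float32") / 255.0
    probs = model.predict(resized[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    # Overlay the predicted letter on the live frame.
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```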
Key features:

- Real-time ASL recognition via webcam input
- Built with Keras/TensorFlow, using a trained model to interpret hand gestures
- Simple GUI built in Python (Tkinter) to display the video feed and recognized text
- Gesture-to-text mapping for clear on-screen feedback (see the sketch after this list)
- Timestamped logging of input sessions or files
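Here is a minimal sketch of how the mapping and logging features might be wired. The A-Z class mapping and the tab-separated log format are assumptions for illustration; neither detail comes from this project.

```python
# Minimal sketch of the gesture-to-text mapping and timestamped
# session logging. The label set and log format are assumptions,
# not taken from this repo.
from datetime import datetime

# Assumed mapping from model class index to on-screen text; a real
# model might also include classes such as "space" or "del".
GESTURE_TO_TEXT = {i: chr(ord("A") + i) for i in range(26)}

def log_recognition(text: str, path: str = "session.log") -> None:
    """Append a recognized gesture with an ISO-8601 timestamp."""
    stamp = datetime.now().isoformat(timespec="seconds")
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(f"{stamp}\t{text}\n")

# Example: log_recognition(GESTURE_TO_TEXT[0]) appends a line like
# "2024-01-01T12:00:00\tA" to session.log.
```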
Tech stack:

| Component | Tools Used |
|---|---|
| Machine Learning | TensorFlow, Keras |
| GUI Framework | Tkinter (Python) |
| Computer Vision | OpenCV for video capture & processing |
| Data Flow | Webcam input → model inference → GUI display (sketched below) |
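To show how the pieces in the Data Flow row connect, here is a minimal Tkinter loop that polls the webcam, runs inference, and updates the display. Pillow (`ImageTk`) is assumed for converting OpenCV frames into Tk images, and `predict_letter()` is a hypothetical stand-in for the Keras call sketched earlier.

```python
# Minimal sketch of the data flow: webcam input -> model inference
# -> Tkinter display. Pillow (ImageTk) is an assumed dependency.
import tkinter as tk

import cv2
from PIL import Image, ImageTk

def predict_letter(frame):
    """Placeholder for the Keras inference step sketched above."""
    return "A"  # assumption: real code would call model.predict()

root = tk.Tk()
root.title("ASL Recognizer")
video_label = tk.Label(root)            # shows the live video feed
video_label.pack()
text_var = tk.StringVar(value="...")    # shows the recognized text
tk.Label(root, textvariable=text_var, font=("Arial", 24)).pack()

cap = cv2.VideoCapture(0)

def update():
    ok, frame = cap.read()
    if ok:
        text_var.set(predict_letter(frame))
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        photo = ImageTk.PhotoImage(image)
        video_label.configure(image=photo)
        video_label.image = photo  # keep a reference so Tk doesn't GC it
    root.after(33, update)         # roughly 30 frames per second

update()
root.mainloop()
cap.release()
```

Scheduling frames with `root.after` keeps capture and inference on Tk's event loop; a production app might move inference to a worker thread so the UI stays responsive during slow predictions.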