Sign language is an important mode of communication for deaf and hard-of-hearing people. This project detects static sign language gestures in real time from webcam images and translates them so that non-signers can understand.
1. Create a new folder and extract the provided code files into it. Open the README.txt file and go to the link provided there. Download the Tensorflow folder into the newly created folder on your computer. You are now ready to proceed with the execution.
2. Open the CreateDataset.ipynb file in Jupyter Notebook and run all the code blocks in order. This notebook creates the dataset.
3. We use OpenCV functions to automate the collection of webcam data.
4. Download the LabelImg tool to draw a bounding box around the sign in each image and label it. LabelImg then produces an XML file for each image that records its bounding box and label.
LabelImg Software: https://github.com/tzutalin/labelImg
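LabelImg writes its annotations in the Pascal VOC XML format, which later steps read back to train the detector. As a sketch of what those files contain, here is a small parser using only the standard library (the dictionary layout returned is my own choice):

```python
import xml.etree.ElementTree as ET


def parse_voc_annotation(xml_path):
    """Read one LabelImg (Pascal VOC) XML file and return its bounding boxes."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):  # one <object> element per labeled box
        bb = obj.find("bndbox")
        boxes.append({
            "label": obj.find("name").text,
            "xmin": int(bb.find("xmin").text),
            "ymin": int(bb.find("ymin").text),
            "xmax": int(bb.find("xmax").text),
            "ymax": int(bb.find("ymax").text),
        })
    return boxes
```

Each image gets one XML file; an image can contain several labeled objects, hence the list.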
5. Divide the images into training and test sets in an 8:2 ratio and store them in separate folders.
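The split can be done with a short script like the one below. This is a minimal sketch assuming each image has a matching LabelImg XML file with the same base name; the folder names and the fixed seed are my own choices.

```python
import os
import random
import shutil


def split_dataset(image_dir, train_dir, test_dir, train_ratio=0.8, seed=42):
    """Shuffle image/XML pairs and copy train_ratio of them to train, rest to test."""
    random.seed(seed)  # fixed seed so the split is reproducible
    images = sorted(f for f in os.listdir(image_dir) if f.endswith(".jpg"))
    random.shuffle(images)
    cut = int(len(images) * train_ratio)  # 8:2 split by default
    for dest, subset in ((train_dir, images[:cut]), (test_dir, images[cut:])):
        os.makedirs(dest, exist_ok=True)
        for img in subset:
            shutil.copy(os.path.join(image_dir, img), dest)
            # copy the matching LabelImg annotation alongside the image
            xml_path = os.path.join(image_dir, os.path.splitext(img)[0] + ".xml")
            if os.path.exists(xml_path):
                shutil.copy(xml_path, dest)
    return cut, len(images) - cut
```

Keeping each image with its annotation file is essential, because the training pipeline reads them as pairs.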
6. After completing these steps, open the Detection-SSDMobileNet.ipynb file in Jupyter Notebook.
7. Follow the code and comments carefully to understand how the model is trained and how detection works.
8. By the end, you will have a project that can detect and translate sign language in real time.
Submitted by G V Ganesh Maurya (mau23rya)