Our project is based on the following research paper: https://www.irjet.net/archives/V7/i3/IRJET-V7I3418.pdf
The motivation behind this project is to provide a platform that lets the deaf community converse with people who do not know sign language. Our sign language detection system identifies hand movements as English words corresponding to ASL (American Sign Language) and ISL (Indian Sign Language). Since we are dealing with a relatively unique dataset, part of our project involved creating our own data, followed by preprocessing and feature extraction. We then applied the machine learning algorithms used in the chosen research paper.
Using OpenCV, we first set up the frame to capture images and video for real-time sign language detection.
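A minimal sketch of that capture setup, assuming the default webcam; the region-of-interest box coordinates below are illustrative placeholders, not values from the paper:

```python
import cv2

cap = cv2.VideoCapture(0)          # open the default webcam
while True:
    ok, frame = cap.read()         # grab one frame of the live video
    if not ok:
        break
    # Draw a region of interest where the signer places their hand
    # (the (100, 100)-(400, 400) box is an assumed placeholder).
    cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 2)
    roi = frame[100:400, 100:400]  # crop the hand region for later processing
    cv2.imshow("Sign language detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```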
For classification, we first need to segment the skin regions of the image and treat the remaining pixels as noise. We trained on the Skin Segmentation dataset from the University of California, Irvine (UCI) Machine Learning Repository.
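A sketch of training a per-pixel skin classifier on that dataset. It assumes the UCI file `Skin_NonSkin.txt` (whitespace-separated B, G, R, label columns, with 1 = skin and 2 = non-skin) is in the working directory; the random forest is an illustrative choice, not necessarily the model from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = np.loadtxt("Skin_NonSkin.txt")   # columns: B, G, R, label (1=skin, 2=non-skin)
X, y = data[:, :3], data[:, 3]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=50, random_state=42)
clf.fit(X_train, y_train)               # learn skin vs. non-skin from raw BGR values
print("pixel-level accuracy:", clf.score(X_test, y_test))
```

Once trained, the model can be applied to every pixel of the cropped hand region to build a binary skin mask, with the non-skin pixels discarded as noise.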
A simple way to perform feature extraction is to use SIFT (Scale-Invariant Feature Transform) features, since SIFT registers key points automatically rather than requiring features to be found manually.
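A sketch of the SIFT step; `"hand_roi.png"` is a placeholder for a frame crop saved from the capture loop above. Because the number of keypoints varies per image, the descriptors are typically aggregated (e.g., via a bag of visual words) into a fixed-length vector before classification:

```python
import cv2

img = cv2.imread("hand_roi.png")               # placeholder: a saved hand crop
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # SIFT operates on grayscale
sift = cv2.SIFT_create()                       # available in opencv-python >= 4.4
keypoints, descriptors = sift.detectAndCompute(gray, None)
# descriptors is an (n_keypoints, 128) float array that feeds the classifiers below
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("hand_roi_keypoints.png", vis)     # visualize the detected keypoints
```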
1. SVM: We use an SVM classifier (SVC), which can perform both linear and non-linear classification via kernels; here we use a Gaussian (RBF) kernel, holding out 20% of the dataset as the test set. (A sketch comparing all four classifiers follows this list.)
   The classification report shows an accuracy of ~93%, with a macro average of precision 92%, recall 91%, F1-score 93% (support 110).
2. KNN: accuracy of 90.4%
3. Logistic Regression: accuracy of 97.4%
4. Decision Trees: accuracy of 77%
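The comparison sketch referenced in item 1, using the same 80/20 split for all four classifiers. `X` and `y` stand in for the real feature matrix and labels; synthetic placeholder data is generated here only so the snippet runs end to end, and the model hyperparameters are scikit-learn defaults rather than the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Placeholder data standing in for the real SIFT-based feature vectors.
X, y = make_classification(n_samples=550, n_features=20, n_classes=4,
                           n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)   # 20% held out as the test set

models = {
    "SVM (Gaussian/RBF kernel)": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"=== {name} ===")
    print(classification_report(y_test, model.predict(X_test)))
```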