c#, image-processing, gesture-recognition, feature-extraction, human-computer-interface

Hand gesture recognition for sign language using SVM (Support Vector Machine)


I am a student assigned to a project on sign language interpretation. I have completed all the segmentation and morphological operations, so now it is time to classify the gestures. I have gone through different journals, but I am unsure which features would suit my classification best. I have chosen C# as the programming language and an SVM classifier for classification. Please list some possible features for me, if possible with complete, well-documented mathematics.

Features I have found so far: shape descriptors such as aspect ratio, circularity, and spreadness; Hu invariants and other moment features.
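For concreteness, below is a minimal C# sketch of those shape descriptors computed from a binary hand mask (`true` = hand pixel). The `bool[,]` representation is an assumption about how the segmented image is stored, and since "spreadness" has no single standard definition, the sketch uses the extent (foreground area divided by bounding-box area) as one possible stand-in.

```csharp
using System;

// Minimal sketch: basic shape descriptors from a binary hand mask.
// Assumes the mask contains a single segmented hand blob.
static class ShapeDescriptors
{
    public static (double AspectRatio, double Circularity, double Extent) Compute(bool[,] mask)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        int minR = h, maxR = -1, minC = w, maxC = -1;
        double area = 0, perimeter = 0;

        for (int r = 0; r < h; r++)
        for (int c = 0; c < w; c++)
        {
            if (!mask[r, c]) continue;
            area++;
            if (r < minR) minR = r; if (r > maxR) maxR = r;
            if (c < minC) minC = c; if (c > maxC) maxC = c;

            // Crude perimeter estimate: foreground pixels that touch the
            // background (or the image border) in 4-connectivity.
            bool boundary =
                r == 0 || r == h - 1 || c == 0 || c == w - 1 ||
                !mask[r - 1, c] || !mask[r + 1, c] ||
                !mask[r, c - 1] || !mask[r, c + 1];
            if (boundary) perimeter++;
        }

        if (area == 0) return (0, 0, 0);

        double boxW = maxC - minC + 1, boxH = maxR - minR + 1;
        double aspectRatio = boxW / boxH;                                   // bounding-box width / height
        double circularity = 4 * Math.PI * area / (perimeter * perimeter);  // 1.0 for a perfect disc
        double extent = area / (boxW * boxH);                               // one possible "spreadness" proxy

        return (aspectRatio, circularity, extent);
    }
}
```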

[Images: hand-segmented image; final edge-detected image]

I have recently found that resizing and normalization of the image are done before feature extraction. The suggested procedure is:

  • Resize to a fixed resolution, say 100*100.
  • Vertically align the image to its first principal component (see the sketch after this list).
  • Lastly, reconstruct the bounding box that fits the image.
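A minimal sketch of the alignment step, assuming the hand is available as a `bool[,]` mask: the first principal component of the foreground pixel coordinates (the hand's major axis) can be obtained from the second-order central moments, which also answers what PCA "physically" gives you here, namely the direction of greatest spread of the hand pixels. The actual rotation/resampling is left to whatever imaging library you use (e.g. AForge.NET or Emgu CV), since their APIs differ.

```csharp
using System;

// Minimal sketch: orientation of the first principal component of a binary
// mask via image moments, and the rotation that makes that axis vertical.
static class PrincipalAxis
{
    // Returns the major-axis angle (radians, CCW from the +x axis) and the
    // rotation to apply so that the axis becomes vertical.
    public static (double AxisAngle, double RotationToVertical) Estimate(bool[,] mask)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        double m00 = 0, m10 = 0, m01 = 0;

        // Raw moments -> centroid.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y, x]) { m00++; m10 += x; m01 += y; }
        if (m00 == 0) return (0, 0);
        double cx = m10 / m00, cy = m01 / m00;

        // Second-order central moments (the covariance of the foreground
        // pixel coordinates, up to a factor of m00).
        double mu20 = 0, mu02 = 0, mu11 = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y, x])
                {
                    double dx = x - cx, dy = y - cy;
                    mu20 += dx * dx;
                    mu02 += dy * dy;
                    mu11 += dx * dy;
                }

        // Orientation of the first principal component (major axis).
        double axisAngle = 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);

        // Rotating the image by this amount brings the major axis vertical.
        // (The sign may need flipping depending on whether your image library
        // treats the y axis as pointing down.)
        double rotation = Math.PI / 2 - axisAngle;
        return (axisAngle, rotation);
    }
}
```

After rotating by the returned angle, recompute the tight bounding box around the hand and resize that crop to the fixed resolution (e.g. 100*100), so that every training and testing sample ends up in the same canonical pose and scale.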

I am in doubt about finger alignment: if a training finger is aligned at 45 degrees to the main palm axis and is 10 units long, what happens if the test data shows the finger aligned at some other angle and only 5 units long?

Other features I have encountered: finger count and principal component analysis (PCA). But what does PCA physically mean?


Solution

  • Finally, I selected Hu moment features for gesture recognition, since they are proven to be translation, rotation, and scale invariant. For the SVM part, I chose SVM.NET, a C# wrapper of LIBSVM (which itself is written in Java and C++), since C# is my programming language.
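As a rough illustration of the chosen feature, here is a minimal C# sketch of the seven Hu invariants computed from a binary hand mask; the returned vector can then be passed to the SVM as the feature vector. The `bool[,]` mask representation is an assumption, and the signed-log rescaling mentioned in the comment is an optional, commonly used step rather than part of any particular library.

```csharp
using System;

// Minimal sketch: the seven Hu moment invariants of a binary hand mask.
// In practice the values are often rescaled as sign(h) * log10(|h|)
// before feeding them to the classifier, to compress their range.
static class HuMoments
{
    public static double[] Compute(bool[,] mask)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);

        // Raw moments -> centroid.
        double m00 = 0, m10 = 0, m01 = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y, x]) { m00++; m10 += x; m01 += y; }
        if (m00 == 0) return new double[7];
        double cx = m10 / m00, cy = m01 / m00;

        // Central moments mu_pq = sum (x - cx)^p * (y - cy)^q, up to order 3.
        double mu11 = 0, mu20 = 0, mu02 = 0, mu21 = 0, mu12 = 0, mu30 = 0, mu03 = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (mask[y, x])
                {
                    double dx = x - cx, dy = y - cy;
                    mu11 += dx * dy;
                    mu20 += dx * dx;       mu02 += dy * dy;
                    mu21 += dx * dx * dy;  mu12 += dx * dy * dy;
                    mu30 += dx * dx * dx;  mu03 += dy * dy * dy;
                }

        // Normalized central moments eta_pq = mu_pq / m00^(1 + (p+q)/2).
        double n11 = mu11 / Math.Pow(m00, 2);
        double n20 = mu20 / Math.Pow(m00, 2),   n02 = mu02 / Math.Pow(m00, 2);
        double n21 = mu21 / Math.Pow(m00, 2.5), n12 = mu12 / Math.Pow(m00, 2.5);
        double n30 = mu30 / Math.Pow(m00, 2.5), n03 = mu03 / Math.Pow(m00, 2.5);

        // Hu's seven invariants.
        double[] hu = new double[7];
        hu[0] = n20 + n02;
        hu[1] = Math.Pow(n20 - n02, 2) + 4 * n11 * n11;
        hu[2] = Math.Pow(n30 - 3 * n12, 2) + Math.Pow(3 * n21 - n03, 2);
        hu[3] = Math.Pow(n30 + n12, 2) + Math.Pow(n21 + n03, 2);
        hu[4] = (n30 - 3 * n12) * (n30 + n12) *
                    (Math.Pow(n30 + n12, 2) - 3 * Math.Pow(n21 + n03, 2))
              + (3 * n21 - n03) * (n21 + n03) *
                    (3 * Math.Pow(n30 + n12, 2) - Math.Pow(n21 + n03, 2));
        hu[5] = (n20 - n02) * (Math.Pow(n30 + n12, 2) - Math.Pow(n21 + n03, 2))
              + 4 * n11 * (n30 + n12) * (n21 + n03);
        hu[6] = (3 * n21 - n03) * (n30 + n12) *
                    (Math.Pow(n30 + n12, 2) - 3 * Math.Pow(n21 + n03, 2))
              - (n30 - 3 * n12) * (n21 + n03) *
                    (3 * Math.Pow(n30 + n12, 2) - Math.Pow(n21 + n03, 2));
        return hu;
    }
}
```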