American Sign Language Recognition Using CNN

Authors

  • U. Hari Priya, UG Student, Department of Computer Science and Engineering, College of Engineering, Chengannur, India
  • S. Krishna Prasad, UG Student, Department of Computer Science and Engineering, College of Engineering, Chengannur, India
  • Meba Meria Jacob, UG Student, Department of Computer Science and Engineering, College of Engineering, Chengannur, India
  • R. Radhu Krishna, UG Student, Department of Computer Science and Engineering, College of Engineering, Chengannur, India
  • P. R. Vinod, Assistant Professor, Department of Computer Science and Engineering, College of Engineering, Chengannur, India

Keywords:

Convolutional Neural Network (CNN), Spatio-temporal features, Vision based techniques

Abstract

Speech impairment is a disability that affects a person's ability to communicate through speech and hearing. People affected by it rely on other media of communication, such as sign language. Although sign language is ubiquitous, non-signers find it challenging to communicate with signers. This paper discusses several methods (SVM, KNN, logistic regression, and CNN) that can be used to build a system that eases communication between signers and non-signers. The discussion concludes that the convolutional neural network is the most effective of these techniques. The main focus is to create a vision-based application that recognizes sign language and converts it to text, thereby enabling dynamic communication between signers and non-signers in real time.
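As a rough illustration of the CNN approach the abstract favors, the sketch below runs a single convolution, ReLU, and max-pooling forward pass in plain NumPy on a hypothetical grayscale gesture frame, followed by a softmax over 26 letter classes (A–Z). The layer sizes, filter, and weights are illustrative assumptions, not the authors' trained architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling over size x size windows."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Numerically stable softmax over the class scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((28, 28))           # hypothetical 28x28 grayscale gesture frame
kernel = rng.standard_normal((3, 3))   # one untrained 3x3 filter

features = np.maximum(conv2d(image, kernel), 0.0)  # 26x26 map after ReLU
pooled = max_pool(features)                        # 13x13 map after pooling
flat = pooled.flatten()

W = rng.standard_normal((26, flat.size)) * 0.01    # untrained dense layer, 26 classes
probs = softmax(W @ flat)                          # class probabilities summing to 1
print(pooled.shape, probs.shape)
```

A real recognizer would stack several such convolution/pooling layers and learn the filters and dense weights by backpropagation on labeled ASL images; this sketch only shows the shape of the forward computation.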

Published

28-07-2020

How to Cite

[1]
U. Hari Priya, S. Krishna Prasad, M. M. Jacob, R. Radhu Krishna, and P. R. Vinod, “American Sign Language Recognition Using CNN”, IJRESM, vol. 3, no. 7, pp. 333–336, Jul. 2020.

Section

Articles