A Deep Neural Framework for Continuous Sign Language Recognition: Literature Review

 2021-11-05 07:11

Literature review for the graduation thesis topic

1. Introduction

Sign language is an intricate visual language built from signs formed by hand movements, hand shapes and facial expressions. It is the primary means of communication for people with little or no hearing. Using sign language, letters, words and even full sentences of ordinary speech can be communicated through different hand signs and gestures, allowing hearing-impaired people to express their views. Such systems act as a bridge between hearing and hearing-impaired people. Human speech captured in digital form produces a 1-D signal for processing, whereas human signing produces 2-D signals from image or video data. Gestures can be classified as either static or dynamic: static gestures involve a time-invariant finger orientation, while dynamic gestures rely on time-varying hand orientations and head positions. The proposed four-camera model for Sign Language Recognition (SLR) is a computer-vision approach and does not employ motion sensors or colored gloves for gesture recognition. Efficient sign language recognition systems require knowledge of feature tracking and hand orientation. Researchers in this field have approached gesture classification in two major ways, namely glove-based and vision-based; other methods used frequency gloves to tackle the problem. The glove-based method is simpler and faster to implement on computing devices, but it brings complex hardware problems. Computer vision requires no extra electronic hardware: advanced image-processing algorithms can perform hand-shape matching and hand tracking on the captured video data. Attributes missing from the glove-based approach, such as facial expressions and sign articulation, are handled effectively by computer-vision algorithms.
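As a toy illustration of the edge-based processing that such vision systems depend on, the Sobel gradient of a grayscale frame can be computed in plain NumPy. This is a minimal sketch assuming the frame is already a 2-D array; it is not the paper's implementation.

```python
import numpy as np

def sobel_edges(frame):
    """Gradient magnitude of a 2-D grayscale frame using
    3x3 Sobel kernels (borders handled by edge replication)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
    ky = kx.T                                   # vertical gradient kernel
    padded = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Accumulate the correlation one kernel tap at a time.
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return np.hypot(gx, gy)

# A synthetic 5x5 frame with a vertical step edge between columns 2 and 3:
frame = np.zeros((5, 5))
frame[:, 3:] = 255.0
mag = sobel_edges(frame)   # large responses only along the step edge
```

In a full pipeline the resulting edge map would then be cleaned up with morphology and adaptive thresholding before segmenting the hand and head regions, as the Process section below describes.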
Limited precision still inhibits the usability of computer-vision techniques, which keeps this a rich research field.

2. Process

This research introduces a novel method to bring video-based sign language recognition closer to real-time application. Pre-filtering, segmentation and feature extraction on the video frames create a sign language feature space. Artificial Neural Network (ANN) classifiers as well as a Minimum Distance Classifier (MDC) operating on this feature space must be trained and tested repeatedly. The power of the Sobel edge operator (SEO) is enhanced with morphology and adaptive thresholding, giving a near-perfect segmentation of the hand and head regions that adapts to the captured camera image. The Word Matching Score (WMS) measures the performance of the proposed method, with a mean WMS of around 85.58% for the MDC and 90% for the ANN, and a small variation of 0.3 s in classification times. Neural-network classifiers with fast training algorithms could turn this novel approach to sign language recognition into a widely used application. Most SLR systems rely on a simple constant background, with the signer's shirt matching that background. In cluttered video backgrounds, tracking the hands is comparatively simple, but tracking individual finger movements remains quite a challenging task. Here the researchers believe that a 3-dimensional body-centered space around the signer can be used effectively to extract finger movements. The 3-D locations of the fingers are referenced by setting knuckle points in space; turning these 3-D points into spatial-domain information for hand tracking and hand shaping is a challenge for computer-vision engineers.

3. Methodology

To build the sign language recognition system, we introduce a selfie sign language recognition setup that captures signs with a smartphone front camera. The signer holds the selfie stick in one hand and signs with the other. A sentence in sign language is recorded with the camera, and the resulting video is split into several frames.
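The frame-splitting step above can be sketched as follows. The function names are illustrative, and the sketch assumes the video has already been decoded into a `(frames, height, width)` NumPy array; it also shows how a frame can be paired with its preceding and succeeding neighbours, as the feature-extraction step requires.

```python
import numpy as np

def split_frames(video, step=1):
    """Treat a (T, H, W) array as a recorded sign video and
    return every `step`-th frame as a list."""
    return [video[t] for t in range(0, video.shape[0], step)]

def with_neighbors(frames, t):
    """Return (previous, current, next) frames for index t,
    clamping at the sequence boundaries."""
    prev_f = frames[max(t - 1, 0)]
    next_f = frames[min(t + 1, len(frames) - 1)]
    return prev_f, frames[t], next_f

# Toy "video": 6 frames of 2x2 pixels with distinct values.
video = np.arange(6 * 2 * 2).reshape(6, 2, 2).astype(float)
frames = split_frames(video, step=2)          # keeps frames 0, 2, 4
prev_f, cur_f, next_f = with_neighbors(frames, 1)
```

In practice the decoding from a recorded selfie video into such an array would be done with a video library, and each sampled frame would then go through the segmentation and feature-extraction stages.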
Each frame taken out of the collection of frames is processed and its features are extracted, so that the features also apply to the nearby preceding and succeeding frames. The input sign video is converted into corresponding text or voice output, so that a hearing person can understand a hearing-impaired person without the need for an interpreter. Further research can address selfie-based sign language recognition under real-time constraints such as non-uniform backgrounds, varied lighting and signer independence, to make the system more robust. A basic sign language recognition system is supported by five parameters: hand and head recognition; hand and head orientation; hand movement; hand shape; and the location of the hand and head (which depends on the background). Among these five parameters, the two most important are hand and head orientation and hand movement in a particular direction. These parameters help the system recognize sign languages with better accuracy. The hand shapes and head are segmented to obtain feature vectors, and these feature vectors are classified and fed to neural networks for training. Two major problems surfaced during implementation:
Phase one: the signs are preferably single-handed, and the video background varies due to the movement of the selfie stick in the signer's hand.
Phase two: the background of the signer, concerning the contrast of the lighting where the signer is present.
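The classification and evaluation stages described above can be sketched with a Minimum Distance Classifier and the Word Matching Score. This is a toy sketch with invented two-dimensional feature vectors and sign labels; the real system classifies features extracted from the segmented hand and head regions.

```python
import numpy as np

def train_mdc(features, labels):
    """Minimum Distance Classifier: store one mean feature
    vector (centroid) per sign class."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def predict_mdc(centroids, feature):
    """Assign the sign whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

def word_matching_score(predicted, reference):
    """Fraction of sign words recognised correctly, as a percentage."""
    hits = sum(p == r for p, r in zip(predicted, reference))
    return 100.0 * hits / len(reference)

# Invented feature vectors for two hypothetical signs:
X = [np.array([0.0, 0.0]), np.array([0.2, 0.1]),
     np.array([1.0, 1.0]), np.array([0.9, 1.1])]
y = ["hello", "hello", "thanks", "thanks"]

centroids = train_mdc(X, y)
preds = [predict_mdc(centroids, f) for f in X]
wms = word_matching_score(preds, y)
```

An ANN classifier would replace `predict_mdc` with a trained network's forward pass, while the same Word Matching Score could be used to compare the two, as the reviewed work does.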
