Trigger Detection System for American Sign Language using Deep Convolutional Neural Networks

Document Type

Conference Article

Publication Title

ACM International Conference Proceeding Series

Abstract

Automatic trigger-word detection in speech is a well-known technology today. However, for people who cannot speak, or who are in a silent zone, such voice-activated trigger detection systems are of no use. We have developed a trigger detection system based on the 24 static hand gestures of the American Sign Language (ASL). Our model is built on a deep convolutional neural network (Deep CNN), as CNNs are capable of capturing interesting visual features at each hidden layer. We aim to construct a customisable switch that turns ‘on’ if it finds a given trigger gesture in any video it receives and stays ‘off’ otherwise. The model was trained on images of the various hand gestures in a multi-class classification setting, which allows each user to choose a custom trigger gesture. To test the efficiency of such a model in the trigger detection process, we generated 7,000 videos (each 10 s long) consisting of random images from the test set, which were never shown to the model during training. We show experimentally that this system outperforms other state-of-the-art techniques used in static hand-gesture image recognition tasks. The approach also lends itself to real-time application and can be used to build small-scale devices that trigger a particular response by capturing the gestures people make.
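
For illustration, the following is a minimal sketch of the frame-by-frame trigger loop the abstract describes, assuming a Keras-based 24-way static-gesture classifier. Every concrete detail here (the model file asl_gesture_cnn.h5, the 64x64 grayscale input, TRIGGER_CLASS, the confidence threshold) is a hypothetical assumption, not a detail published in the paper.

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # Hypothetical parameters -- the paper does not publish these details.
    TRIGGER_CLASS = 5           # index of the user-chosen trigger gesture
    CONFIDENCE_THRESHOLD = 0.9  # assumed decision threshold

    # Assumed: a saved 24-way static-gesture CNN classifier.
    model = load_model("asl_gesture_cnn.h5")

    def detect_trigger(video_path):
        """Return True ('on') if the trigger gesture appears in any frame."""
        cap = cv2.VideoCapture(video_path)
        triggered = False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Preprocess each frame to the classifier's assumed input
            # format (64x64 grayscale, scaled to [0, 1]).
            img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            img = cv2.resize(img, (64, 64)).astype("float32") / 255.0
            probs = model.predict(img[None, :, :, None], verbose=0)[0]
            if (np.argmax(probs) == TRIGGER_CLASS
                    and probs[TRIGGER_CLASS] >= CONFIDENCE_THRESHOLD):
                triggered = True
                break
        cap.release()
        return triggered

Scanning frames until the first confident match mirrors the switch behaviour described above: the output latches ‘on’ at the first detected trigger gesture and stays ‘off’ if none of the video’s frames contain it.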

DOI

10.1145/3291280.3291783

Publication Date

12-10-2018

