Many products and services in modern society are becoming not only digital but also conversational. Yet current natural language processing research focuses primarily on spoken languages such as English, Mandarin, and German. With the advent of smart speakers and voice-activated digital assistants, there is a renewed focus on speech recognition. Unfortunately, the auditory nature of such language technology inherently excludes deaf and mute users. There is therefore an urgent need for high-quality sign language recognition technology that relies only on standard RGB input, so that the deaf and mute members of our society are not left behind.
This project applies scientific methods to neural sign language recognition and translation. We extend an open-source library, transformers, to include state-of-the-art sign language recognition methods, and we also aim to improve the state of the art in continuous sign language recognition.