American and Russian sign language dactyl recognition
Sign languages are the primary means of communication for members of the deaf community. In this paper, we compare several real-time sign language dactyl (fingerspelling) recognition systems based on deep convolutional neural networks. Our system recognizes natural-language words gestured letter by letter using the sign for each letter. We evaluate our approach on American Sign Language (ASL) and Russian Sign Language (RSL). For ASL, we trained on the dataset prepared by the Institute of Information and Mathematical Sciences at Massey University; for RSL, we collected our own dataset, which we aim to enlarge together with the RSL community in Russia. The results show 100% accuracy on the Massey ASL dataset, while RSL recognition quality remains below a usable level owing to the much more complex nature of the real-world RSL dataset.