Enhancing the Applicability of Sign Language Translation

Abstract

This paper addresses a significant but overlooked problem in American Sign Language (ASL) translation systems. Current designs collect excessive sensing data for each ASL word and treat every sentence as new, requiring sensing data to be collected from scratch. This process is time-consuming, taking hours to half a day per user; it places an unnecessary burden on end-users and hinders the widespread adoption of ASL systems. In this study, we identify the root cause of this issue and propose GASLA, a wearable sensor-based solution that automatically generates sentence-level sensing data from word-level data. We further propose an acceleration approach to speed up sentence-level data generation. Moreover, because there is a gap between generated sentence data and directly collected sentence data, we propose a template strategy that makes the generated sentences more similar to collected ones. The generated data can be used to train ASL systems effectively while significantly reducing overhead. Thanks to its clean interface, GASLA can be easily integrated into existing ASL systems. GASLA offers several benefits over current approaches: it reduces both initial setup time and the overhead of adding new sentences later; it requires only two samples per sentence, compared with around ten in current systems; and it improves overall performance significantly.
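The core idea of generating sentence-level sensing data from word-level data can be illustrated with a minimal sketch. This is an assumption-laden toy example, not GASLA's actual algorithm: it assumes each word's sensing data arrives as a 2-D array of samples (time × sensor channels), and it stitches words together by bridging adjacent word boundaries with linear interpolation so the synthesized sentence has no abrupt jumps. The names `stitch_words` and `transition_len` are illustrative, not part of any published interface.

```python
import numpy as np

def stitch_words(word_segments, transition_len=5):
    """Synthesize a sentence-level signal from word-level segments.

    word_segments  -- list of (n_samples, n_channels) arrays, one per word
    transition_len -- number of interpolated samples inserted between words

    Hypothetical sketch: real systems would also need to model
    inter-word motion dynamics, not just interpolate linearly.
    """
    out = [word_segments[0]]
    for seg in word_segments[1:]:
        prev_end = out[-1][-1]          # last sample of previous word
        next_start = seg[0]             # first sample of next word
        # Linear bridge between the two boundary samples (endpoints excluded,
        # since they already exist in the adjacent segments).
        alphas = np.linspace(0.0, 1.0, transition_len + 2)[1:-1]
        bridge = np.outer(1 - alphas, prev_end) + np.outer(alphas, next_start)
        out.append(bridge)
        out.append(seg)
    return np.vstack(out)
```

For example, stitching a 4-sample word and a 6-sample word with `transition_len=5` yields a 15-sample sentence whose values ramp smoothly across the boundary.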

Publication
IEEE Transactions on Mobile Computing
Alex Jiakai XU
Computer Science Student

My research interests include computer systems, programming languages, software design, and cyberspace security.