
Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning


Source: arXiv
English Abstract

An estimated 1.57 billion people were living with some degree of hearing loss in 2019. This large population faces barriers on many personal and professional levels and needs to be fully included in society. This paper presents a proof of concept of an automatic sign language recognition system based on data obtained from a wearable device equipped with three flex sensors. The system is designed to interpret a selected set of American Sign Language (ASL) dynamic words by collecting data as sequences of the performed signs and applying machine learning methods. The trained models achieved strong performance: Random Forest and Support Vector Machine (SVM) classifiers each reached 99% accuracy, and two K-Nearest Neighbor (KNN) models reached 98%. These results indicate several promising paths toward the development of a full-scale system.
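The pipeline described in the abstract (fixed-length sensor sequences fed to classical classifiers) can be sketched as follows. This is a minimal illustration assuming synthetic data: the number of signs, the sequence length, and the simulated sensor readings are placeholders, not the authors' dataset or parameters; only the three-sensor setup and the Random Forest classifier come from the abstract.

```python
# Hypothetical sketch of flex-sensor sign classification.
# Sensor data here are simulated; real data would come from the wearable.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

N_SIGNS = 4     # number of dynamic ASL words (placeholder)
N_SENSORS = 3   # three flex sensors, as in the wearable device
SEQ_LEN = 50    # samples per recorded sign sequence (assumed)

# Simulate labelled sequences: each sign gets a distinct mean bend profile.
sequences, labels = [], []
for sign in range(N_SIGNS):
    base = rng.uniform(0.0, 1.0, size=N_SENSORS)
    for _ in range(40):
        noise = rng.normal(0.0, 0.05, size=(SEQ_LEN, N_SENSORS))
        sequences.append(base + noise)
        labels.append(sign)

# Flatten each (SEQ_LEN x N_SENSORS) sequence into one feature vector.
X = np.array(sequences).reshape(len(sequences), -1)
y = np.array(labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.2f}")
```

Swapping `RandomForestClassifier` for `sklearn.svm.SVC` or `sklearn.neighbors.KNeighborsClassifier` reproduces the other model families mentioned in the abstract without changing the rest of the pipeline.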

Basma Kalandar, Ziemowit Dworakowski

Subject areas: communication and computing technology; computer technology; electronics applications

Basma Kalandar, Ziemowit Dworakowski. Sign Language Conversation Interpretation Using Wearable Sensors and Machine Learning [EB/OL]. (2023-12-19) [2025-07-02]. https://arxiv.org/abs/2312.11903.
