A Novel Dataset for Video-Based Neurodivergent Classification Leveraging Extra-Stimulatory Behavior
Facial expressions and actions in response to external stimuli vary in intensity across individuals, particularly among those who are neurodivergent. Such behaviors affect overall health, communication, and sensory processing. Deep learning can be responsibly leveraged to make this classification task more tractable and to help medical professionals understand such behaviors accurately. In this work, we introduce the Video ASD dataset, a dataset containing video-frame convolutional and attention-map feature data, to foster further progress on the task of ASD classification. Unlike many recent ASD classification studies based on MRI data, which require expensive specialized equipment, our method needs only a powerful but relatively affordable GPU, a standard computer setup, and a video camera for inference. Results show that our model generalizes effectively and captures key differences in the children's distinct movements. Additionally, we evaluate foundation models on this data to show how movement noise affects performance and why more data and richer labels are needed.
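The abstract describes a dataset of per-frame convolutional and attention-map features extracted from video. The snippet below is a minimal sketch of what such per-frame feature extraction could look like; the ResNet-50 backbone, frame count, and resolution are illustrative assumptions, not the authors' exact pipeline.

# Minimal sketch (assumed setup, not the paper's exact pipeline):
# extract per-frame convolutional features from a short video clip
# with a pretrained ResNet-50 backbone from torchvision.
import torch
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled feature per frame
backbone.eval()

preprocess = weights.transforms()   # resize, center-crop, normalize

# Dummy clip: 16 RGB frames at 224x224 standing in for decoded video frames.
frames = torch.rand(16, 3, 224, 224)

with torch.no_grad():
    feats = backbone(preprocess(frames))   # shape: (16, 2048)

print(feats.shape)  # per-frame features that a downstream classifier could consume

In practice, such frame-level features (and, analogously, attention maps from a vision transformer) would be stacked over time and fed to a temporal model for the classification step described in the abstract.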
Xuan Bac Nguyen, Han-Seok Seo, Khoa Luu, Manuel Serna-Aguilera
Subjects: Neurology, Psychiatry, Computing Technology, Computer Technology
Xuan Bac Nguyen, Han-Seok Seo, Khoa Luu, Manuel Serna-Aguilera. A Novel Dataset for Video-Based Neurodivergent Classification Leveraging Extra-Stimulatory Behavior [EB/OL]. (2025-08-22) [2025-09-06]. https://arxiv.org/abs/2409.04598