
VaCDA: Variational Contrastive Alignment-based Scalable Human Activity Recognition

Source: arXiv

Abstract

Technological advancements have led to the rise of wearable devices with sensors that continuously monitor user activities, generating vast amounts of unlabeled data. This data is challenging to interpret, and manual annotation is labor-intensive and error-prone. Additionally, the data distribution is often heterogeneous due to variations in device placement, device type, and user behavior. As a result, traditional transfer learning methods perform suboptimally, making it difficult to recognize daily activities. To address these challenges, we use a variational autoencoder (VAE) to learn a shared, low-dimensional latent space from the available sensor data. This space generalizes across diverse sensors, mitigating heterogeneity and aiding robust adaptation to the target domain. We integrate contrastive learning to enhance feature representation by aligning instances of the same class across domains while separating different classes. We propose Variational Contrastive Domain Adaptation (VaCDA), a multi-source domain adaptation framework that combines VAEs and contrastive learning to improve feature representation and reduce heterogeneity between the source and target domains. We evaluate VaCDA on multiple publicly available datasets across three heterogeneity scenarios: cross-person, cross-position, and cross-device. VaCDA outperforms the baselines in the cross-position and cross-device scenarios.
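The abstract describes two standard ingredients: a VAE that maps sensor windows from any domain into a shared latent space, and a contrastive loss that pulls same-class latents together across domains while pushing different classes apart. The paper's actual architecture, losses, and hyperparameters are not given here, so the following is only a minimal PyTorch sketch of that general recipe; the fully connected encoder/decoder, the SupCon-style loss, and all dimensions, weights, and the temperature are illustrative assumptions, not VaCDA's reported configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorVAE(nn.Module):
    """Hypothetical VAE mapping flattened sensor windows to a shared latent space."""
    def __init__(self, input_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar, z

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    recon_term = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl

def supervised_contrastive_loss(z, labels, temperature: float = 0.1):
    # Same-class pairs (possibly from different domains) are positives;
    # every other pair in the batch acts as a negative.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)  # a sample is not its own positive
    self_mask = torch.eye(len(z), device=z.device)
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp_logits = torch.exp(logits) * (1 - self_mask)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-8)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_mask * log_prob).sum(dim=1).div(pos_count).mean()

# Illustrative usage on one mixed batch of labeled source-domain windows
# (batch size, feature count, and class count are made up):
x = torch.randn(16, 60)
y = torch.randint(0, 6, (16,))
model = SensorVAE(input_dim=60)
recon, mu, logvar, z = model(x)
loss = vae_loss(x, recon, mu, logvar) + supervised_contrastive_loss(z, y)
loss.backward()

The intuition behind combining the two terms is that the VAE objective regularizes the latent space so it generalizes across heterogeneous sensors, while the contrastive term shapes that same space by class, so same-activity samples from different persons, positions, or devices land near each other.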

Soham Khisa, Avijoy Chakma

Subject: Computing Technology, Computer Technology

Soham Khisa, Avijoy Chakma. VaCDA: Variational Contrastive Alignment-based Scalable Human Activity Recognition [EB/OL]. (2025-05-07) [2025-06-14]. https://arxiv.org/abs/2505.04907
