National Preprint Platform (国家预印本平台)

CheXLearner: Text-Guided Fine-Grained Representation Learning for Progression Detection

Source: arXiv

English abstract

Temporal medical image analysis is essential for clinical decision-making, yet existing methods either align images and text at a coarse level - causing potential semantic mismatches - or depend solely on visual information, lacking medical semantic integration. We present CheXLearner, the first end-to-end framework that unifies anatomical region detection, Riemannian manifold-based structure alignment, and fine-grained regional semantic guidance. Our proposed Med-Manifold Alignment Module (Med-MAM) leverages hyperbolic geometry to robustly align anatomical structures and capture pathologically meaningful discrepancies across temporal chest X-rays. By introducing regional progression descriptions as supervision, CheXLearner achieves enhanced cross-modal representation learning and supports dynamic low-level feature optimization. Experiments show that CheXLearner achieves 81.12% (+17.2%) average accuracy and 80.32% (+11.05%) F1-score on anatomical region progression detection - substantially outperforming state-of-the-art baselines, especially in structurally complex regions. Additionally, our model attains a 91.52% average AUC score in downstream disease classification, validating its superior feature representation.
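The paper does not include code, but the Med-MAM module described above rests on hyperbolic geometry. As a minimal, purely illustrative sketch (not the authors' implementation), the geodesic distance in the Poincaré ball — a standard building block for hyperbolic alignment — can be computed as follows, here applied to hypothetical 2-D embeddings of the same anatomical region at two time points:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Both inputs must have Euclidean norm < 1.
    """
    sq_norm = lambda x: sum(c * c for c in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

# Hypothetical embeddings of one anatomical region across two chest X-rays:
prior = [0.10, 0.20]
current = [0.15, 0.40]

# A larger hyperbolic distance would indicate a bigger structural discrepancy
# between the two time points.
d = poincare_distance(prior, current)
```

In a full alignment module, such distances (or their differentiable tensor equivalents) would feed a loss that pulls corresponding regions together while keeping pathologically meaningful discrepancies separable.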

Yuanzhuo Wang, Junwen Duan, Xinyu Li, Jianxin Wang

Subject areas: computational techniques in medical research methodology; computer technology

Yuanzhuo Wang, Junwen Duan, Xinyu Li, Jianxin Wang. CheXLearner: Text-Guided Fine-Grained Representation Learning for Progression Detection [EB/OL]. (2025-05-11) [2025-06-30]. https://arxiv.org/abs/2505.06903.
