
Learning Segmentation from Radiology Reports


Source: arXiv
Abstract

Tumor segmentation in CT scans is key for diagnosis, surgery, and prognosis, yet segmentation masks are scarce because their creation requires time and expertise. Public abdominal CT datasets contain from a few dozen to a couple of thousand tumor masks, but hospitals hold hundreds of thousands of tumor CTs with radiology reports. Leveraging these reports to improve segmentation is therefore key for scaling. In this paper, we propose a report-supervision loss (R-Super) that converts radiology reports into voxel-wise supervision for tumor segmentation AI. We created a dataset with 6,718 CT-report pairs (from the UCSF Hospital) and merged it with public CT-mask datasets (from AbdomenAtlas 2.0). We used R-Super to train with these masks and reports, and strongly improved tumor segmentation in internal and external validation: the F1 score increased by up to 16% with respect to training with masks only. By leveraging readily available radiology reports to supplement scarce segmentation masks, R-Super strongly improves AI performance both when very few training masks are available (e.g., 50) and when many are available (e.g., 1.7K). Project: https://github.com/MrGiovanni/R-Super
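The abstract does not specify how report text is converted into voxel-wise supervision; the sketch below (in PyTorch) only illustrates the general idea of mixing a standard mask loss with a report-derived, volume-level term for cases that have a report but no mask. All function names and the assumption that each report has already been parsed into an expected tumor volume are hypothetical; they are not the authors' R-Super implementation, which is available in the project repository.

# Minimal, hypothetical sketch: supplement a mask loss with report-derived supervision.
# Not the authors' R-Super loss; names such as report_volume_loss and
# expected_volume_mm3 are illustrative assumptions.
import torch
import torch.nn.functional as F


def dice_loss(pred_prob: torch.Tensor, mask: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Standard soft Dice loss for CT cases that do have voxel-wise tumor masks."""
    inter = (pred_prob * mask).sum()
    union = pred_prob.sum() + mask.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)


def report_volume_loss(pred_prob: torch.Tensor,
                       expected_volume_mm3: float,
                       voxel_volume_mm3: float) -> torch.Tensor:
    """Hypothetical report-supervision term for cases with a report but no mask.

    Assumes the report was parsed into an expected tumor volume; the loss pushes
    the soft predicted tumor volume toward it. An expected volume of 0 means
    "no tumor reported", which penalizes any predicted tumor voxels.
    """
    predicted_volume = pred_prob.sum() * voxel_volume_mm3  # differentiable volume estimate
    target = torch.tensor(expected_volume_mm3, device=pred_prob.device)
    return F.smooth_l1_loss(predicted_volume, target)


def combined_loss(pred_prob, mask=None, expected_volume_mm3=None,
                  voxel_volume_mm3=1.0, report_weight=0.1):
    """Use the mask loss when a mask exists, else fall back to report supervision."""
    if mask is not None:
        return dice_loss(pred_prob, mask)
    return report_weight * report_volume_loss(pred_prob, expected_volume_mm3, voxel_volume_mm3)


if __name__ == "__main__":
    # Toy example: a 1x1x8x8x8 probability map; the report implies roughly 20 mm^3 of tumor.
    pred = torch.sigmoid(torch.randn(1, 1, 8, 8, 8, requires_grad=True))
    loss = combined_loss(pred, expected_volume_mm3=20.0, voxel_volume_mm3=1.0)
    loss.backward()
    print(float(loss))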

Pedro R. A. S. Bassi, Wenxuan Li, Jieneng Chen, Zheren Zhu, Tianyu Lin, Sergio Decherchi, Andrea Cavalli, Kang Wang, Yang Yang, Alan L. Yuille, Zongwei Zhou

Subjects: Medical Research Methods; Oncology

Pedro R. A. S. Bassi, Wenxuan Li, Jieneng Chen, Zheren Zhu, Tianyu Lin, Sergio Decherchi, Andrea Cavalli, Kang Wang, Yang Yang, Alan L. Yuille, Zongwei Zhou. Learning Segmentation from Radiology Reports [EB/OL]. (2025-07-08) [2025-07-25]. https://arxiv.org/abs/2507.05582.
