Segmentation only uses sparse annotations: Unified weakly and semi-supervised learning in medical images
Aug 1, 2022

Feng Gao (1st author, corresponding author), Minhao Hu (co-1st author), Min-Er Zhong (co-1st author), Shixiang Feng, Xuwei Tian, Xiaochun Meng, Ze-Ping Huang, Min-Yi Lv, Tao Song, Xiaofan Zhang (co-corresponding author), Xiangguang Zou (co-corresponding author), Xiaojian Wu (co-corresponding author)
Abstract
Because segmentation labeling is time-consuming and annotating medical images requires professional expertise, obtaining a large-scale, high-quality annotated segmentation dataset is laborious. We propose a novel weakly- and semi-supervised framework named SOUSA (Segmentation Only Uses Sparse Annotations), which learns from a small set of sparsely annotated data and a large amount of unlabeled data. The framework contains a teacher model and a student model. The student model is weakly supervised by scribbles and by a geodesic distance map derived from those scribbles. Meanwhile, a large amount of unlabeled data with various perturbations is fed to both the student and teacher models, and the consistency of their output predictions is imposed by a Mean Square Error (MSE) loss and a carefully designed Multi-angle Projection Reconstruction (MPR) loss. Extensive experiments demonstrate the robustness and generalization ability of the proposed method: it outperforms weakly- and semi-supervised state-of-the-art methods on multiple datasets, and when the dataset size is limited it achieves performance competitive with some fully supervised methods trained on dense annotations.
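The teacher-student consistency described in the abstract follows the mean-teacher pattern: the teacher's weights track an exponential moving average (EMA) of the student's, and an MSE loss penalizes disagreement between their predictions on perturbed unlabeled data. The sketch below is a minimal NumPy illustration of that pattern, not the paper's code; the function names, the dictionary representation of weights, and the EMA decay `alpha` are assumptions (the MPR loss is paper-specific and omitted here).

```python
import numpy as np

def softmax(logits, axis=1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mse_consistency(student_logits, teacher_logits):
    """MSE between the student's and teacher's class-probability maps,
    used on unlabeled data to enforce consistent predictions."""
    diff = softmax(student_logits) - softmax(teacher_logits)
    return float(np.mean(diff ** 2))

def ema_update(teacher_weights, student_weights, alpha=0.99):
    """After each student step, the teacher's weights are moved toward the
    student's as an exponential moving average (no gradients on the teacher)."""
    return {name: alpha * teacher_weights[name] + (1 - alpha) * student_weights[name]
            for name in teacher_weights}
```

The MSE term is zero when both models agree exactly, so on unlabeled images it only pushes the student toward the (slowly moving, perturbation-averaged) teacher rather than toward any ground-truth label.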
Type: Publication
Publication: Medical Image Analysis



