Predicting treatment response from longitudinal images using multi-task deep learning
Mar 25, 2021
Cheng Jin (1st Author)
Heng Yu (Co-1st Author)
Jia Ke (Co-1st Author)
Peirong Ding (Co-1st Author)
Yongju Yi
Xiaofeng Jiang
Xin Duan
Jinghua Tang
Daniel T. Chang
Xiaojian Wu (Co-corresponding Author)
Feng Gao (Co-corresponding Author)
Ruijiang Li (Corresponding Author)
Abstract
Radiographic imaging is routinely used to evaluate treatment response in solid tumors. Current imaging response metrics do not reliably predict the underlying biological response. Here, we present a multi-task deep learning approach that allows simultaneous tumor segmentation and response prediction. We design two Siamese subnetworks that are joined at multiple layers, which enables integration of multi-scale feature representations and in-depth comparison of pre-treatment and post-treatment images. The network is trained using 2568 magnetic resonance imaging scans of 321 rectal cancer patients for predicting pathologic complete response after neoadjuvant chemoradiotherapy. In multi-institution validation, the imaging-based model achieves an AUC of 0.95 (95% confidence interval: 0.91-0.98) and 0.92 (0.87-0.96) in two independent cohorts of 160 and 141 patients, respectively. When combined with blood-based tumor markers, the integrated model further improves prediction accuracy with an AUC of 0.97 (0.93-0.99). Our approach to capturing dynamic information in longitudinal images may be broadly used for screening, treatment response evaluation, disease monitoring, and surveillance.
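The core design in the abstract — a shared-weight (Siamese) encoder applied to both the pre- and post-treatment scans, with the two branches joined at multiple scales before a response-prediction head — can be sketched in miniature. The toy encoder, join operation, and head below are illustrative stand-ins (simple multi-scale pooling, a feature difference, and a sigmoid), not the authors' convolutional network or trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(img, scales=(1, 2, 4)):
    """Toy shared-weight encoder: multi-scale average pooling.

    Stands in for the convolutional subnetwork; both time points pass
    through this same function, i.e. shared (Siamese) weights.
    """
    feats = []
    for s in scales:
        h, w = img.shape[0] // s, img.shape[1] // s
        pooled = img[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        feats.append(pooled)
    return feats

def join_scales(feats_pre, feats_post):
    """Join the two branches at every scale (here: feature difference)."""
    return [post - pre for pre, post in zip(feats_pre, feats_post)]

def response_head(joined):
    """Summarize the multi-scale change into a response score in (0, 1)."""
    z = np.mean([f.mean() for f in joined])
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid

# Synthetic pre- and post-treatment "images"; a responding tumor is
# mimicked here by a drop in intensity after treatment.
pre = rng.random((32, 32))
post = pre * 0.5

score = response_head(join_scales(shared_encoder(pre), shared_encoder(post)))
```

In the actual model the two branches would also feed a segmentation head (the multi-task part), and the join would be learned rather than a fixed difference; the sketch only shows how weight sharing lets the network compare the two time points in a common feature space.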
Type: Publication
Published in: Nature Communications

Authors
Postdoc
I focus on medical image analysis and artificial intelligence for cancer research, including molecular subtyping and predictive modeling.

Professor
My research leverages AI and big data to improve diagnostics, prognostics, and ultimately, outcomes in cancer and other biomedical fields.