TY - GEN
T1 - Towards a device-independent deep learning approach for the automated segmentation of sonographic fetal brain structures
T2 - Medical Imaging 2022: Computer-Aided Diagnosis
AU - Lad, Abhi
AU - Narayan, Adithya
AU - Shankar, Hari
AU - Jain, Shefali
AU - Vyas, Pooja Punjani
AU - Singh, Divya
AU - Hegde, Nivedita
AU - Atada, Jagruthi
AU - Thang, Jens
AU - Nee, Saw Shier
AU - Govindarajan, Arunkumar
AU - Ps, Roopa
AU - Pai, Muralidhar V.
AU - Vasudeva, Akhila
AU - Radhakrishnan, Prathima
AU - Devalla, Sripad Krishna
N1 - Publisher Copyright:
© 2022 SPIE.
PY - 2022
Y1 - 2022
N2 - Quality assessment of prenatal ultrasonography is essential for the screening of fetal central nervous system (CNS) anomalies. The interpretation of fetal brain structures is highly subjective, expertise-driven, and requires years of training experience, limiting the availability of quality prenatal care for all pregnant mothers. With recent advancements in Artificial Intelligence (AI), computer-assisted diagnosis has shown promising results, providing expert-level diagnoses in a matter of seconds, and therefore has the potential to improve access to quality, standardized care for all. Specifically, with the advent of deep learning (DL), assistance in precise anatomy identification through semantic segmentation, essential for the reliable assessment of growth and neurodevelopment and the detection of structural abnormalities, has been proposed. However, existing works only identify certain structures (e.g., cavum septum pellucidum [CSP], lateral ventricles [LV], cerebellum) from either of the axial views (transventricular [TV], transcerebellar [TC]), limiting the scope for a thorough anatomical assessment as per the practice guidelines necessary for the screening of CNS anomalies. Further, existing works do not analyze the generalizability of these DL algorithms across images from multiple ultrasound devices and centers, thus limiting their real-world clinical impact. In this study, we propose a DL-based segmentation framework for the automated segmentation of 10 key fetal brain structures from 2 axial planes in 2D fetal brain ultrasonography (USG) images. We developed a custom U-Net variant that uses an InceptionV4 block as the feature extractor and leverages custom domain-specific data augmentation. Quantitatively, the mean Dice coefficients (across the 10 structures) for test sets 1/2/3/4 were 0.827, 0.802, 0.731, and 0.783, respectively. Irrespective of the USG device/center, the DL segmentations were qualitatively comparable to the corresponding manual segmentations. The proposed DL system offered promising and generalizable performance (multi-center, multi-device) and, using UMAP analysis, also presented evidence of device-induced variation in image quality (a challenge to generalizability). Its clinical translation can assist a wide range of users across settings in delivering standardized, quality prenatal examinations.
AB - Quality assessment of prenatal ultrasonography is essential for the screening of fetal central nervous system (CNS) anomalies. The interpretation of fetal brain structures is highly subjective, expertise-driven, and requires years of training experience, limiting the availability of quality prenatal care for all pregnant mothers. With recent advancements in Artificial Intelligence (AI), computer-assisted diagnosis has shown promising results, providing expert-level diagnoses in a matter of seconds, and therefore has the potential to improve access to quality, standardized care for all. Specifically, with the advent of deep learning (DL), assistance in precise anatomy identification through semantic segmentation, essential for the reliable assessment of growth and neurodevelopment and the detection of structural abnormalities, has been proposed. However, existing works only identify certain structures (e.g., cavum septum pellucidum [CSP], lateral ventricles [LV], cerebellum) from either of the axial views (transventricular [TV], transcerebellar [TC]), limiting the scope for a thorough anatomical assessment as per the practice guidelines necessary for the screening of CNS anomalies. Further, existing works do not analyze the generalizability of these DL algorithms across images from multiple ultrasound devices and centers, thus limiting their real-world clinical impact. In this study, we propose a DL-based segmentation framework for the automated segmentation of 10 key fetal brain structures from 2 axial planes in 2D fetal brain ultrasonography (USG) images. We developed a custom U-Net variant that uses an InceptionV4 block as the feature extractor and leverages custom domain-specific data augmentation. Quantitatively, the mean Dice coefficients (across the 10 structures) for test sets 1/2/3/4 were 0.827, 0.802, 0.731, and 0.783, respectively. Irrespective of the USG device/center, the DL segmentations were qualitatively comparable to the corresponding manual segmentations. The proposed DL system offered promising and generalizable performance (multi-center, multi-device) and, using UMAP analysis, also presented evidence of device-induced variation in image quality (a challenge to generalizability). Its clinical translation can assist a wide range of users across settings in delivering standardized, quality prenatal examinations.
UR - http://www.scopus.com/inward/record.url?scp=85132797913&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132797913&partnerID=8YFLogxK
U2 - 10.1117/12.2610741
DO - 10.1117/12.2610741
M3 - Conference contribution
AN - SCOPUS:85132797913
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2022: Computer-Aided Diagnosis
A2 - Drukker, Karen
A2 - Iftekharuddin, Khan M.
PB - SPIE
Y2 - 21 March 2022 through 27 March 2022
ER -