Development of Novel Deep Learning Models with Improved Generalizability for Medical Image Analysis

By: Contributor(s): Material type: Book. Publication details: Bangalore: Indian Institute of Science, 2023. Description: xxviii, 164p.: col. ill.; e-Thesis 17.39. Dissertation: PhD; 2023; Computational & Data Sciences. Subject(s): DDC classification:
  • 617.54 NAV
Online resources: Dissertation note: PhD; 2023; Computational & Data Sciences. Summary: Medical imaging is the visualization of disease or tissue in a non-invasive manner. Several imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), optical coherence tomography (OCT), and ultrasound, are used for the qualitative and quantitative diagnosis of various diseases. Fully data-driven deep learning methods have shown great promise in the automated analysis of medical images. However, access to large amounts of heterogeneous data (data availability), prediction across imaging protocols/scanners (domain adaptation), and computational resources (deployability) have been the major challenges in the clinical adoption of deep learning methods for medical image analysis. This thesis work aims to design and develop generalizable deep learning models that address these challenges. Specifically, it focuses on two- and three-dimensional medical image analysis tasks spanning chest computed tomography, spectral-domain optical coherence tomography, and quantitative susceptibility mapping in magnetic resonance imaging. (1) Chest Computed Tomography (CT) imaging has been indispensable for staging and managing coronavirus disease 2019 (COVID-19), and the evaluation of abnormalities associated with COVID-19 has been performed largely through visual scoring. Automated methods for quantifying COVID-19 abnormalities in these CT images are therefore invaluable to clinicians. The hallmark of COVID-19 infection in chest CT images is the presence of ground-glass opacities in the lung region, which are tedious to segment manually. This thesis work developed an anamorphic depth embedding-based lightweight convolutional neural network (CNN), called Anam-Net, to segment anomalies in COVID-19 chest CT images.
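Segmentation quality in work of this kind is conventionally evaluated with the Dice similarity coefficient, the metric the summary reports for Anam-Net. A minimal sketch using hypothetical binary masks and NumPy only (not the thesis's evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks.
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: two overlapping toy lesion masks.
a = np.zeros((4, 4), dtype=np.uint8)
a[1:3, 1:3] = 1            # 4 foreground pixels
b = np.zeros((4, 4), dtype=np.uint8)
b[1:3, 1:4] = 1            # 6 foreground pixels, 4 shared with `a`
print(round(dice_score(a, b), 3))  # 2*4 / (4+6) = 0.8
```

A score of 1.0 indicates perfect overlap with the reference mask, 0.0 indicates no overlap.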
Results on chest CT images from different imaging scanners showed that the proposed Anam-Net yielded improved Dice similarity scores for both abnormal and normal regions of the lung. Anam-Net was also deployed on embedded systems, such as the Raspberry Pi 4 and NVIDIA Jetson Xavier, and in a mobile Android application (CovSeg), to demonstrate its suitability for point-of-care platforms. (2) Optical Coherence Tomography (OCT) imaging has become a point-of-care modality for diagnosing retinal diseases such as diabetic macular edema, drusen, and choroidal neovascularization. Varying speckle noise in spectral-domain OCT images across imaging protocols and scanners degrades the performance of existing deep learning models for predicting retinal diseases. Moreover, existing deep learning models for retinal disease prediction are heavy and require a sophisticated computing environment to train and deploy. Generalizable lightweight deep learning models that work across varying noise levels in the data (different acquisition protocols) and provide automated diagnosis on an edge platform are highly appealing in the clinic. This thesis work developed noise-regularized lightweight deep learning models, trained via self-distillation, to improve the deployability and generalizability of automated retinal diagnosis using OCT images. The developed approach was validated on simulated and real noisy OCT B-scans captured under different acquisition settings. The developed deep learning model significantly outperformed existing methods, with improvements of as much as 14% in precision, accuracy, and F1-score, showing that the self-distillation framework provides greater generalizability for automated retinal disease diagnosis. (3) Quantitative Susceptibility Mapping (QSM) is an advanced magnetic resonance imaging (MRI) technique for quantifying the magnetic susceptibility of the tissue under investigation.
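QSM reconstruction inverts a dipole convolution that maps tissue susceptibility to the measured local field. A standard k-space sketch of this forward model (the textbook formulation, not the thesis's exact two-step method; shapes and values are illustrative):

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel in k-space, D(k) = 1/3 - kz^2 / |k|^2 (B0 along z)."""
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0  # undefined at k = 0; conventionally set to zero
    return d

def forward_field(chi):
    """Local field produced by a susceptibility distribution chi (voxel grid)."""
    d = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(d * np.fft.fftn(chi)))

# Data-consistency idea: compare a measured field against the field predicted
# from a candidate susceptibility map, and refine the map until they agree.
chi = np.random.default_rng(0).normal(size=(8, 8, 8))
field = forward_field(chi)
residual = field - forward_field(chi)
print(np.abs(residual).max())  # 0.0 — a map is consistent with its own field
```

The refinement described in the summary can be read as driving exactly this kind of residual toward zero for the measured local field, rather than retraining the network.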
Deep learning methods have shown promising results in deconvolving the susceptibility distribution from the local field measured via the MR phase. Although existing deep learning-based QSM methods can yield high-quality reconstructions, they are highly biased toward the training data distribution, with limited scope for generalizability. This thesis work developed a two-step reconstruction approach that improves model-based methods, including deep learning-based QSM reconstruction, by reducing the inherent bias toward the training data distribution without adapting the model weights. In the developed framework, the susceptibility map predicted by a model-based reconstruction method is refined to ensure consistency with the measured local field. The developed method was validated on existing deep learning models as well as other model-based approaches for QSM of the brain, and it yielded improved reconstructions for MRI volumes obtained with different acquisition settings, including for deep learning models trained in constrained (limited) data settings. (4) Rolling convolution filters were developed as part of this thesis work to address the challenge of deep learning models with an extremely high number of parameters for limited-class medical image analysis tasks. The aim was to develop generalizable, task-agnostic, extremely lightweight deep learning models for medical image analysis. These models were built using a novel filter design element called the rolling convolution filter: a set of new filters is generated by performing a channel-wise rolling (circular shifting) operation on a single base filter. Each new filter is unique, but the number of learnable parameters is restricted to that of the base filter.
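The channel-wise rolling idea can be sketched as follows, using NumPy's `np.roll` for the circular shift (a hypothetical illustration of the design element, not the thesis's implementation):

```python
import numpy as np

def rolling_filters(base: np.ndarray, n_filters: int) -> np.ndarray:
    """Derive a filter bank by channel-wise circular shifts of one base filter.

    base: (C, kH, kW) — the single filter that holds learnable parameters.
    Returns (n_filters, C, kH, kW); every derived filter reuses the same
    weights, so the learnable parameter count stays that of `base` alone.
    """
    c = base.shape[0]
    return np.stack(
        [np.roll(base, shift=i % c, axis=0) for i in range(n_filters)]
    )

base = np.arange(3 * 3 * 3, dtype=np.float32).reshape(3, 3, 3)  # one base filter
bank = rolling_filters(base, 3)
print(bank.shape)                      # (3, 3, 3, 3)
print(np.array_equal(bank[0], base))   # True — shift of zero reproduces the base
```

Because each shift permutes the channels differently, the derived filters are distinct (unless the base filter is symmetric across channels), yet training only updates the base filter's weights.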
The developed rolling convolution filter design substantially reduces the number of redundant parameters and the model size, with minimal change in the performance of these lightweight models for a given task. Overall, this thesis work focused on developing deep learning architectures, strategies, corrections, and filter designs that improve generalizability across imaging protocols/scanners and deployment devices. Even though these developments were problem-specific, the methodology and/or framework should allow them to be applied to other medical image analysis tasks. With generalizability being the main bottleneck to the clinical adoption of deep learning models, this thesis work is a step toward addressing it, and the solutions provided here should allow deep learning models to be used more widely in clinical settings.

Includes bibliographical references




                             Copyright © 2023. J.R.D. Tata Memorial Library, Indian Institute of Science, Bengaluru - 560012

