
Multimodal MRI Image Decision Fusion-Based Network for Glioma Classification.

Abstract

Purpose:
Glioma is the most common primary brain tumor, with varying degrees of aggressiveness and prognosis. Accurate glioma classification is essential for treatment planning and prognosis prediction. The main purpose of this study is to design a novel and effective algorithm to further improve the performance of glioma subtype classification using multimodal MRI images.
Method:
MRI images of four modalities for 221 glioma patients were collected from the Computational Precision Medicine: Radiology-Pathology 2020 challenge, including T1, T2, T1ce, and fluid-attenuated inversion recovery (FLAIR) MRI images, to classify astrocytoma, oligodendroglioma, and glioblastoma. We proposed a multimodal MRI image decision fusion-based network for improving glioma classification accuracy. First, the MRI images of each modality were input into a pre-trained tumor segmentation model to delineate the regions of tumor lesions. Then, the whole tumor regions were cropped centrally from the original MRI images and min-max normalized. Subsequently, a deep learning-based network was designed based on a unified DenseNet structure, which extracts features through a series of dense blocks. After that, two fully connected layers were used to map the features onto the three glioma subtypes. During the training stage, the network was trained separately on the tumor-segmented images of each modality, and for each modality the model with the best accuracy on our testing set was retained. During the inference stage, a linear weighted module based on a decision fusion strategy was applied to assemble the predicted probabilities of the pre-trained models obtained in the training stage. Finally, the performance of our method was evaluated in terms of accuracy, area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), etc.
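To make the pipeline above concrete, the following is a minimal PyTorch sketch of the per-modality classifier (dense-block feature extractor followed by two fully connected layers) and the linear weighted decision fusion of per-modality probabilities. It assumes torchvision's DenseNet-121 as the backbone; the hidden-layer width, patch size, channel handling, and fusion weights are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the per-modality pipeline and decision fusion described
# in the Method section; model choices, sizes, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121

MODALITIES = ["t1", "t1ce", "t2", "flair"]
NUM_CLASSES = 3  # astrocytoma, oligodendroglioma, glioblastoma


def min_max_normalize(img: torch.Tensor) -> torch.Tensor:
    """Scale intensities of a cropped tumor patch to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)


class ModalityClassifier(nn.Module):
    """DenseNet dense blocks followed by two fully connected layers (sizes assumed)."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        backbone = densenet121(weights=None)
        self.features = backbone.features           # dense blocks as feature extractor
        self.fc1 = nn.Linear(1024, 256)              # hidden width is an assumption
        self.fc2 = nn.Linear(256, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = F.relu(self.features(x))                 # (B, 1024, H', W')
        f = F.adaptive_avg_pool2d(f, 1).flatten(1)   # global pooling -> (B, 1024)
        return self.fc2(F.relu(self.fc1(f)))         # logits over the 3 subtypes


def fuse_decisions(logits_per_modality: dict[str, torch.Tensor],
                   weights: dict[str, float]) -> torch.Tensor:
    """Linear weighted decision fusion over per-modality softmax probabilities."""
    fused = sum(weights[m] * F.softmax(logits_per_modality[m], dim=1)
                for m in MODALITIES)
    return fused / sum(weights.values())             # normalized fused probabilities


if __name__ == "__main__":
    # Toy example: one cropped tumor patch per modality (single slice, 128x128 assumed).
    models = {m: ModalityClassifier().eval() for m in MODALITIES}
    # torchvision's DenseNet expects 3-channel input, so the single MRI channel is repeated.
    patches = {m: min_max_normalize(torch.rand(1, 1, 128, 128)).repeat(1, 3, 1, 1)
               for m in MODALITIES}
    with torch.no_grad():
        logits = {m: models[m](patches[m]) for m in MODALITIES}
    # Placeholder fusion weights; in practice they would be tuned on validation data.
    weights = {"t1": 0.2, "t1ce": 0.35, "t2": 0.2, "flair": 0.25}
    probs = fuse_decisions(logits, weights)
    print("fused class probabilities:", probs)

In this sketch each modality has its own trained classifier, and the fusion weights are free parameters of the linear weighted module; how they are set in the original work is not specified here, so the values shown are placeholders.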
Results:
The proposed method achieved an accuracy of 0.878, an AUC of 0.902, a sensitivity of 0.772, a specificity of 0.930, a PPV of 0.862, an NPV of 0.949, and a Cohen's Kappa of 0.773, representing significantly higher performance than existing state-of-the-art methods.
Conclusion:
Compared with current studies, this work demonstrated the effectiveness and superior overall performance of the proposed multimodal MRI image decision fusion-based network for glioma subtype classification, which could be of substantial value in clinical practice.
Authors: Shunchao Guo, Lihui Wang, Qijian Chen, Li Wang, Jian Zhang, Yuemin Zhu
Journal: Frontiers in Oncology (Front Oncol), Vol. 12, Pg. 819673 (2022). ISSN: 2234-943X [Print]. Switzerland
PMID: 35280828 (Publication Type: Journal Article)
Copyright © 2022 Guo, Wang, Chen, Wang, Zhang and Zhu.
