Abstract
Background and objective
Malignant primary brain tumors cause more years of life lost than any other cancer. Grade 4 glioma is particularly devastating: the median survival is less than six months without treatment and only 14.6 months with standard-of-care treatment. Accurate identification of the overall survival time of patients with brain tumors is of profound importance in many clinical applications. Automated image analytics with magnetic resonance imaging (MRI) can provide insights into the prognosis of patients with brain tumors.
Methods
In this paper, we propose SurvNet, a low-complexity deep learning architecture based on a convolutional neural network that classifies the overall survival time of patients with brain tumors into long-term and short-term survival cohorts. By incorporating diverse MRI modalities as inputs, we enable deep feature extraction at various anatomical sites and thereby improve predictive accuracy. We compare SurvNet with the Inception V3, VGG 16, and ensemble CNN models on pre-operative magnetic resonance image datasets. We also analyze the effect of segmented brain tumor data and of the training data on system performance.
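The abstract does not specify the SurvNet layer configuration, so the following is only a minimal sketch of the general idea, assuming the MRI modalities (e.g., T1, T1ce, T2, FLAIR) are stacked as input channels of a small CNN with a two-class output; the layer counts, filter sizes, and class names are illustrative assumptions, not the published architecture.

```python
# Hypothetical sketch of a low-complexity multi-modal CNN classifier.
# Layer sizes and the 4-channel modality stacking are assumptions,
# not the published SurvNet design.
import torch
import torch.nn as nn

class SurvNetSketch(nn.Module):
    def __init__(self, in_modalities: int = 4, num_classes: int = 2):
        super().__init__()
        # Each MRI modality is provided as one input channel.
        self.features = nn.Sequential(
            nn.Conv2d(in_modalities, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two output classes: long-term vs. short-term survival.
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Usage: a batch of 8 slices, 4 modalities stacked as channels, 128x128 pixels.
model = SurvNetSketch(in_modalities=4)
logits = model(torch.randn(8, 4, 128, 128))
print(logits.shape)  # torch.Size([8, 2])
```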
Results
Several measures, including accuracy, precision, and recall, are calculated to examine the performance of SurvNet under three-fold cross-validation. SurvNet with the T1 MRI modality achieved 62.7% accuracy, compared with 52.9% for the Inception V3 model, 58.5% for the VGG 16 model, and 54.9% for the ensemble CNN model. With more MRI input modalities, SurvNet becomes more accurate, reaching 76.5% accuracy with four modalities. When the segmented data are added, SurvNet achieves its highest accuracy of 82.4%.
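For illustration, the sketch below shows how such metrics can be aggregated over three-fold cross-validation; the synthetic features and the logistic-regression placeholder are stand-ins for the actual SurvNet pipeline and data, which are not described in the abstract.

```python
# Illustrative sketch of a 3-fold cross-validated evaluation; the synthetic
# dataset and placeholder classifier are assumptions, not the SurvNet setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import StratifiedKFold

# Stand-in features and binary survival labels.
X, y = make_classification(n_samples=150, n_features=32, random_state=0)
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

scores = {"accuracy": [], "precision": [], "recall": [], "f1": []}
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    scores["accuracy"].append(accuracy_score(y[test_idx], y_pred))
    scores["precision"].append(precision_score(y[test_idx], y_pred))
    scores["recall"].append(recall_score(y[test_idx], y_pred))
    scores["f1"].append(f1_score(y[test_idx], y_pred))

# Report the mean of each metric across the three folds.
for name, vals in scores.items():
    print(f"{name}: {np.mean(vals):.3f}")
```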
Conclusions
The results show that SurvNet achieves higher accuracy and F1-score than the compared models. Our research also shows that, with multiparametric MRI modalities, SurvNet learns more image features and achieves better classification accuracy. SurvNet with the complete scenario, i.e., segmented data and four MRI modalities, achieved the best accuracy, demonstrating the value of segmentation information in survival time prediction.
| Original language | English |
|---|---|
| Article number | e32870 |
| Pages (from-to) | e32870 |
| Journal | Heliyon |
| Volume | 10 |
| Issue number | 12 |
| Early online date | 12 Jun 2024 |
| Publication status | Published - 30 Jun 2024 |