Clinicians commonly have difficulty recognising brain stroke injury in medical images. With advances in information technology, however, a new method is expected to support clinicians' judgement in recognising brain stroke injury by stroke type (hemorrhagic, ischemic, and normal). Therefore, this study aims to develop a new model to classify hemorrhagic, ischemic, and normal cases based on Diffusion-Weighted (DW) Magnetic Resonance (MR) images. This study argues that a CNN with Quad Convolutional Layers (QCL-CNN) can classify stroke type. Two experiments were conducted to assess the performance of QCL-CNN. The first experiment partitioned the MR image dataset into 80 percent training and 20 percent testing sets; the second performed ten-fold cross-validation on the image dataset. In the first experiment, the classification accuracies obtained were 93.90 percent (1st dataset) and 94.96 percent (2nd dataset). In the second experiment, the classification accuracies obtained were 95.91 percent (1st dataset) and 97.31 percent (2nd dataset). The data for this study were obtained from an Indonesian hospital and from the public Ischemic Stroke Lesion Segmentation (ISLES) dataset. The QCL-CNN model was also compared with other architectures, such as AlexNet, ResNet50, and VGG16; the comparison shows that QCL-CNN performs better than the other models.
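The two evaluation protocols described above (an 80/20 hold-out split and ten-fold cross-validation) can be sketched as follows. This is a minimal illustration of the data-partitioning step only, not the authors' code: the class labels, function names, and index-based partitioning scheme are assumptions for demonstration.

```python
import random

# Stroke classes used in the study (from the abstract).
LABELS = ["hemorrhagic", "ischemic", "normal"]

def train_test_split(items, test_fraction=0.2, seed=0):
    """80/20 hold-out split, as in the first experiment.

    Returns (train, test) lists after a seeded shuffle.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

def k_fold_indices(n_items, k=10):
    """Ten-fold cross-validation partitions, as in the second experiment.

    Assigns each sample index to one of k disjoint folds; each fold
    serves once as the test set while the rest form the training set.
    """
    folds = [[] for _ in range(k)]
    for i in range(n_items):
        folds[i % k].append(i)
    return folds
```

For example, with 100 images, `train_test_split(range(100))` yields 80 training and 20 testing samples, and `k_fold_indices(100)` yields ten disjoint folds of 10 indices each.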

Original language: English
Article number: 0228
Journal: International Journal of Intelligent Engineering and Systems
Issue number: 1
Publication status: Published - 2022


  • Brain stroke injury
  • Classification
  • DW-MRI
  • Image
  • Preprocessing
  • QCL-CNN


Quad Convolutional Layers (QCL) CNN Approach for Classification of Brain Stroke in Diffusion Weighted (DW) - Magnetic Resonance Images (MRI)
