TY - JOUR
T1 - A systematic evaluation of shallow convolutional neural network on CIFAR dataset
AU - Rachmadi, Reza Fuad
AU - Eddy Purnama, I. Ketut
AU - Purnomo, Mauridhi Hery
AU - Hariadi, Mochamad
N1 - Publisher Copyright:
© International Association of Engineers.
PY - 2019/5/1
Y1 - 2019/5/1
N2 - The Convolutional Neural Network (CNN) classifier is a popular classifier used to solve many problems, including image classification and object recognition. A CNN classifier is usually improved by designing a deeper and bigger network, which requires more memory and computational power to run. In this paper, we analyze and optimize the use of a small and shallow CNN classifier on the CIFAR dataset. The Karpathy ConvNetJS CIFAR10 model is used as the base network of our classifier and is extended by adding a max-min pooling method. Max-min pooling is used to exploit both the negative and positive responses of the convolution process, which in theory allows the classifier to be trained more effectively. We evaluate several different configurations to analyze the effectiveness of the classifier by combining the training algorithm, batch normalization configuration, weight initialization method, dropout regularization configuration, and heavy data augmentation. To ensure that the designed classifier remains a small and shallow CNN classifier, we limit it to a maximum of 15 layers. Experiments on the CIFAR10 and CIFAR100 datasets show that, by compacting the kernels in each layer, the classifier achieves good accuracy, comparable to other state-of-the-art classifiers with a similar number of layers, with an error rate of 6.99% on the CIFAR10 dataset and 29.41% on the CIFAR100 dataset.
AB - The Convolutional Neural Network (CNN) classifier is a popular classifier used to solve many problems, including image classification and object recognition. A CNN classifier is usually improved by designing a deeper and bigger network, which requires more memory and computational power to run. In this paper, we analyze and optimize the use of a small and shallow CNN classifier on the CIFAR dataset. The Karpathy ConvNetJS CIFAR10 model is used as the base network of our classifier and is extended by adding a max-min pooling method. Max-min pooling is used to exploit both the negative and positive responses of the convolution process, which in theory allows the classifier to be trained more effectively. We evaluate several different configurations to analyze the effectiveness of the classifier by combining the training algorithm, batch normalization configuration, weight initialization method, dropout regularization configuration, and heavy data augmentation. To ensure that the designed classifier remains a small and shallow CNN classifier, we limit it to a maximum of 15 layers. Experiments on the CIFAR10 and CIFAR100 datasets show that, by compacting the kernels in each layer, the classifier achieves good accuracy, comparable to other state-of-the-art classifiers with a similar number of layers, with an error rate of 6.99% on the CIFAR10 dataset and 29.41% on the CIFAR100 dataset.
KW - CIFAR dataset
KW - Deep convolutional neural network
KW - Max-min pooling
KW - Shallow CNN classifier
UR - http://www.scopus.com/inward/record.url?scp=85066253516&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85066253516
SN - 1819-656X
VL - 46
SP - 365
EP - 376
JO - IAENG International Journal of Computer Science
JF - IAENG International Journal of Computer Science
IS - 2
ER -