Performance Evaluation of Pre-Trained Convolutional Neural Network Model for Skin Disease Classification
Abstract
Indonesia is a tropical country with a wide range of skin diseases. Tinea versicolor, ringworm, and scabies are the skin diseases most commonly suffered by Indonesians. Classification of these three diseases can be automated with artificial intelligence and deep learning, since diagnosis by a human expert requires considerable money and time. The main challenge in classifying skin diseases lies in data collection: health data cannot be obtained freely, as consent from the patient or hospital is required. Therefore, to overcome the limited amount of data, pre-trained CNNs are used. A pre-trained CNN model has already learned patterns from thousands of images, so few images are needed to train it for a new task. In this study, five pre-trained CNN models were compared: VGGNet16, MobileNetV2, InceptionResNetV2, ResNet152V2, and DenseNet201. The aim is to determine which CNN model performs best at classifying skin diseases with a limited amount of image data. The test results show that ResNet152V2 has the best classification ability, with the highest accuracy, precision, recall, and F1 score: 95.84%, 0.963, 0.96, and 0.956, respectively. ResNet152V2 also reached 95% accuracy in the shortest training time, which is likely due to the addition of a 20% dropout rate.
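The accuracy, precision, recall, and F1 score reported above are standard metrics derived from a multi-class confusion matrix. The sketch below shows how the macro-averaged versions can be computed in plain Python; the 3×3 confusion matrix (rows = true class, columns = predicted class, for tinea versicolor, ringworm, and scabies) is illustrative only and is not taken from the paper's results.

```python
def macro_metrics(cm):
    """Return (accuracy, macro precision, macro recall, macro F1)
    for a square multi-class confusion matrix."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    precisions, recalls = [], []
    for i in range(n):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp  # predicted class i, but wrong
        fn = sum(cm[i][c] for c in range(n)) - tp  # true class i, but missed
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p = sum(precisions) / n
    r = sum(recalls) / n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return correct / total, p, r, f1

# Hypothetical confusion matrix for the three skin-disease classes.
cm = [[48, 1, 1],
      [2, 47, 1],
      [1, 2, 47]]
acc, prec, rec, f1 = macro_metrics(cm)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")
```

Macro averaging treats each of the three disease classes equally, which matters when the classes are imbalanced, as is common in medical image datasets.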
DOI: 10.30595/juita.v10i1.12041
This work is licensed under a Creative Commons Attribution 4.0 International License.
ISSN: 2579-8901