Indonesian Plate Number Identification Using YOLACT and Mobilenetv2 in the Parking Management System
DOI:
DOI:
https://doi.org/10.30595/juita.v9i1.9230

Keywords:
ALPR, convolutional neural network, frame sampling, horizontal projection, YOLACT

Abstract
A vehicle registration plate serves as a vehicle's identity. In recent years, technology to identify plate numbers automatically, known as Automatic License Plate Recognition (ALPR), has developed rapidly. This work uses a Convolutional Neural Network (CNN) and YOLACT to recognize plate numbers from video. The recognition process consists of three stages. The first stage determines the coordinates of the number-plate area in a video frame using YOLACT. The second stage separates each character inside the plate number using morphological operations, horizontal projection, and topological structural analysis. The third stage recognizes each character candidate using a MobileNetV2 CNN. To reduce computation time, frame sampling is performed so that only some of the video frames are processed. The experiments use the frame-sampling interval, the YOLACT epoch count, the MobileNetV2 epoch count, and the validation-data ratio as parameters. The best result, with 250 ms frame sampling, reduces computation time by up to 78%, while accuracy is determined by the MobileNetV2 model trained for 100 epochs with a validation split ratio of 0.1, which yields an average accuracy of 83.33%. Frame sampling can reduce computation time; however, a higher frame-sampling interval causes the system to fail to locate the plate region.
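Two of the steps described above can be sketched in a few lines. The snippet below is a minimal, illustrative sketch, not the paper's implementation: it shows interval-based frame sampling and projection-profile character segmentation on an already-binarized plate image. Function names, parameters, and thresholds are assumptions for illustration; the paper's actual pipeline uses YOLACT for plate detection and morphological operations plus border following for segmentation, which are omitted here.

```python
import numpy as np

def sample_frame_indices(fps, duration_s, interval_ms):
    """Indices of the frames kept when sampling one frame every interval_ms.

    Processing only these frames, instead of every frame, is what reduces
    the overall computation time.
    """
    step = max(1, int(round(fps * interval_ms / 1000.0)))  # frames per sample
    total = int(fps * duration_s)                          # frames in the video
    return list(range(0, total, step))

def segment_characters(binary, min_width=2):
    """Split a binarized plate image into per-character column spans.

    Uses a projection profile: the count of foreground pixels in each
    column. Runs of non-empty columns are taken as character candidates;
    runs narrower than min_width are discarded as noise.
    """
    profile = binary.sum(axis=0)  # foreground pixels per column
    spans, start = [], None
    for x, count in enumerate(profile):
        if count > 0 and start is None:
            start = x                      # a character run begins
        elif count == 0 and start is not None:
            if x - start >= min_width:
                spans.append((start, x))   # a character run ends
            start = None
    if start is not None and binary.shape[1] - start >= min_width:
        spans.append((start, binary.shape[1]))  # run touching the right edge
    return spans
```

For example, sampling a 25 fps video every 250 ms keeps roughly one frame in six; each kept frame would then be passed to the detector, and each detected plate crop to the segmenter above before character classification.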
License

JUITA: Jurnal Informatika is licensed under a Creative Commons Attribution 4.0 International License.