YOLOv4 DeepSort ANN for Traffic Collision Detection
DOI: https://doi.org/10.23887/janapati.v12i3.62923

Keywords: traffic collision detection, YOLOv4, DeepSort, object detection, object tracking

Abstract
Every collision must be handled immediately to prevent further harm, damage, and traffic bottlenecks, so a systematic approach to accident detection is needed to speed up the response. Our proposed accident detection system operates in three stages: vehicle object detection, multiple object tracking, and vehicle interaction analysis. YOLOv4 is employed for object detection, while DeepSort is used to track multiple vehicle objects. The position and interaction data of each object in the video frame are then analyzed by an Artificial Neural Network (ANN) to identify collisions. Collisions involving a single vehicle that do not affect other road users are excluded from the scope of this study. In our tests, the ANN model achieves an F-measure of 0.97 for objects not involved in a collision and 0.88 for objects involved in a collision.
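To make the three-stage pipeline concrete, the sketch below illustrates only the third stage (vehicle interaction analysis). It assumes the YOLOv4 detection and DeepSort tracking stages already supply per-frame bounding boxes keyed by track ID; the specific feature set (box overlap, centroid distance, per-vehicle displacement) and the network size are illustrative assumptions, not the authors' exact design.

# Minimal sketch of the third stage, assuming the first two stages
# (YOLOv4 detection + DeepSort tracking) already provide, for every frame,
# a list of tracked vehicles as (track_id, x, y, w, h) bounding boxes.
# Feature choices and layer sizes are illustrative assumptions only.
import numpy as np
import tensorflow as tf


def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0


def pair_features(prev_a, cur_a, prev_b, cur_b):
    """Interaction features for one pair of tracked vehicles across two
    consecutive frames: box overlap, centroid distance, and the frame-to-frame
    displacement (a proxy for speed) of each vehicle."""
    centroid = lambda box: np.array([box[0] + box[2] / 2, box[1] + box[3] / 2])
    ca, cb = centroid(cur_a), centroid(cur_b)
    pa, pb = centroid(prev_a), centroid(prev_b)
    return np.array([
        iou(cur_a, cur_b),        # current overlap of the two boxes
        np.linalg.norm(ca - cb),  # centroid distance in pixels
        np.linalg.norm(ca - pa),  # displacement of vehicle A
        np.linalg.norm(cb - pb),  # displacement of vehicle B
    ], dtype=np.float32)


def build_ann(n_features=4):
    """Small fully connected binary classifier: collision vs. no collision."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model


# Usage sketch: X is an (N, 4) array of pair_features rows computed from
# labelled video frames and y the corresponding 0/1 collision labels.
#   model = build_ann()
#   model.fit(X, y, epochs=50, batch_size=32)
#   collision_prob = model.predict(pair_features(pa, ca, pb, cb)[None, :])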
License
Copyright (c) 2023 Arliyanti Nurdin, Bernadus Seno Aji, Yupit Sudianto, Mardhiyyah Rafrin
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with Janapati agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0) that allows others to share the work with an acknowledgment of the work's authorship and its initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work. (See The Effect of Open Access.)