AED-Net: Attention-Based Detection Model for Disabled Signage Detection 


Vol. 49,  No. 7, pp. 976-982, Jul.  2024
10.7840/kics.2024.49.7.976


  Abstract

The aim of designated parking spaces for individuals with disabilities is to ensure that only vehicles displaying proper handicapped signage use them, while preventing unauthorized vehicles from occupying those spaces. Achieving this requires real-time monitoring. Existing two-stage object detection models suffer from slow image processing, and enhanced backbones with feature pyramid networks are burdened with expanded parameter counts. The YOLOv5 model is a compelling choice owing to its superior speed and performance compared to existing models; therefore, this study proposes modifications to a baseline YOLOv5 model. We replace the original 9 backbone blocks and 4 C3 blocks with 6 and 4 EfficientNet blocks, respectively. These EfficientNet blocks have fewer parameters yet offer higher accuracy in detecting disabled signs, among other types of signs, on car windshields. To compensate for the reduced number of blocks, we incorporate an attention mechanism into the proposed architecture before the detection phase, enabling the model to focus on the regions crucial to the task. Furthermore, we employ the more advanced AdamW optimizer to prevent overfitting. With these enhancements, we propose a novel object detector, the attention-based efficient detection model (AED-Net). To assess the effectiveness of the proposed approach, we gathered and labeled a dataset of images of cars displaying disabled signage on their windshields. Experiments on this dataset demonstrate that the proposed model achieves an F1 score of 0.73, compared to 0.57 for the baseline model, while using 10 percent fewer parameters.
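The paper's training code is not reproduced here; as a minimal sketch of the AdamW idea the abstract invokes (decoupled weight decay applied directly to the parameter, rather than folded into the gradient as in Adam with L2 regularization), the update for a single scalar parameter can be written as follows. All hyperparameter values below are illustrative defaults, not the paper's settings.

```python
import math

def adamw_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One AdamW update for a scalar parameter theta.

    The weight-decay term is applied directly to theta (decoupled),
    not added to the gradient -- the distinguishing feature of AdamW.
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * (m_hat / (math.sqrt(v_hat) + eps)
                          + weight_decay * theta)
    return theta, m, v

# usage: a few steps on f(x) = x^2 (grad = 2x) drive x toward 0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    x, m, v = adamw_step(x, 2 * x, m, v, t, lr=0.05)
```

In practice this corresponds to using a framework's built-in AdamW optimizer (e.g. `torch.optim.AdamW`) in place of Adam; the decay acts as a regularizer, which is the overfitting-prevention role the abstract describes.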



  Cite this article

[IEEE Style]

A. A. U. Rakhmonov, B. Subramanian, B. A. Varnousefaderani, J. Kim, "AED-Net: Attention-Based Detection Model for Disabled Signage Detection," The Journal of Korean Institute of Communications and Information Sciences, vol. 49, no. 7, pp. 976-982, 2024. DOI: 10.7840/kics.2024.49.7.976.
