Lane Detection in the Presence of Occlusion Using a Deep Neural Network
Tabriz Journal of Electrical Engineering
Volume 55, Issue 4 (Serial No. 114), Dey 1404 (December 2025–January 2026), Pages 697–706. Full text (690.49 K)
Article type: Scientific-Research
DOI: 10.22034/tjee.2025.66183.4982
Authors
Alireza Rahimi Bidmeshki; Alireza Behrad*
Faculty of Engineering, Shahed University
Abstract
Lane detection is a key component in the development of autonomous vehicles, enabling real-time identification of driving lanes and compliance with traffic regulations. Although current models perform promisingly in controlled environments, they often face significant challenges in real-world scenarios, such as reduced lane visibility due to snow, dust, traffic, or missing lane markings. This paper presents a new lane detection method that combines the spatio-temporal features of video frames using Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs) to improve performance under occlusion. By casting lane detection as a sequential learning problem, the hybrid CNN-LSTM network effectively extracts spatio-temporal features. The architecture fuses spatial and temporal information, thereby increasing robustness to occlusion and to varying lighting conditions. The proposed model was evaluated under both low- and high-occlusion conditions on different datasets and compared against a baseline architecture; the results confirm the effectiveness of the proposed method. Under low occlusion, the model achieves an F1 score of about 96%, comparable to the baseline. Under heavy occlusion, however, the baseline degrades, while the proposed model remains robust and still achieves an F1 score of about 96%.
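To make the idea concrete, below is a minimal sketch (in PyTorch) of the kind of hybrid CNN-LSTM detector the abstract describes: per-frame spatial features from a CNN encoder are aggregated over a short window of frames by an LSTM, and the final hidden state is decoded into a lane mask. The ResNet-18 trunk, window length, pooled feature size, and decoder head are all illustrative assumptions; the abstract does not specify the paper's actual architecture.

```python
# Sketch of a hybrid CNN-LSTM lane detector: a CNN extracts spatial
# features per frame, an LSTM aggregates them over time, and the last
# hidden state is decoded into a coarse lane mask. All hyperparameters
# here are assumptions for illustration, not the paper's values.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTMLaneDetector(nn.Module):
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep the convolutional trunk; drop the pooling/classification head.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d((8, 8))
        self.lstm = nn.LSTM(input_size=512 * 8 * 8,
                            hidden_size=hidden_size,
                            batch_first=True)
        # Decode the final hidden state into lane-mask logits (assumed head).
        self.decoder = nn.Sequential(
            nn.Linear(hidden_size, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, clip):
        # clip: (batch, time, 3, H, W), a short window of video frames.
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.reshape(b * t, c, h, w))  # spatial features per frame
        feats = self.pool(feats).flatten(1).reshape(b, t, -1)
        seq_out, _ = self.lstm(feats)                       # temporal aggregation
        return self.decoder(seq_out[:, -1])                 # lane-mask logits

model = CNNLSTMLaneDetector()
mask_logits = model(torch.randn(2, 5, 3, 256, 256))  # -> (2, 1, 32, 32)
```

Formulating detection over a window of frames in this way lets a frame in which the markings are occluded borrow evidence from neighboring frames in which they are visible, which is the robustness mechanism the abstract attributes to the spatio-temporal fusion.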
Keywords
Lane detection; deep models; LSTM; ResNet; heavy occlusion