Authors: Hattar Hattar (Zarqa University); Abu-Jassar Amer (Department of Computer Science, College of Information Technology, Amman Arab University, Amman); Hafez Mohamed (INTI-IU-University; Shinawatra University); Al-Sharo Yaser (Ajluon National University); Yevsieiev Vladyslav (Automation and Robotics, Kharkiv National University of Radio Electronics); Lyashenko Vyacheslav (Kharkiv National University of Radio Electronics)
This study examines a deep learning model for spatial localization and identification of objects within a collaborative robot's workspace through the integration of computer vision, with applications in settings such as public health and crowded workplaces. A method based on the YOLOv8 model is proposed, incorporating a depth estimate for each identified object, which allows the spatial coordinates of objects to be represented in the format (X, Y, Z). The simulation results demonstrate the effectiveness of neural network-based identification and dynamic localization under various experimental environmental conditions: the average depth reconstruction error is MSE = 0.018–0.026 m² and the average frame processing time is ≈ 18–22 ms, confirming real-time operation. The generated graphs, showing the variation in object count over time and the objects' spatial distribution, are used to assess the algorithm's stability and its suitability for adaptive control systems. The proposed implementation uses Python within the PyCharm environment, ensuring the flexibility and scalability of the analyzed systems.
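To make the described pipeline concrete, the sketch below shows one plausible way to combine YOLOv8 detections with a per-object depth lookup to obtain (X, Y, Z) coordinates. This is not the authors' code: the aligned depth map, the pinhole back-projection step, and the camera intrinsics (FX, FY, CX, CY) are assumptions introduced for illustration only.

```python
# Minimal sketch (assumed, not the paper's implementation): YOLOv8 detection plus
# per-object depth lookup, back-projected to camera coordinates (X, Y, Z).
# Assumes an RGB frame with an aligned depth map in metres and known intrinsics.
import numpy as np
from ultralytics import YOLO  # pip install ultralytics

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # hypothetical camera intrinsics


def localize_objects(frame: np.ndarray, depth_map: np.ndarray, model: YOLO):
    """Detect objects with YOLOv8 and return (label, confidence, X, Y, Z) per object."""
    detections = []
    results = model(frame, verbose=False)[0]           # single-image inference
    for box in results.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()          # bounding box in pixels
        u, v = int((x1 + x2) / 2), int((y1 + y2) / 2)  # box centre pixel
        z = float(depth_map[v, u])                     # depth at the centre, metres
        if z <= 0:                                     # skip invalid depth readings
            continue
        # Back-project the centre pixel into camera coordinates (pinhole model).
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        label = model.names[int(box.cls[0])]
        detections.append((label, float(box.conf[0]), x, y, z))
    return detections


if __name__ == "__main__":
    model = YOLO("yolov8n.pt")                           # pretrained YOLOv8 nano weights
    frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in RGB frame
    depth = np.full((480, 640), 1.5, dtype=np.float32)   # stand-in depth map (1.5 m)
    for det in localize_objects(frame, depth, model):
        print(det)
```

In practice the depth map would come from a stereo or RGB-D sensor aligned to the colour frame; the per-frame loop above is lightweight enough to be consistent with the ~18–22 ms processing times reported in the abstract, though the actual figures depend on hardware and model size.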
Keywords: computer vision, collaborative robot, deep learning, Industry 5.0, object identification, spatial localization, YOLOv8
Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)
Date of Publication: --
DOI: -
Publisher: IEEE