In-person

Model of Spatial Localization and Identification of Objects in the Working Area of a Collaborative Robot

Speakers: Hattar Hattar

Track: Track 7: Pattern Recognition, Computer Vision and Image Processing

📑 No Slides 🎬 No Video

Abstract

This study examines a deep learning model for the spatial localization and identification of objects within the workspace of a collaborative robot through the integration of computer vision, with applications in settings such as public health and crowded workplaces. A method based on the YOLOv8 architecture is proposed, incorporating depth estimation for each identified object. The approach represents the spatial coordinates of objects in the format (X, Y, Z). The simulation outcomes illustrate the efficacy of neural-network-based identification and dynamic localization under various experimental environmental conditions, showing an average depth reconstruction error of MSE = 0.018–0.026 m² and an average frame processing time of ≈ 18–22 ms, confirming real-time operation. The generated graphs evaluate the algorithm's stability and its suitability for implementation in adaptive control systems by showing the variation in object quantity over time and the objects' spatial distribution. The proposed implementation uses Python within the PyCharm environment, ensuring the flexibility and scalability of the analyzed systems.
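The abstract does not include code, but the core geometric step it describes — converting a detected object's image position plus its estimated depth into (X, Y, Z) coordinates — can be sketched with a standard pinhole-camera back-projection. This is a minimal illustration, not the authors' implementation; the intrinsics (fx, fy, cx, cy), the bounding box, and the depth value below are all hypothetical placeholders.

```python
import numpy as np

def pixel_to_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth (metres) to camera-frame (X, Y, Z).

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 640x480 depth camera.
fx = fy = 525.0
cx, cy = 320.0, 240.0

# Suppose a detector (e.g. YOLOv8) returned this bounding box (x1, y1, x2, y2);
# take its centre pixel and a representative depth sampled inside the box.
box = (300, 220, 340, 260)
u = (box[0] + box[2]) / 2   # 320.0
v = (box[1] + box[3]) / 2   # 240.0
depth = 0.75                # metres (placeholder value)

xyz = pixel_to_xyz(u, v, depth, fx, fy, cx, cy)
print(xyz)  # a point on the optical axis maps to X = Y = 0, Z = depth
```

Each detection would yield one such (X, Y, Z) point in the camera frame; a fixed camera-to-robot transform would then express it in the collaborative robot's workspace coordinates.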

Speakers

Hattar Hattar
Associate Professor
Zarqa University

Details

Type
In-person
Model
OFFLINE
Language
EN
Timezone
UTC+8