TY - GEN
T1 - Real-Time Object Detection and Classification for Enhanced Independent Navigation
AU - Hidayati, Qory
AU - Kusuma, Hendra
AU - Yusmar, Akmal
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Visually impaired individuals face significant challenges in navigating complex environments independently. This study proposes a lightweight, portable navigation aid system based on Raspberry Pi, integrating a camera, ultrasonic sensors, and the MobileNet deep learning framework for real-time object detection and classification. The system runs on the Raspbian OS and uses OpenCV and TensorFlow Lite to process visual data. A custom dataset of 600 images (person, chair, table) was used to fine-tune the MobileNet model, achieving a classification accuracy of 91.3%. The HC-SR05 ultrasonic sensor attained 99.06% accuracy in distance estimation, with an average deviation of 2.9 cm. The system delivers dual-mode validation (vision + distance) and provides audio feedback based on object type and estimated steps, calculated from the measured distance. It maintains a real-time processing speed of 18 frames per second with a feedback latency between 0.4 and 0.7 seconds. Experimental tests confirm reliable performance across indoor conditions, demonstrating the system's potential as an effective, low-cost assistive technology. This research contributes a novel combination of deep learning and sensor fusion on an embedded platform, offering enhanced autonomy and spatial awareness for visually impaired users.
AB - Visually impaired individuals face significant challenges in navigating complex environments independently. This study proposes a lightweight, portable navigation aid system based on Raspberry Pi, integrating a camera, ultrasonic sensors, and the MobileNet deep learning framework for real-time object detection and classification. The system runs on the Raspbian OS and uses OpenCV and TensorFlow Lite to process visual data. A custom dataset of 600 images (person, chair, table) was used to fine-tune the MobileNet model, achieving a classification accuracy of 91.3%. The HC-SR05 ultrasonic sensor attained 99.06% accuracy in distance estimation, with an average deviation of 2.9 cm. The system delivers dual-mode validation (vision + distance) and provides audio feedback based on object type and estimated steps, calculated from the measured distance. It maintains a real-time processing speed of 18 frames per second with a feedback latency between 0.4 and 0.7 seconds. Experimental tests confirm reliable performance across indoor conditions, demonstrating the system's potential as an effective, low-cost assistive technology. This research contributes a novel combination of deep learning and sensor fusion on an embedded platform, offering enhanced autonomy and spatial awareness for visually impaired users.
KW - Independent navigation
KW - MobileNet
KW - OpenCV
KW - Raspberry Pi
KW - object classification
KW - object detection
KW - ultrasonic sensors
UR - https://www.scopus.com/pages/publications/105025415645
U2 - 10.1109/AIMS66189.2025.11229487
DO - 10.1109/AIMS66189.2025.11229487
M3 - Conference contribution
AN - SCOPUS:105025415645
T3 - 2025 IEEE International Conference on Artificial Intelligence and Mechatronics Systems, AIMS 2025
BT - 2025 IEEE International Conference on Artificial Intelligence and Mechatronics Systems, AIMS 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd IEEE International Conference on Artificial Intelligence and Mechatronics Systems, AIMS 2025
Y2 - 24 May 2025 through 25 May 2025
ER -