This project explores a vision-based approach for the Turtlebot4 to dynamically dodge obstacles while navigating toward a designated destination. Unlike prior work that relies on expensive sensors such as LiDAR or event cameras, this project uses an affordable stereo camera, the OAK-D Pro, for real-time obstacle detection and avoidance. The implementation combines stereo vision processing, RRT* path planning, Bézier curve smoothing, and a unicycle-based control model.
- Obstacle Detection: Uses OAK-D Pro camera to detect obstacles and determine their coordinates.
- Path Planning: Implements RRT* (an asymptotically optimal variant of the Rapidly-exploring Random Tree) to plan collision-free paths.
- Path Smoothing: Uses Bézier curves to create a smooth trajectory for the Turtlebot4.
- Motion Control: Applies a unicycle model for velocity and angular velocity calculations.
- ROS2 Integration: Built using ROS2, allowing modular and scalable implementation.
- Hardware:
- Turtlebot4
- OAK-D Pro Camera
- Raspberry Pi 4B (or compatible computational unit)
- Software:
- ROS2 (Robot Operating System 2)
- OpenCV
- Luxonis DepthAI SDK
- Python 3
```bash
$ git clone https://github.com/waynechu1109/Dodging-Dynamical-Obstacles-Using-Turtlebot4-Camera-Feed.git
$ cd Dodging-Dynamical-Obstacles-Using-Turtlebot4-Camera-Feed
```
Ensure that ROS2 is installed and sourced before running the system:
```bash
$ source /opt/ros/humble/setup.bash
$ ros2 launch turtlebot4_camera camera.launch.py
$ ros2 run obstacle_detection detection_node
$ ros2 run path_planning planner_node
$ ros2 run robot_controller movement_node
```
- The OAK-D Pro camera provides depth information to estimate obstacle positions.
- ROS2 topics `/camera_data` and `/obstacle_info` handle obstacle tracking.
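Back-projecting a detected pixel with its stereo depth into a 3D point in the camera frame follows the standard pinhole model. A minimal sketch (the intrinsics below are illustrative placeholders, not the calibrated OAK-D Pro values):

```python
import numpy as np

def pixel_to_camera_frame(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into the camera frame
    using pinhole intrinsics (focal lengths fx, fy; principal point cx, cy)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics for illustration only:
point = pixel_to_camera_frame(u=400, v=300, depth_m=2.0,
                              fx=800.0, fy=800.0, cx=320.0, cy=240.0)
# point is the obstacle's position in the camera frame, in meters
```

In practice the intrinsics come from the DepthAI calibration data rather than hard-coded constants.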
- Converts obstacle coordinates from the camera frame to the robot's odometry frame using transformation matrices.
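The camera-to-odometry conversion is a rigid-body transform. A minimal sketch with a hypothetical camera pose (in a ROS2 system this rotation and translation would typically come from tf2 rather than being hand-built):

```python
import numpy as np

def camera_to_odom(p_cam, R, t):
    """Transform a point from the camera frame into the odometry frame.
    R: 3x3 rotation of the camera in the odom frame; t: camera origin in odom."""
    return R @ p_cam + t

# Hypothetical pose: camera yawed 90 degrees, 1 m ahead of the odom origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

p_odom = camera_to_odom(np.array([1.0, 0.0, 0.0]), R, t)
```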
- Generates a feasible path avoiding obstacles detected in real-time.
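The core of RRT* can be sketched in 2D as follows. This is a simplified illustration, not the repository's planner: obstacles are circles, collision checking is point-based only (no edge checks), and rewired costs are not propagated to descendants.

```python
import math, random

def rrt_star(start, goal, obstacles, bounds, iters=500, step=0.5, radius=1.0, seed=0):
    """Minimal 2D RRT* sketch. obstacles: (x, y, r) circles; bounds: ((xmin, xmax), (ymin, ymax))."""
    random.seed(seed)
    nodes = [start]          # node positions
    parent = {0: None}       # tree edges
    cost = {0: 0.0}          # cost-to-come for each node

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def collision_free(p):
        return all(dist(p, (ox, oy)) > r for ox, oy, r in obstacles)

    for _ in range(iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        near = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
        d = dist(nodes[near], sample)
        if d == 0:
            continue
        # steer from the nearest node toward the sample, at most `step` away
        s = min(step, d) / d
        new = (nodes[near][0] + (sample[0] - nodes[near][0]) * s,
               nodes[near][1] + (sample[1] - nodes[near][1]) * s)
        if not collision_free(new):
            continue
        # choose the lowest-cost parent within the rewiring radius
        neighbors = [i for i in range(len(nodes)) if dist(nodes[i], new) <= radius]
        best = min(neighbors, key=lambda i: cost[i] + dist(nodes[i], new))
        idx = len(nodes)
        nodes.append(new)
        parent[idx] = best
        cost[idx] = cost[best] + dist(nodes[best], new)
        # rewire neighbors through the new node when that shortens their path
        for i in neighbors:
            c = cost[idx] + dist(nodes[i], new)
            if c < cost[i]:
                parent[i], cost[i] = idx, c

    # walk back from the node nearest the goal to the root
    goal_idx = min(range(len(nodes)), key=lambda i: dist(nodes[i], goal))
    path, i = [], goal_idx
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```

Replanning against moving obstacles amounts to rerunning this search whenever the obstacle set changes.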
- Smooths the RRT* path to create a natural, drivable trajectory.
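Smoothing a segment of waypoints can be illustrated with a cubic Bézier curve in Bernstein form; the control points below are arbitrary example values, not output from the planner:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=20):
    """Sample n points along the cubic Bezier curve defined by four control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Example control points (hypothetical waypoints):
pts = cubic_bezier(np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                   np.array([3.0, 2.0]), np.array([4.0, 0.0]))
```

The curve starts at the first control point and ends at the last, so consecutive Bézier segments can be chained along the RRT* waypoints to produce a continuous, drivable trajectory.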
- Implements a unicycle-based control model to compute velocity and angular velocity.
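A common form of this controller converts the distance and heading error to a waypoint into linear and angular velocity commands. The gains and velocity cap below are illustrative assumptions, not the project's tuned values:

```python
import math

def unicycle_control(x, y, theta, x_g, y_g, k_v=0.5, k_w=1.5, v_max=0.3):
    """Compute (v, omega) steering a unicycle at pose (x, y, theta) toward goal (x_g, y_g).
    k_v, k_w are proportional gains; v_max caps the linear velocity (all hypothetical)."""
    dx, dy = x_g - x, y_g - y
    rho = math.hypot(dx, dy)                              # distance to goal
    alpha = math.atan2(dy, dx) - theta                    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = min(k_v * rho, v_max)
    omega = k_w * alpha
    return v, omega
```

The resulting (v, omega) pair maps directly onto a ROS2 `geometry_msgs/Twist` command for the Turtlebot4.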
The robot successfully detects moving obstacles, recalculates paths, and dodges them dynamically while progressing toward its destination.
- Self-Driving Cars: Potential to reduce costs by replacing LiDAR-based obstacle avoidance with camera-based systems.
- Unmanned Aerial Vehicles (UAVs): Enhancing drone navigation safety by incorporating vision-based obstacle avoidance.
Developed by Wei-Teng Chu under the mentorship of Neilabh Banzal, Parth Paritosh, Scott Addams, and Prof. Jorge Cortés at the Multi-Agent Robotics (MURO) Lab, UC San Diego.
For inquiries, feel free to reach out via GitHub Issues or email.