This package implements a simple object-tracking robot. We use turtlebot3's crappy, laggy camera sensor to receive the robot's view, a YOLO object detection/classification module to find the target, and a plain controller to follow it.
You control the target using the keyboard. Note that the target itself (that weird, stiff, faceless bipedal Slender Man) is a turtlebot3_burger, and you can control it via the turtlebot3_teleop package.
This node is responsible for receiving the robot's camera view, performing detections, and returning the requested target's position in the image along with the image resolution. It must create a YOLO model to do the object detection/classification. Feel free to set up your model based on any of the pretrained weights provided in the "yolo" directory, or any custom weights of your own. You need to complete this section.
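The detection-to-position step can be sketched without ROS. The function below picks the requested target out of a list of YOLO-style detections; the detection format (label, confidence, `[x1, y1, x2, y2]` box) and all names here are illustrative assumptions, not this package's actual service API:

```python
# Sketch: return the pixel center of the requested target from YOLO-style
# detections. The (label, confidence, box) tuple format is an assumption.

def find_target(detections, target_label):
    """Return the (cx, cy) center of the highest-confidence box whose
    label matches target_label, or None if the target is not detected."""
    best = None
    for label, conf, (x1, y1, x2, y2) in detections:
        if label == target_label and (best is None or conf > best[0]):
            best = (conf, ((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    return None if best is None else best[1]

# Example: two detections in a 640x480 frame
dets = [("person", 0.91, (100, 50, 200, 250)),
        ("person", 0.40, (400, 100, 500, 300))]
print(find_target(dets, "person"))  # → (150.0, 150.0)
```

In the real node, the same logic would run on the model's output inside the service callback, with the image resolution returned alongside the center.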
This node must implement a simple P-controller (you can set up a full PID controller, but you're likely to regret it; don't play hero). Using the data provided by the image processor's service, this node must guide the robot to its target. You need to complete this section.
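The core of such a P-controller is one line: angular velocity proportional to the horizontal offset between the target and the image center. A minimal sketch, where the gain and speeds are made-up values (in the real node the output would fill a geometry_msgs/Twist):

```python
# Sketch of a P-controller: steer so the target drifts toward the image
# center. kp and linear_speed are arbitrary assumed values to tune.

def p_control(target_x, image_width, kp=0.005, linear_speed=0.2):
    """Return (linear, angular) velocity commands toward the target."""
    error = (image_width / 2.0) - target_x  # positive if target is left of center
    angular = kp * error                    # turn toward the target
    return linear_speed, angular

lin, ang = p_control(target_x=200, image_width=640)
print(lin, ang)  # turns left, since the target sits left of center
```

Centering the target this way only controls heading; you may also want to scale the linear speed by the target's apparent size to keep a fixed distance.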
- Install the requirements.
- Clone this package beside your `turtlebot3` packages.
- Navigate to the root directory of your catkin workspace.
- Run `catkin_make`.
- Navigate to the root directory of your catkin workspace.
- Source your workspace: `. devel/setup.bash`
- Launch the provided launch file: `roslaunch turtlebot3_object_tracker turtlebot3_object_tracker.launch`
Feel free to test your system in any of the world files provided by turtlebot3_simulation, the one provided in this package, or any custom world file you feel cosy in.
The robot must be able to detect the target when it is in view and start following it. When the target is out of sight, you can assign any behaviour you like to your robot.
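One simple out-of-sight behaviour is to spin in the direction the target was last seen. The sketch below combines that with the tracking case; every name, gain, and speed here is an assumed placeholder, not part of the package:

```python
# Sketch: track the target when visible, otherwise rotate in place toward
# the side it was last seen on. All constants are assumptions to tune.

def track_or_search(target_x, image_width, last_direction=1.0,
                    kp=0.005, search_speed=0.4, linear_speed=0.2):
    """Return (linear, angular, last_direction) commands."""
    if target_x is None:                     # target lost: spin to search
        return 0.0, search_speed * last_direction, last_direction
    error = (image_width / 2.0) - target_x
    direction = 1.0 if error >= 0 else -1.0  # remember which side it was on
    return linear_speed, kp * error, direction

lin, ang, d = track_or_search(None, 640)     # lost: stop and rotate
lin, ang, d = track_or_search(500, 640)      # visible on the right: turn right
```

Carrying `last_direction` between calls keeps the search from oscillating when the target slips out of one side of the frame.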