Clone the repository into the `~/crazyflie_sim_shared` directory:
git clone https://github.com/mmcza/CrazyFlie-With-Depth-Image-Model/ ~/crazyflie_sim_shared
Build the Docker image (run this from inside the cloned repository):
docker build . -t crazyflie_simulator
* (Relevant only if you want to use the package with the Depth Estimation Model) For some reason a duplicate of the cv_bridge package is not removed while the image is being built, so you need to run `pip uninstall cv_bridge` after starting the container.
To start the container simply run:
bash start_container.sh
To enter the container from another terminal you can use:
docker exec -ti crazyflie-sim bash
Inside the container, build and source the workspace, then launch the simulation:
cd Shared/crazyflie_mapping_demo/ros2_ws/
colcon build --symlink-install && source install/setup.bash
ros2 launch crazyflie_ros2_multiranger_bringup simple_mapper_simulation.launch.py
In a second terminal, run the keyboard teleoperation node:
ros2 run teleop_twist_keyboard teleop_twist_keyboard
Place the model in the `depth_estimation_model` directory and name it `depth-estimation_model.onnx` (or change the path inside `/crazyflie_mapping_demo/ros2_ws/depth-estimation/depth-estimation/depth-estimation_node.py`).
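The node loads the ONNX file from that path; below is a minimal sketch of how such a model could be loaded and queried with onnxruntime (the path, input shape, and preprocessing are assumptions, not the repository's actual node code):

```python
# Hypothetical sketch: loading the ONNX depth model with onnxruntime.
# The model path and input shape are assumptions - adjust them to the real node.
import numpy as np
import onnxruntime as ort

model_path = "depth_estimation_model/depth-estimation_model.onnx"  # assumed location
session = ort.InferenceSession(
    model_path,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # falls back to CPU if no GPU
)

input_name = session.get_inputs()[0].name
dummy_image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input tensor
depth_map = session.run(None, {input_name: dummy_image})[0]
print(depth_map.shape)
```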
To run the depth estimation node with the GPU:
ros2 run depth_estimation depth_estimation
To move the Crazyflie to a chosen pose, call the Gazebo set_pose service:
gz service -s /world/empty/set_pose --reqtype gz.msgs.Pose --reptype gz.msgs.Boolean --timeout 300 -r "name: 'crazyflie', position: {x: -1.0, y: -1.0, z: 1.0}, orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}"
Link to Pose message declaration
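If you prefer to reposition the drone from a script, here is a small sketch that simply wraps the `gz service` call shown above (it assumes the same world and model names):

```python
# Sketch: teleporting the Crazyflie by shelling out to the gz CLI.
# World ("empty") and model ("crazyflie") names are taken from the command above.
import subprocess

def set_crazyflie_pose(x: float, y: float, z: float) -> None:
    request = (
        f"name: 'crazyflie', position: {{x: {x}, y: {y}, z: {z}}}, "
        "orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}"
    )
    subprocess.run(
        [
            "gz", "service", "-s", "/world/empty/set_pose",
            "--reqtype", "gz.msgs.Pose",
            "--reptype", "gz.msgs.Boolean",
            "--timeout", "300",
            "-r", request,
        ],
        check=True,  # raise if the service call fails
    )

set_crazyflie_pose(-1.0, -1.0, 1.0)
```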
To run the data collector in `world_cafe_1.sdf` (adjust `num_of_files` to the desired number of pictures):
ros2 run crazyflie_data_collector data_collector --ros-args -p min_x:=-4.75 -p max_x:=4.0 -p min_y:=-10.5 -p max_y:=11.5 -p min_z:=0.1 -p max_z:=2.50 -p num_of_files:=10 -p output_path:="/root/Shared/crazyflie_images/"
For `world_warehouse_1.sdf`:
ros2 run crazyflie_data_collector data_collector --ros-args -p min_x:=-6.25 -p max_x:=6.25 -p min_y:=-9.5 -p max_y:=9.5 -p min_z:=0.1 -p max_z:=8.50 -p num_of_files:=1000 -p output_path:="/root/Shared/neural_network_model/crazyflie_images/warehouse/"
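For reference, here is a hedged sketch of how a node like `data_collector` might declare and read these parameters with rclpy (the class name and defaults are illustrative, not the repository's actual implementation):

```python
# Illustrative sketch of declaring the sampling-range parameters used above.
# The class name and default values are assumptions, not the actual node code.
import rclpy
from rclpy.node import Node


class DataCollectorSketch(Node):
    def __init__(self):
        super().__init__("data_collector")
        # Bounds of the random camera poses and the number of images to record.
        for name, default in [
            ("min_x", -4.75), ("max_x", 4.0),
            ("min_y", -10.5), ("max_y", 11.5),
            ("min_z", 0.1), ("max_z", 2.5),
        ]:
            self.declare_parameter(name, default)
        self.declare_parameter("num_of_files", 10)
        self.declare_parameter("output_path", "/root/Shared/crazyflie_images/")

        self.min_x = self.get_parameter("min_x").value
        self.num_of_files = self.get_parameter("num_of_files").value
        self.output_path = self.get_parameter("output_path").value


def main():
    rclpy.init()
    rclpy.spin(DataCollectorSketch())


if __name__ == "__main__":
    main()
```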
To view the collected data, run:
python data_viewer.py
Install the required packages by running the following command:
pip install -r requirements.txt
Check the CUDA version of your GPU and install the appropriate version of PyTorch from the official website.
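For example, you can quickly verify that the installed PyTorch build detects your GPU:

```python
# Sanity check: confirm PyTorch was built with CUDA support and sees a GPU.
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version PyTorch was built against (None for CPU-only builds)
print(torch.cuda.is_available())  # True if a usable GPU is detected
```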
To start the training, run one of the following commands, depending on the model you want to train:
python train.py --model unet_resnet34
python train.py --model unet_cbam
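The `--model` flag selects the architecture to train; below is a minimal sketch of how such a switch is typically wired up (the builder functions are hypothetical placeholders, not the repository's API):

```python
# Hypothetical sketch of the --model switch in train.py.
# build_unet_resnet34/build_unet_cbam are placeholders for the real constructors.
import argparse

def build_unet_resnet34():
    raise NotImplementedError("placeholder for the ResNet34-based U-Net")

def build_unet_cbam():
    raise NotImplementedError("placeholder for the CBAM-based U-Net")

MODELS = {
    "unet_resnet34": build_unet_resnet34,
    "unet_cbam": build_unet_cbam,
}

parser = argparse.ArgumentParser()
parser.add_argument("--model", choices=sorted(MODELS), required=True)
args = parser.parse_args()

model = MODELS[args.model]()  # instantiate the selected architecture
```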
The Depth Loss combines three key components:
- L1 Loss: The L1 Loss measures the mean absolute error (MAE) between the predicted depth map $\hat{D}$ and the ground-truth depth map $D$.
- SSIM Loss: The Structural Similarity Index (SSIM) Loss evaluates the perceptual similarity between the predicted and ground-truth depth maps; here, $\text{SSIM}(\hat{D}, D)$ denotes the structural similarity index computed over the depth maps.
- Smoothness Loss: The Smoothness Loss encourages locally smooth depth predictions by penalizing large gradients in the predicted depth map.
The final loss function combines these components using a weighted sum, sketched below.
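Written out, assuming the standard form of each term (the weights $\lambda_i$ and the exact smoothness term are assumptions, not taken from the repository), the combined loss looks roughly like:

$$
\mathcal{L}_{\text{depth}} = \lambda_{1}\,\underbrace{\frac{1}{N}\sum_{i=1}^{N}\left|\hat{D}_i - D_i\right|}_{\text{L1 (MAE)}} \;+\; \lambda_{2}\,\underbrace{\left(1 - \operatorname{SSIM}(\hat{D}, D)\right)}_{\text{SSIM loss}} \;+\; \lambda_{3}\,\mathcal{L}_{\text{smooth}}
$$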