This guide walks you through setting up the SuperSLAM project locally, without Docker. The setup mirrors the environment defined in the Dockerfile and is suitable for development and testing.
- Ubuntu 22.04: This setup is tested on Ubuntu 22.04; other distributions may require adjustments.
- NVIDIA GPU: Ensure you have an NVIDIA GPU with CUDA 11.8 support.
- NVIDIA Drivers: Install the latest NVIDIA drivers compatible with CUDA 11.8.
- CUDA Toolkit 11.8: Install CUDA 11.8 from the NVIDIA website.
- cuDNN 8.9.7: Install cuDNN compatible with CUDA 11.8 from the NVIDIA website.
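Before installing anything else, it can help to confirm the GPU toolchain is visible. A minimal pre-flight sketch (the command names are standard, but `nvcc` only appears on PATH once the CUDA toolkit is installed):

```shell
# Pre-flight check: report whether the NVIDIA driver utility and the CUDA
# compiler are on PATH. This only checks presence, not version compatibility.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}
check_cmd nvidia-smi   # NVIDIA driver utility
check_cmd nvcc         # CUDA toolkit compiler
```

If either command is missing, install the driver and CUDA 11.8 toolkit before continuing.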
Update your system and install the required general dependencies:
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y \
build-essential \
cmake \
ninja-build \
git \
pkg-config \
gfortran \
python3-dev \
python3-pip \
libglew-dev \
libboost-all-dev \
libssl-dev \
wget \
unzip \
curl
Install dependencies for OpenCV:
sudo apt-get install -y \
libgtk-3-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libxvidcore-dev \
libx264-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
openexr \
libatlas-base-dev \
libopencv-dev \
python3-opencv
Install libyaml-cpp for YAML file parsing:
sudo apt-get install -y libyaml-cpp-dev
Install Eigen3, a C++ template library for linear algebra:
sudo apt-get install -y libeigen3-dev
Install TensorRT and cuDNN for CUDA 11.8:
sudo apt-get update && sudo apt-get install -y \
libcudnn8=8.9.7.29-1+cuda11.8 \
libcudnn8-dev=8.9.7.29-1+cuda11.8 \
libnvinfer8=8.5.3-1+cuda11.8 \
libnvinfer-plugin8=8.5.3-1+cuda11.8 \
libnvinfer-plugin-dev=8.5.3-1+cuda11.8 \
libnvinfer-bin=8.5.3-1+cuda11.8 \
libnvinfer-dev=8.5.3-1+cuda11.8 \
libnvinfer-samples=8.5.3-1+cuda11.8 \
libnvonnxparsers8=8.5.3-1+cuda11.8 \
libnvonnxparsers-dev=8.5.3-1+cuda11.8 \
libnvparsers8=8.5.3-1+cuda11.8 \
libnvparsers-dev=8.5.3-1+cuda11.8 \
tensorrt=8.5.3.1-1+cuda11.8 \
tensorrt-dev=8.5.3.1-1+cuda11.8 \
tensorrt-libs=8.5.3.1-1+cuda11.8
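Because a later `apt-get upgrade` can silently replace these pinned versions, you may want to verify them after installation. A small check sketch, assuming a dpkg-based system; the expected version strings are the ones pinned above:

```shell
# Compare the installed version of a package against the pinned version.
check_pkg_version() {
  pkg="$1"; want="$2"
  got="$(dpkg-query -W -f='${Version}' "$pkg" 2>/dev/null || echo none)"
  if [ "$got" = "$want" ]; then
    echo "$pkg OK ($got)"
  else
    echo "$pkg MISMATCH: expected $want, got $got"
  fi
}
check_pkg_version libcudnn8 8.9.7.29-1+cuda11.8
check_pkg_version tensorrt 8.5.3.1-1+cuda11.8
```

To keep upgrades from replacing the pinned versions, `sudo apt-mark hold libcudnn8 libcudnn8-dev tensorrt` places a hold on them.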
Follow these steps to install ROS 2 Humble:
- Add the ROS 2 repository:
sudo apt-get install -y software-properties-common
sudo add-apt-repository universe
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null
- Install ROS 2 Humble Desktop:
sudo apt-get update && sudo apt-get install -y \
ros-humble-desktop \
python3-colcon-common-extensions
- Source the ROS 2 environment in your .bashrc:
echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
source ~/.bashrc
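After opening a new shell (or re-sourcing .bashrc), a quick way to confirm the environment took effect, assuming the standard Humble install prefix:

```shell
# ROS_DISTRO is exported by the setup script; "humble" means sourcing worked.
echo "ROS_DISTRO is: ${ROS_DISTRO:-not set}"
```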
- Clone the SuperSLAM repository:
git clone https://github.com/your-repo/SuperSLAM.git --recursive
cd SuperSLAM
- Download and install libtorch with CUDA support:
mkdir -p thirdparty
wget -q -O libtorch-cxx11-abi-shared-with-deps-2.1.0+cu118.zip https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.1.0%2Bcu118.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.1.0+cu118.zip -d thirdparty
rm libtorch-cxx11-abi-shared-with-deps-2.1.0+cu118.zip
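As a quick sanity check (assuming the archive extracted to thirdparty/libtorch; libtorch distributions ship a build-version file at their root):

```shell
# Print libtorch's bundled build-version, or "missing" if the directory
# (or file) is absent, in which case the unzip step should be re-checked.
libtorch_version() {
  if [ -f "$1/build-version" ]; then
    cat "$1/build-version"
  else
    echo "missing"
  fi
}
libtorch_version thirdparty/libtorch
```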
- Build third-party dependencies:
chmod +x ./utils/build_deps.sh
./utils/build_deps.sh
Add the following to your .bashrc to set LD_LIBRARY_PATH (adjust /home/SuperSLAM to the actual location of your SuperSLAM checkout):
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/SuperSLAM/thirdparty/libtorch/lib:$LD_LIBRARY_PATH
Reload your .bashrc:
source ~/.bashrc
- Verify CUDA installation:
nvcc --version
- Verify TensorRT installation:
dpkg -l | grep tensorrt
- Verify ROS 2 installation:
ros2 --version
- Verify libtorch installation: ensure the libtorch directory exists in thirdparty/.
- Build the project:
sh build.sh
The converted model is already provided in the weights folder; if you are using the pretrained models officially provided by SuperPoint and SuperGlue, you do not need to go through this step.
The default image size parameter is 320x240. If you modify the image size in the utils/config.yaml file, delete the old .engine file in the weights directory so that a new engine is generated.
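The engine cleanup above can be scripted; a sketch assuming the cached engines live directly under weights/ with the .engine extension:

```shell
# Remove cached TensorRT engine files so they are rebuilt with the new
# image size on the next run. Safe to call if the directory does not exist.
clean_engines() {
  [ -d "$1" ] || return 0
  find "$1" -maxdepth 1 -type f -name '*.engine' -exec rm -f {} +
}
clean_engines weights
```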
python convert2onnx/convert_superpoint_to_onnx.py --weight_file superpoint_pth_file_path --output_dir superpoint_onnx_file_dir
python convert2onnx/convert_superglue_to_onnx.py --weight_file superglue_pth_file_path --output_dir superglue_onnx_file_dir