Implementing YOLOv10 object detection using OpenVINO for efficient and accurate real-time inference in C++.
- Support for `ONNX` and `OpenVINO IR` model formats
- Support for `FP32`, `FP16`, and `INT8` precisions
- Support for loading models with dynamic shapes
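The snippet below is a minimal sketch of what this support looks like with the OpenVINO 2.0 C++ API. It is illustrative only, not code from this repository, and the model file name is a placeholder.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Both ONNX and OpenVINO IR files can be read directly; the precision
    // (FP32 / FP16 / INT8) is determined by how the model was exported.
    auto model = core.read_model("yolov10n.xml");  // or "yolov10n.onnx"

    // Mark the spatial dimensions as dynamic so inputs of different sizes are accepted.
    model->reshape(ov::PartialShape{1, 3, ov::Dimension::dynamic(), ov::Dimension::dynamic()});

    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();
    return 0;
}
```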
Tested on Ubuntu 18.04, 20.04, and 22.04.
| Dependency | Version |
|---|---|
| OpenVINO | >=2023.3 |
| OpenCV | >=3.2.0 |
| C++ | >=14 |
| CMake | >=3.10.2 |
You have two options for setting up the environment: manually installing dependencies or using Docker.
Manual Installation
apt-get update
apt-get install -y \
libtbb2 \
cmake \
make \
git \
libyaml-cpp-dev \
wget \
libopencv-dev \
pkg-config \
g++ \
gcc \
libc6-dev \
build-essential \
sudo \
ocl-icd-libopencl1 \
python3 \
python3-venv \
python3-pip \
libpython3.8
You can download OpenVINO from the official archive storage. The example below uses the Ubuntu 20.04 package; pick the archive that matches your distribution.
wget -O openvino.tgz https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu20_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
sudo mkdir /opt/intel
sudo mv openvino.tgz /opt/intel/
cd /opt/intel
sudo tar -xvf openvino.tgz
sudo rm openvino.tgz
sudo mv l_openvino* openvino
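After extracting, you will typically also need to initialize the OpenVINO environment before building, for example with `source /opt/intel/openvino/setupvars.sh` (the exact script path depends on the package you downloaded).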
Using Docker
To build the Docker image yourself, use the following command:
docker build . -t yolov10
Alternatively, you can pull the pre-built Docker image from Docker Hub (available for Ubuntu 18.04, 20.04, and 22.04):
docker pull rlggyp/yolov10:18.04
docker pull rlggyp/yolov10:20.04
docker pull rlggyp/yolov10:22.04
For detailed usage information, please visit the Docker Hub repository page.
Grant the Docker container access to the X server by running the following command:
xhost +local:docker
To run a container from the image, use the following `docker run` command:
docker run -it --rm --mount type=bind,src=$(pwd),dst=/repo \
--env DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /dev:/dev \
-w /repo \
rlggyp/yolov10:<tag>
git clone https://github.com/rlggyp/YOLOv10-OpenVINO-CPP-Inference.git
cd YOLOv10-OpenVINO-CPP-Inference/src
mkdir build
cd build
cmake ..
make
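A successful build should produce the `detect`, `video`, and `camera` executables used in the examples below.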
You can download the YOLOv10 model in any of the following formats: ONNX, OpenVINO IR FP32, OpenVINO IR FP16, or OpenVINO IR INT8.
Using an ONNX model:
# For video input:
./video <model_path.onnx> <video_path>
# For image input:
./detect <model_path.onnx> <image_path>
# For real-time inference with a camera:
./camera <model_path.onnx> <camera_index>
Using an OpenVINO IR model:
# For video input:
./video <model_path.xml> <video_path>
# For image input:
./detect <model_path.xml> <image_path>
# For real-time inference with a camera:
./camera <model_path.xml> <camera_index>
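For reference, here is a rough, self-contained sketch of a single-image inference pass with the OpenVINO C++ API. It is not the repository's actual implementation: the 640x640 input size, the `[1, N, 6]` (x1, y1, x2, y2, score, class) output layout, the plain resize instead of letterboxing, and the `result.jpg` output name are all assumptions that may differ from this project's code.

```cpp
#include <opencv2/opencv.hpp>
#include <openvino/openvino.hpp>

int main(int argc, char* argv[]) {
    if (argc < 3) return 1;  // usage: ./example <model_path> <image_path>

    // Compile the model (ONNX or IR) directly for the CPU device.
    ov::Core core;
    ov::CompiledModel compiled = core.compile_model(argv[1], "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Plain resize to 640x640 for simplicity; a real pipeline typically
    // letterboxes and maps the boxes back to the original resolution.
    cv::Mat image = cv::imread(argv[2]);
    cv::Mat resized;
    cv::resize(image, resized, cv::Size(640, 640));
    cv::Mat blob = cv::dnn::blobFromImage(resized, 1.0 / 255.0, cv::Size(640, 640),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);

    ov::Tensor input(ov::element::f32, {1, 3, 640, 640}, blob.ptr<float>());
    request.set_input_tensor(input);
    request.infer();

    // YOLOv10 heads are NMS-free; exported models commonly emit [1, N, 6] rows of
    // (x1, y1, x2, y2, score, class_id). Verify this against your exported model.
    ov::Tensor output = request.get_output_tensor();
    const float* det = output.data<float>();
    size_t num_det = output.get_shape()[1];

    for (size_t i = 0; i < num_det; ++i) {
        const float* d = det + i * 6;
        if (d[4] < 0.5f) continue;  // confidence threshold
        cv::rectangle(resized, cv::Point((int)d[0], (int)d[1]),
                      cv::Point((int)d[2], (int)d[3]), cv::Scalar(0, 255, 0), 2);
    }

    cv::imwrite("result.jpg", resized);
    return 0;
}
```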
- How to export the YOLOv10 model
- Convert and Optimize YOLOv10 with OpenVINO
- Exporting the model into OpenVINO format
- Model Export with Ultralytics YOLO
- Supported models by OpenVINO
- YOLOv10 exporter notebooks
Contributions are welcome! If you have any suggestions, bug reports, or feature requests, please open an issue or submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.