Jetson onnxruntime (blakeblackshear#14688)
* Add support for using onnx runtime with jetson

* Update docs

* Clarify
NickM-27 authored Oct 30, 2024
1 parent 03dd9b2 commit c7a4220
Showing 4 changed files with 9 additions and 6 deletions.
7 changes: 4 additions & 3 deletions docker/tensorrt/Dockerfile.arm64
@@ -10,8 +10,8 @@ ARG DEBIAN_FRONTEND
 # Use a separate container to build wheels to prevent build dependencies in final image
 RUN apt-get -qq update \
     && apt-get -qq install -y --no-install-recommends \
-    python3.9 python3.9-dev \
-    wget build-essential cmake git \
+        python3.9 python3.9-dev \
+        wget build-essential cmake git \
     && rm -rf /var/lib/apt/lists/*

 # Ensure python3 defaults to python3.9
@@ -41,7 +41,8 @@ RUN --mount=type=bind,source=docker/tensorrt/detector/build_python_tensorrt.sh,t
     && TENSORRT_VER=$(cat /etc/TENSORRT_VER) /deps/build_python_tensorrt.sh

 COPY docker/tensorrt/requirements-arm64.txt /requirements-tensorrt.txt
-RUN pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
+RUN pip3 uninstall -y onnxruntime \
+    && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt

 FROM build-wheels AS trt-model-wheels
 ARG DEBIAN_FRONTEND
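Why the uninstall matters: an earlier build step presumably leaves a default CPU-only onnxruntime installed, which would otherwise shadow the Jetson-specific wheel pinned in requirements-arm64.txt. A quick sanity check, sketched here as an assumption rather than taken from the commit, is to ask onnxruntime which execution providers the installed build exposes:

    # Hypothetical check inside the built image (not part of this commit).
    # The CPU-only wheel reports only 'CPUExecutionProvider'; the Nvidia-built
    # Jetson wheel should also list 'CUDAExecutionProvider' and, depending on
    # the build, 'TensorrtExecutionProvider'.
    python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"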
1 change: 1 addition & 0 deletions docker/tensorrt/requirements-arm64.txt
@@ -1 +1,2 @@
 cuda-python == 11.7; platform_machine == 'aarch64'
+onnxruntime @ https://nvidia.box.com/shared/static/9aemm4grzbbkfaesg5l7fplgjtmswhj8.whl; platform_machine == 'aarch64'
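The added line is a PEP 508 direct-URL requirement gated by an environment marker: pip downloads the Nvidia-hosted Jetson wheel only when the build machine reports aarch64, and skips the line entirely elsewhere. A minimal sketch of how the marker resolves (illustrative, not from the commit):

    # platform_machine in the marker maps to platform.machine() at install time.
    python3 -c "import platform; print(platform.machine())"   # 'aarch64' on Jetson
    # On aarch64 the Nvidia wheel is fetched; on x86_64 pip ignores the line.
    pip3 wheel --wheel-dir=/trt-wheels -r requirements-arm64.txt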
6 changes: 3 additions & 3 deletions docs/docs/configuration/object_detectors.md
@@ -22,14 +22,14 @@ Frigate supports multiple different detectors that work on different types of ha
 - [ONNX](#onnx): OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.

 **Nvidia**
-- [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs, using one of many default models.
-- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` Frigate image when a supported ONNX model is configured.
+- [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
+- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.

 **Rockchip**
 - [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.

 **For Testing**
-- [CPU Detector (not recommended for actual use](#cpu-detector-not-recommended): Use a CPU to run tflite model, this is not recommended and in most cases OpenVINO can be used in CPU mode with better results.
+- [CPU Detector (not recommended for actual use](#cpu-detector-not-recommended): Use a CPU to run tflite model, this is not recommended and in most cases OpenVINO can be used in CPU mode with better results.

 :::
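For context (not part of the diff): `-tensorrt-jp(4/5)` is shorthand for separate image tags per JetPack major version. A hedged example of pulling the JetPack 5 variant, with the tag name assumed from Frigate's usual stable-tag convention:

    # Tag names assumed (stable-tensorrt-jp4 / stable-tensorrt-jp5);
    # check the Frigate release notes for the exact tags.
    docker pull ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5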

1 change: 1 addition & 0 deletions docs/docs/configuration/semantic_search.md
@@ -68,6 +68,7 @@ If the correct build is used for your GPU and the `large` model is configured, t

 **Nvidia**
 - Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image.
+- Jetson devices will automatically be detected and used as a detector in the `-tensorrt-jp(4/5)` Frigate image.

 :::
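On Jetson the GPU is exposed to containers through the Nvidia container runtime. A launch sketch under that assumption (image tag, flags, and paths are illustrative, not taken from this commit):

    # Hypothetical Jetson launch; adjust volumes and ports for your setup.
    docker run -d \
      --name frigate \
      --runtime nvidia \
      --shm-size=128mb \
      -v /path/to/config:/config \
      -v /path/to/storage:/media/frigate \
      -p 8971:8971 \
      ghcr.io/blakeblackshear/frigate:stable-tensorrt-jp5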

