From 7a6984bc0e5527a9e384a0a98080fcbd55a9e1ad Mon Sep 17 00:00:00 2001 From: Randall Zhuo Date: Tue, 16 Apr 2024 15:42:54 +0800 Subject: [PATCH] Add FAQ Signed-off-by: Randall Zhuo --- FAQ.md | 156 +++++++++++++++++++++++++++++++++++++++++++++++++++ FAQ_CN.md | 154 ++++++++++++++++++++++++++++++++++++++++++++++++++ README.md | 2 +- README_CN.md | 2 +- 4 files changed, 312 insertions(+), 2 deletions(-) create mode 100644 FAQ.md create mode 100644 FAQ_CN.md diff --git a/FAQ.md b/FAQ.md new file mode 100644 index 0000000..77a79d2 --- /dev/null +++ b/FAQ.md @@ -0,0 +1,156 @@ +- [1. Common Issue](#1-common-issue) + - [1.1 How to upgrade RKNPU’s related dependent libraries](#11-how-to-upgrade-rknpus-related-dependent-libraries) + - [1.2 Result differs via platform](#12-result-differs-via-platform) + - [1.3 The demo result not correct](#13-the-demo-result-not-correct) + - [1.4 The demo ran correctly, but when I replaced it with my own model, it ran wrong.](#14-the-demo-ran-correctly-but-when-i-replaced-it-with-my-own-model-it-ran-wrong) + - [1.5 Model inference performance does not meet reference performance](#15-model-inference-performance-does-not-meet-reference-performance) + - [1.6 How to solve the accuracy problem after model quantization](#16-how-to-solve-the-accuracy-problem-after-model-quantization) + - [1.7 Is there a board-side python demo](#17-is-there-a-board-side-python-demo) + - [1.8 Why are there no demos for other models? Is it because they are not supported](#18-why-are-there-no-demos-for-other-models-is-it-because-they-are-not-supported) + - [1.9 Is there a LLM model demo?](#19-is-there-a-llm-model-demo) + - [1.10 Why RV1103 and RV1106 can run fewer demos](#110-why-rv1103-and-rv1106-can-run-fewer-demos) +- [2. RGA](#2-rga) +- [3. 
YOLO](#3-yolo) + - [3.1 Class confidence exceeds 1](#31-class-confidence-exceeds-1) + - [3.2 So many boxes in the inference results and fill the entire picture](#32-so-many-boxes-in-the-inference-results-and-fill-the-entire-picture) + - [3.3 The box's position and confidence are correct, but size not match well(yolov5, yolov7)](#33-the-boxs-position-and-confidence-are-correct-but-size-not-match-wellyolov5-yolov7) + - [3.4 MAP accuracy is lower than official results](#34-map-accuracy-is-lower-than-official-results) + - [3.5 Can NPU run YOLO without modifying the model structure?](#35-can-npu-run-yolo-without-modifying-the-model-structure) + + + +## 1. Common Issue + +### 1.1 How to upgrade RKNPU’s related dependent libraries + +| | RKNPU1 | RKNPU2 | +| ------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | +| Platform | RV1109
RV1126
RK1808
RK3399pro | RV1103
RV1106
RK3562
RK3566
RK3568
RK3588
RK3576 | +| Driver | Upgrade by replacing the .ko file | Upgrade by burning new firmware | +| Runtime | Refer to the [documentation](https://github.com/airockchip/rknpu/blob/master/README.md) and replace **librknn_runtime.so** and its related dependency files to upgrade.

(PC-to-board debugging function requires updating the **rknn_server** file described in the document) | Refer to the [document](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/rknn_server_proxy.md) and replace the **librknnrt.so** file to upgrade
(RV1103/RV1106 use a cropped runtime; the corresponding file is **librknnmrt.so**)

(PC-to-board debugging function requires updating the **rknn_server** file described in the document) | +| RKNN-Toolkit | Refer to the [documentation](https://github.com/airockchip/rknn-toolkit/blob/master/README.md) and install the new python whl file to upgrade | Refer to **Chapter 2.1** of the [documentation](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/02_Rockchip_RKNPU_User_Guide_RKNN_SDK_V1.6.0_EN.pdf) and install the new python whl file to upgrade | + +- Please note that, due to differences in development board specifications, firmware images are usually incompatible with each other. Please contact the vendor you purchased the board from to obtain new firmware and burning instructions. + + + +### 1.2 Result differs via platform + +Because NPU generations differ between platforms, slightly different inference results are normal. If the difference is significant, please submit an issue for feedback. + + + +### 1.3 The demo result not correct + +Please check whether the versions of the **driver**, runtime.so, and **RKNN-Toolkit** meet the version requirements listed in the [document](README.md). + + + +### 1.4 The demo ran correctly, but when I replaced it with my own model, it ran wrong. + +Please check whether the model was exported as required by the demo document. For example, follow the [document](./examples/yolov5/README.md) and use https://github.com/airockchip/yolov5 to export the onnx model. + + + +### 1.5 Model inference performance does not meet reference performance + +The following factors may cause differences in inference performance: + +- The inference performance of the **python api** may be weaker; please measure inference performance with the **C api**. + +- The inference performance data of rknn model zoo does **not include pre-processing and post-processing**. It only measures the time spent in **rknn.run**, which differs from the total time of the complete demo. 
The time spent on these other operations depends on the usage scenario and system load. +- Whether the board's clocks have been **fixed and raised to the maximum frequency** set by [scaling_frequency.sh](./scaling_frequency.sh). Some firmware may limit the maximum frequency of the CPU/NPU/DDR, resulting in reduced inference performance. +- Whether other applications are occupying **CPU/NPU and bandwidth resources**, which lowers inference performance. +- For chips with both big and small CPU cores (currently RK3588, RK3576), please refer to Chapter 5.3.3 of the [document](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/02_Rockchip_RKNPU_User_Guide_RKNN_SDK_V1.6.0_EN.pdf) and **bind the big CPU cores for testing**. + + + +### 1.6 How to solve the accuracy problem after model quantization + +Please refer to the [user guide](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/02_Rockchip_RKNPU_User_Guide_RKNN_SDK_V1.6.0_EN.pdf) to confirm whether the quantization function is used correctly. + +If the **model structure** characteristics and **weight distribution** cause int8 quantization to lose accuracy, please consider using **hybrid quantization** or **QAT quantization**. + + + +### 1.7 Is there a board-side python demo + +Install **RKNN-Toolkit-lite** on the board and use the demo's python inference script, changing `from rknn.api import RKNN` to `from rknnlite.api import RKNNLite as RKNN`, to run python inference on the board. For RKNPU1 devices, refer to [RKNN-Toolkit-lite](https://github.com/airockchip/rknn-toolkit/tree/master/rknn-toolkit-lite). For RKNPU2 devices, refer to [RKNN-Toolkit-lite2](https://github.com/airockchip/rknn-toolkit2/tree/master/rknn_toolkit_lite2). + +(Some examples currently lack a python demo. In addition, users who care about performance are recommended to deploy with the C interface) + + + +### 1.8 Why are there no demos for other models? Is it because they are not supported + +Actually, most models are supported. 
Limited by development time, and to cover the needs of most developers, we have selected the most practical models as demo examples. If you have better model recommendations, please feel free to submit an issue or contact us. + + + +### 1.9 Is there a LLM model demo? + +Not at the moment. Support for transformer models is still being optimized, and we hope to provide large model demos for developers to refer to and use as soon as possible. + + + +### 1.10 Why RV1103 and RV1106 can run fewer demos + +Due to the limited memory of **RV1103** and **RV1106**, the **memory** usage of many larger models exceeds the board's limit, so the corresponding demos are not provided for now. + + + +## 2. RGA + +For RGA related issues, please refer to the [RGA documentation](https://github.com/airockchip/librga/blob/main/docs/Rockchip_FAQ_RGA_EN.md). + + + + +## 3. YOLO + +### 3.1 Class confidence exceeds 1 + +The **post-processing code** of a YOLO demo must match the model structure, otherwise abnormal results will occur. The post-processing used by the current YOLO demos requires that the **category confidence output** of the model is **generated by a sigmoid op**, which maps confidences from (-∞, ∞) into (0, 1). A missing sigmoid op will cause category confidences greater than 1, as shown below: + +![yolov5_without_sigmoid_out](asset/yolov5_without_sigmoid_out.png) + +When encountering such problems, please export the model following the instructions in the demo document. + + + +### 3.2 So many boxes in the inference results and fill the entire picture + +![yolo_too_much_box](asset/yolo_too_much_box.png) + +As shown above, there are two possibilities: + +- As in section 3.1, a missing sigmoid op at the tail of the model may cause this problem. +- The **box threshold** is set too small or the **NMS threshold** is set too large in the demo. 
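To illustrate how these two thresholds interact, here is a minimal, dependency-free sketch of the confidence filtering and greedy NMS steps found in typical YOLO post-processing. This is an illustrative example, not the demo's actual implementation; the names `box_thresh` and `nms_thresh` are chosen here to mirror the demo configuration:

```python
import math

def sigmoid(x):
    # Maps a raw score from (-inf, inf) into (0, 1); without this op,
    # "confidences" can exceed 1 (see section 3.1).
    return 1.0 / (1.0 + math.exp(-x))

def iou(a, b):
    # Intersection-over-union of two boxes given as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_boxes(boxes, scores, box_thresh=0.25, nms_thresh=0.45):
    # 1) Drop low-confidence boxes: a box_thresh that is too small keeps noise.
    keep = [i for i, s in enumerate(scores) if s >= box_thresh]
    # 2) Greedy NMS: an nms_thresh that is too large suppresses almost nothing,
    #    so near-duplicate boxes flood the image (see the figure above).
    keep.sort(key=lambda i: scores[i], reverse=True)
    result = []
    for i in keep:
        if all(iou(boxes[i], boxes[j]) < nms_thresh for j in result):
            result.append(i)
    return result
```

Raising `box_thresh` or lowering `nms_thresh` reduces the number of boxes kept; typical YOLO demos apply these same two steps per class after decoding the output branches.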
+ + + +### 3.3 The box's position and confidence are correct, but size not match well(yolov5, yolov7) + +This problem is usually caused by an **anchor mismatch**. When exporting the model, check whether the printed anchor information matches the default anchor configuration in the demo. + + + +### 3.4 MAP accuracy is lower than official results + +There are two main reasons: + +- The official map test uses a **dynamic shape model**, while rknn model zoo uses a **fixed shape model** for simplicity and ease of use; its map results will be lower than those of the dynamic shape model. +- The RKNN model also suffers some accuracy loss after **quantization** is turned on. + +- If you use the C interface to test the map results on the board, please note that the way the image is read affects the results. For example, the map results differ when testing based on **cv2** versus **stbi**. + + + +### 3.5 Can NPU run YOLO without modifying the model structure? + +**Yes, but not recommended.** + +If the original model structure is kept, the post-processing code of the python demo and C demo needs to be adjusted accordingly. + +In addition, the adjustments to the model structure were made with accuracy and performance in mind. Keeping the original model structure may lead to **poor quantization accuracy** and **worse inference performance**. + diff --git a/FAQ_CN.md b/FAQ_CN.md new file mode 100644 index 0000000..4ea5d53 --- /dev/null +++ b/FAQ_CN.md @@ -0,0 +1,154 @@ +- [1. 
常见问题](#1-常见问题) + - [1.1 如何升级 RKNPU 的相关依赖库](#11-如何升级-rknpu-的相关依赖库) + - [1.2 不同平台推理结果不同](#12-不同平台推理结果不同) + - [1.3 demo推理结果不对](#13-demo推理结果不对) + - [1.4 demo跑对了,但换成自己的模型跑错了](#14-demo跑对了但换成自己的模型跑错了) + - [1.5 模型推理性能达不到参考性能](#15-模型推理性能达不到参考性能) + - [1.6 如何解决模型量化掉精度问题](#16-如何解决模型量化掉精度问题) + - [1.7 是否有板端 python demo](#17-是否有板端-python-demo) + - [1.8 为什么其他模型没有demo,是因为不支持吗](#18-为什么其他模型没有demo是因为不支持吗) + - [1.9 是否有大模型demo](#19-是否有大模型demo) + - [1.10 为什么RV1103、RV1106能跑的demo较少](#110-为什么rv1103rv1106能跑的demo较少) +- [2. RGA](#2-rga) +- [3. YOLO](#3-yolo) + - [3.1 类别置信度超过1](#31-类别置信度超过1) + - [3.2 推理结果出现框非常多,填满了整个图的情况](#32-推理结果出现框非常多填满了整个图的情况) + - [3.3 框的位置、置信度是对的,但是框和物体不贴合(yolov5, yolov7)](#33-框的位置置信度是对的但是框和物体不贴合yolov5-yolov7) + - [3.4 MAP精度相比官方的结果低一些](#34-map精度相比官方的结果低一些) + - [3.5 如果不修改YOLO模型结构,NPU可以跑吗](#35-如果不修改yolo模型结构npu可以跑吗) + + + +## 1. 常见问题 + +### 1.1 如何升级 RKNPU 的相关依赖库 + +| | RKNPU1 | RKNPU2 | +| ------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | +| 对应平台 | RV1109
RV1126
RK1808
RK3399pro | RV1103
RV1106
RK3562
RK3566
RK3568
RK3588
RK3576 | +| 驱动 | 通过更新.ko文件升级 | 通过更新固件升级 | +| runtime | 参考[文档](https://github.com/airockchip/rknpu/blob/master/README.md),替换 librknn_runtime.so 及其相关依赖文件升级
(如果需要使用python连板调试功能,请同步更新文档中涉及的 rknn_server 文件) | 参考[文档](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/rknn_server_proxy.md),替换 librknnrt.so 文件升级
(RV1103/RV1106 使用裁剪版本的 runtime,对应文件名称为 librknnmrt.so)
(如果需要使用 python 连板调试功能,请同步更新文档中涉及的 rknn_server 文件) | +| RKNN-Toolkit | 参考[文档](https://github.com/airockchip/rknn-toolkit/blob/master/README.md),安装新的 python whl 文件升级 | 参考[文档](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/02_Rockchip_RKNPU_User_Guide_RKNN_SDK_V1.6.0_CN.pdf) 2.1小节,安装新的 python whl 文件升级 | + +- 请注意,由于开发板的规格差异,固件通常是互不兼容的,请联系开发板的购买来源获取新固件及烧录方法。 + + + +### 1.2 不同平台推理结果不同 + +受NPU代次影响,推理结果有略微差异是正常现像,如果差异很大,请提issue反馈。 + + + +### 1.3 demo推理结果不对 + +请检查驱动、runtime.so、RKNN-Toolkit 的版本是否满足[文档](README_CN.md)罗列的版本要求。 + + + +### 1.4 demo跑对了,但换成自己的模型跑错了 + +请检查在导出模型时,是否有按照demo文档要求的进行。例如 yolov5 demo,要求按照[文档](./examples/yolov5/README.md)的要求基于 https://github.com/airockchip/yolov5 仓库导出 onnx 模型。 + + + +### 1.5 模型推理性能达不到参考性能 + +以下因素可能导致推理性能有差异: + +- python api 的推理性能会较弱一些,请基于 CAPI 测试性能。 + +- rknn model zoo 的推理性能数据不包含前后处理,只统计 rknn.run 的耗时,与完整demo的耗时是存在差异的。这些操作的耗时与使用场景、系统资源占用有关,这部分数据请用户基于实际环境进行测试。 +- 板子是否已经定频、并且达到了[定频脚本](./scaling_frequency.sh)设定的最高频率。部分固件可能限制了CPU/NPU/DDR的最高频率,导致推理性能有所减弱。 +- 是否有其他应用占用了 CPU/NPU 及带宽资源,这会导致推理性能减弱。 +- 对于有大小核CPU的芯片(目前为RK3588, RK3576),测试时请参考[文档](https://github.com/rockchip-linux/rknn-toolkit2/blob/master/doc/02_Rockchip_RKNPU_User_Guide_RKNN_SDK_V1.6.0_CN.pdf)的5.3.3小节,绑定CPU大核进行测试。 + + + +### 1.6 如何解决模型量化掉精度问题 + +根据平台版本,先参考 [RKNN-Toolkit1文档](https://github.com/airockchip/rknn-toolkit/blob/master/doc/Rockchip_User_Guide_RKNN_Toolkit_V1.7.5_CN.pdf) 、 [RKNN-Toolkit2文档](https://github.com/airockchip/rknn-toolkit2/blob/master/doc/02_Rockchip_RKNPU_User_Guide_RKNN_SDK_V1.6.0_CN.pdf) 确认量化功能的使用是否正确。 + +如果是模型结构特性、权重分布导致 int8 量化掉精度,请考虑使用混合量化或QAT量化。 + + + +### 1.7 是否有板端 python demo + +在板端安装 RKNN-Toolkit-lite,使用对应 demo 的 python 推理脚本,将 `from rknn.api import RKNN` 修改为 `from rknnlite.api import RKNNLite as RKNN` 就可以实现板端的 python 推理。RKNPU1 平台请使用 [RKNN-Toolkit-lite](https://github.com/airockchip/rknn-toolkit/tree/master/rknn-toolkit-lite),RKNPU2 平台请使用 [RKNN-Toolkit-lite2](https://github.com/airockchip/rknn-toolkit2/tree/master/rknn_toolkit_lite2)。 + 
+(部份示例目前暂缺 python demo,此外更推荐在意性能的用户使用C接口进行部署) + + + +### 1.8 为什么其他模型没有demo,是因为不支持吗 + +不是。受限于开发周期,为了照顾大部分开发者的需求,我们挑选了实用性较高的模型作为demo示例。如果有更好的模型推荐,欢迎提issue或其他渠道联系 RKNPU 部门开发者。 + + + +### 1.9 是否有大模型demo + +目前没有。对 transformer 模型的支持目前还在逐步优化中,我们也希望能尽早提供大模型demo给开发者参考、使用。 + + + +### 1.10 为什么RV1103、RV1106能跑的demo较少 + +受限于RV1103、RV1106的内存大小限制,很多模型的内存占用较大,超出板端内存限制,故暂不提供对应的demo 。 + + + +## 2. RGA + +RGA相关问题请参考[RGA文档](https://github.com/airockchip/librga/blob/main/docs/Rockchip_FAQ_RGA_CN.md)。 + + + +## 3. YOLO + +### 3.1 类别置信度超过1 + +YOLO模型的后处理代码和模型导出结构需要匹配,否则会出现异常。当前YOLO demo使用的后处理,要求模型的类别置信度输出的由 sigmoid op 产生,sigmoid op将 (-∞, ∞) 的置信度调整为 (0, 1) 的置信度,缺乏该 sigmoid op 会导致类别置信度大于 1,示意图如下: + +![yolov5_without_sigmoid_out](asset/yolov5_without_sigmoid_out.png) + +碰到此类问题时,请参考对应demo文档说明的方式进行模型导出。 + + + +### 3.2 推理结果出现框非常多,填满了整个图的情况 + +![yolo_too_much_box](asset/yolo_too_much_box.png) + +如上图所示,出现这种问题有两种可能: + +- 第一种情况同3.1小节,模型尾部缺失 sigmoid op可能会导致此问题。 +- 第二种是demo配置的 box threashold 数值太小、nms threshold 数值太大导致。 + + + +### 3.3 框的位置、置信度是对的,但是框和物体不贴合(yolov5, yolov7) + +这种情况通常是 anchor 不匹配导致的。请在导出模型时,留意打印的 anchor 信息是否与 demo 中默认的 anchor 配置不一致。 + + + +### 3.4 MAP精度相比官方的结果低一些 + +主要有两个原因: + +- 官方测map时,使用动态shape模型;而rknn model zoo为了简单易用性,使用了固定shape模型,map测试结果会比动态shape的低一些。 +- RKNN 模型在开启量化后,也会有一部分精度损失。 +- 如果用户尝试在板端使用 C接口测试map结果,请注意读取图片的方式会影响测试结果,例如基于 cv2 和 stbi 测试map结果时不同。 + + + +### 3.5 如果不修改YOLO模型结构,NPU可以跑吗 + +可以,但不推荐。 + +如果不修改模型结构,python demo, Cdemo 对应的后处理代码需要自行调整。 + +此外模型结构的调整方式是经过了精度、性能考量的。保持原模型结构,可能会面临量化精度不佳、推理性能更差的问题。 + diff --git a/README.md b/README.md index 2f70051..38b60f3 100644 --- a/README.md +++ b/README.md @@ -30,7 +30,7 @@ In addition to exporting the model from the corresponding respository, the model | --------------------------------------------------------- | -------------------------- | ------------- | ------------------------------------------------------------ | | [mobilenet](./examples/mobilenet/README.md) | Classification | FP16/INT8 | 
[mobilenetv2-12.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/MobileNet/mobilenetv2-12.onnx) | | [resnet](./examples/resnet/README.md) | Classification | FP16/INT8 | [resnet50-v2-7.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/ResNet/resnet50-v2-7.onnx) | -| [yolov5](./examples/yolov5/README.md) | Object detection | FP16/INT8 | [yolov5n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5n.onnx)
[yolov5s_relu.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s_relu.onnx)
[yolov5s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5n.onnx)
[yolov5m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5m.onnx) | +| [yolov5](./examples/yolov5/README.md) | Object detection | FP16/INT8 | [yolov5n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5n.onnx)
[yolov5s_relu.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s_relu.onnx)
[yolov5s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s.onnx)
[yolov5m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5m.onnx) | | [yolov6](./examples/yolov6/README.md) | Object detection | FP16/INT8 | [yolov6n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov6/yolov6n.onnx)
[yolov6s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov6/yolov6s.onnx)
[yolov6m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov6/yolov6m.onnx) | | [yolov7](./examples/yolov7/README.md) | Object detection | FP16/INT8 | [yolov7-tiny.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov7/yolov7-tiny.onnx)
[yolov7.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov7/yolov7.onnx) | | [yolov8](./examples/yolov8/README.md) | Object detection | FP16/INT8 | [yolov8n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8/yolov8n.onnx)
[yolov8s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8/yolov8s.onnx)
[yolov8m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8/yolov8m.onnx) | diff --git a/README_CN.md b/README_CN.md index 32416e5..7aa3f05 100644 --- a/README_CN.md +++ b/README_CN.md @@ -30,7 +30,7 @@ RKNN Model Zoo依赖 RKNN-Toolkit2 进行模型转换, 编译安卓demo时需要 | --------------------------------------------------------- | ---------- | ------------ | ------------------------------------------------------------ | | [mobilenet](./examples/mobilenet/README.md) | 图像分类 | FP16/INT8 | [mobilenetv2-12.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/MobileNet/mobilenetv2-12.onnx) | | [resnet](./examples/resnet/README.md) | 图像分类 | FP16/INT8 | [resnet50-v2-7.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/ResNet/resnet50-v2-7.onnx) | -| [yolov5](./examples/yolov5/README.md) | 目标检测 | FP16/INT8 | [yolov5n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5n.onnx)
[yolov5s_relu.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s_relu.onnx)
[yolov5s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5n.onnx)
[yolov5m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5m.onnx) | +| [yolov5](./examples/yolov5/README.md) | 目标检测 | FP16/INT8 | [yolov5n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5n.onnx)
[yolov5s_relu.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s_relu.onnx)
[yolov5s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5s.onnx)
[yolov5m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov5/yolov5m.onnx) | | [yolov6](./examples/yolov6/README.md) | 目标检测 | FP16/INT8 | [yolov6n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov6/yolov6n.onnx)
[yolov6s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov6/yolov6s.onnx)
[yolov6m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov6/yolov6m.onnx) | | [yolov7](./examples/yolov7/README.md) | 目标检测 | FP16/INT8 | [yolov7-tiny.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov7/yolov7-tiny.onnx)
[yolov7.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov7/yolov7.onnx) | | [yolov8](./examples/yolov8/README.md) | 目标检测 | FP16/INT8 | [yolov8n.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8/yolov8n.onnx)
[yolov8s.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8/yolov8s.onnx)
[yolov8m.onnx](https://ftrg.zbox.filez.com/v2/delivery/data/95f00b0fc900458ba134f8b180b3f7a1/examples/yolov8/yolov8m.onnx) |