[DOCS] OV Benchmark update v3
akopytko committed Feb 4, 2025
1 parent ca454c8 commit 3e9166a
Showing 4 changed files with 288 additions and 567 deletions.
2 changes: 1 addition & 1 deletion docs/articles_en/about-openvino/performance-benchmarks.rst
@@ -160,7 +160,7 @@ For a listing of all platforms and configurations used for testing, refer to the
**Disclaimers**

* Intel® Distribution of OpenVINO™ toolkit performance results are based on release
2025.0, as of February 05, 2025.
2025.0 as of January 28, 2025.

* OpenVINO Model Server performance results are based on release
2024.5, as of November 20, 2024.

@@ -66,6 +66,7 @@ the table for more information.
- -0.09%
- -0.02%
- -0.04%

.. list-table:: Model Accuracy for BF16, FP32 and FP16 (FP16: Arc only. BF16: Xeon® 6972P only)
:header-rows: 1

@@ -125,7 +126,8 @@ the table for more information.
- 0.01%
-
- -0.03%
.. list-table:: Model Accuracy for AMX-FP16, AMX-INT4, Arc-FP16 and Arc-INT4 (Arc™ A-series)

.. list-table:: Model Accuracy for AMX-FP16, AMX-INT4, Arc-FP16 and Arc-INT4 (Arc™ B-series)
:header-rows: 1

* - OpenVINO™ Model name
@@ -135,6 +137,13 @@ the table for more information.
- B, AMX-INT4
- C, Arc-FP16
- D, Arc-INT4
* - GLM4-9B-Chat
- Data Default WWB
- Similarity
- 6.9%
- 3.8%
- 6.3%
- 15.1%
* - Qwen-2.5-7B-instruct
- Data Default WWB
- Similarity
@@ -209,6 +218,7 @@ the table for more information.
Notes: For all accuracy metrics, a "-" (minus sign) indicates an accuracy drop.
The Similarity metric is the distance from "perfect" and is therefore always positive.
Similarity is cosine similarity: the dot product of two vectors divided by the product of their lengths.
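
As an illustration of the note above, the following is a minimal NumPy sketch of the cosine-similarity calculation. It is not taken from the benchmarking pipeline; the function name and sample vectors are hypothetical, and reading the reported value as the gap from a perfect score of 1 is an assumption.

.. code-block:: python

   import numpy as np

   def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
       """Dot product of two vectors divided by the product of their lengths."""
       return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

   # Hypothetical embeddings of a reference answer and a generated answer.
   reference = np.array([0.12, 0.85, 0.33])
   generated = np.array([0.10, 0.80, 0.40])

   similarity = cosine_similarity(reference, generated)  # 1.0 would mean identical direction
   gap_from_perfect = 1.0 - similarity                   # assumed reading of the reported, always-positive value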

.. raw:: html

<link rel="stylesheet" type="text/css" href="../../_static/css/benchmark-banner.css">

@@ -63,36 +63,59 @@ Performance Information F.A.Q.
- Meta AI
- Auto regressive language
- 8K
* - `Llama-3.2-3B <https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct>`__
* - `Llama-3.2-3B-Instruct <https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct>`__
- Meta AI
- Auto regressive language
- 128K
* - `Mistral-7b-V0.1 <https://huggingface.co/mistralai/Mistral-7B-v0.1>`__
* - `Mistral-7b-Instruct-V0.2 <https://huggingface.co/mistralai/Mistral-7B-v0.2>`__
- Mistral AI
- Auto regressive language
- 4096
* - `Phi3-4k-mini <https://huggingface.co/microsoft/Phi-3-mini-4k-instruct>`__
- 32K
* - `Phi3-4k-mini-Instruct <https://huggingface.co/microsoft/Phi-3-mini-4k-instruct>`__
- Huggingface
- Auto regressive language
- 4096
* - `Qwen-2-7B <https://huggingface.co/Qwen/Qwen2-7B>`__
- Huggingface
- Auto regressive language
- 128K
* - `Qwen-2.5-7B-Instruct <https://huggingface.co/Qwen/Qwen2.5-7B-Instruct>`__
- Huggingface
- Auto regressive language
- 128K
* - `Stable-Diffusion-V1-5 <https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5>`__
- Huggingface
- Latent Diffusion Model
- 77
* - `FLUX.1-schnell <https://huggingface.co/black-forest-labs/FLUX.1-schnell>`__
- Huggingface
- Latent Adversarial Diffusion Distillation Model
- 256
* - `bert-base-cased <https://github.com/PaddlePaddle/PaddleNLP/tree/v2.1.1>`__
- BERT
- question / answer
- 128
* - `mask_rcnn_resnet50_atrous_coco <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mask_rcnn_resnet50_atrous_coco>`__
- Mask R-CNN ResNet 50 Atrous
- object instance segmentation
- 800x1365
* - `mobilenet-v2 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/mobilenet-v2-pytorch>`__
- Mobilenet V2 PyTorch
- classification
- 224x224
* - `resnet-50 <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf>`__
- ResNet-50_v1_ILSVRC-2012
- classification
- 224x224
* - `ssd-resnet34-1200-onnx <https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssd-resnet34-1200-onnx>`__
- ssd-resnet34 onnx model
- object detection
- 1200x1200
* - `yolov8n <https://github.com/ultralytics/ultralytics>`__
- Yolov8nano
- object detection
- 608x608


.. dropdown:: Where can I purchase the specific hardware used in the benchmarking?

Intel partners with vendors all over the world. For a list of Hardware Manufacturers, see the