[DOCs] Fixing references for 2025.0 (#28765)
### Details:
 - *item1*
 - *...*

### Tickets:
 - *ticket-id*

---------

Co-authored-by: sgolebiewski-intel <sebastianx.golebiewski@intel.com>
tsavina and sgolebiewski-intel authored Jan 31, 2025
1 parent de91371 commit 92cb2d8
Showing 25 changed files with 433 additions and 433 deletions.
2 changes: 1 addition & 1 deletion docs/articles_en/about-openvino/release-notes-openvino.rst
@@ -105,7 +105,7 @@ Deprecation And Support
Using deprecated features and components is not advised. They are available to enable a smooth
transition to new solutions and will be discontinued in the future. To keep using discontinued
features, you will have to revert to the last LTS OpenVINO version supporting them.
-For more details, refer to the `OpenVINO Legacy Features and Components <https://docs.openvino.ai/2024/documentation/legacy-features.html>`__
+For more details, refer to the `OpenVINO Legacy Features and Components <https://docs.openvino.ai/2025/documentation/legacy-features.html>`__
page.


@@ -262,7 +262,7 @@ You need a model that is specific for your inference task. You can get it from o
Convert the Model
--------------------

-If your model requires conversion, check the `article <https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/get-started-demos.html>`__ for information on how to do it.
+If your model requires conversion, check the :doc:`article <../../../openvino-workflow/model-preparation>` for information on how to do it.
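In practice, conversion is typically a one-line step. A minimal sketch using the OpenVINO converter CLI, assuming the ``openvino`` package is installed; ``model.onnx`` is a hypothetical input path:

```shell
# Hypothetical input file; the `ovc` command ships with `pip install openvino`.
# Produces model.xml and model.bin (OpenVINO IR) in the current directory.
ovc model.onnx
```

The same conversion is available from Python via ``openvino.convert_model`` when you need to stay inside a script or notebook.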

.. _download-media:

2 changes: 1 addition & 1 deletion docs/notebooks/convert-to-openvino-with-output.rst
@@ -54,7 +54,7 @@ OpenVINO IR format


OpenVINO `Intermediate Representation
-(IR) <https://docs.openvino.ai/2024/documentation/openvino-ir-format.html>`__
+(IR) <https://docs.openvino.ai/2025/documentation/openvino-ir-format.html>`__
is the proprietary model format of OpenVINO. It is produced after
converting a model with model conversion API. Model conversion API
translates the frequently used deep learning operations to their
@@ -941,7 +941,7 @@ advance and fill it in as the inference requests are executed.
Let’s compare the models and plot the results.
**Note**: To get a more accurate benchmark, use the `Benchmark Python
-Tool <https://docs.openvino.ai/2024/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__
+Tool <https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__
.. code:: ipython3
@@ -623,7 +623,7 @@ Compare Performance of the FP32 IR Model and Quantized Models

To measure the inference performance of the ``FP32`` and ``INT8``
models, we use `Benchmark
-Tool <https://docs.openvino.ai/2024/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__
+Tool <https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__
- OpenVINO’s inference performance measurement tool. Benchmark tool is a
command line application, part of OpenVINO development tools, that can
be run in the notebook with ``! benchmark_app`` or
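As an illustration of the command-line usage described above, a hedged sketch, assuming an OpenVINO installation that provides the ``benchmark_app`` entry point; the model path is a placeholder:

```shell
# Benchmark an IR model on CPU for 15 seconds, reporting throughput and latency.
# "model.xml" is a hypothetical path; -d selects the target device (CPU, GPU, ...).
benchmark_app -m model.xml -d CPU -t 15
```

Inside a notebook, the same invocation is prefixed with ``!`` to escape to the shell, as the text above notes.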
2 changes: 1 addition & 1 deletion docs/notebooks/ddcolor-image-colorization-with-output.rst
@@ -499,7 +499,7 @@ Compare inference time of the FP16 and INT8 models

To measure the inference performance of OpenVINO FP16 and INT8 models,
use `Benchmark
-Tool <https://docs.openvino.ai/2024/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__.
+Tool <https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__.

**NOTE**: For the most accurate performance estimation, it is
recommended to run ``benchmark_app`` in a terminal/command prompt
2 changes: 1 addition & 1 deletion docs/notebooks/depth-anything-v2-with-output.rst
@@ -977,7 +977,7 @@ Compare inference time of the FP16 and INT8 models

To measure the inference performance of OpenVINO FP16 and INT8 models,
use `Benchmark
-Tool <https://docs.openvino.ai/2024/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__.
+Tool <https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__.

**NOTE**: For the most accurate performance estimation, it is
recommended to run ``benchmark_app`` in a terminal/command prompt
2 changes: 1 addition & 1 deletion docs/notebooks/depth-anything-with-output.rst
@@ -940,7 +940,7 @@ Compare inference time of the FP16 and INT8 models

To measure the inference performance of OpenVINO FP16 and INT8 models,
use `Benchmark
-Tool <https://docs.openvino.ai/2024/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__.
+Tool <https://docs.openvino.ai/2025/get-started/learn-openvino/openvino-samples/benchmark-tool.html>`__.

**NOTE**: For the most accurate performance estimation, it is
recommended to run ``benchmark_app`` in a terminal/command prompt