Update references
sgolebiewski-intel authored and kblaszczak-intel committed Jan 29, 2025
1 parent 00b9481 commit 991260a
Showing 28 changed files with 74 additions and 74 deletions.
2 changes: 1 addition & 1 deletion docs/articles_en/about-openvino.rst
Original file line number Diff line number Diff line change
@@ -40,7 +40,7 @@ OpenVINO Ecosystem
Along with the primary components of model optimization and runtime, the toolkit also includes:

* `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__ - a tool for enhanced OpenVINO™ inference to get performance boost with minimal accuracy drop.
-* :doc:`Openvino Notebooks <learn-openvino/interactive-tutorials-python>`- Jupyter Python notebook, which demonstrate key features of the toolkit.
+* :doc:`Openvino Notebooks <get-started/learn-openvino/interactive-tutorials-python>`- Jupyter Python notebook, which demonstrate key features of the toolkit.
* `OpenVINO Model Server <https://github.com/openvinotoolkit/model_server>`__ - a server that enables scalability via a serving microservice.
* :doc:`OpenVINO Training Extensions <documentation/openvino-ecosystem/openvino-training-extensions>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
* :doc:`Dataset Management Framework (Datumaro) <documentation/openvino-ecosystem/datumaro>` - a tool to build, transform, and analyze datasets.
2 changes: 1 addition & 1 deletion docs/articles_en/documentation/openvino-extensibility.rst
@@ -187,6 +187,6 @@ See Also
########

* :doc:`OpenVINO Transformations <openvino-extensibility/transformation-api>`
-* :doc:`Using OpenVINO Runtime Samples <../learn-openvino/openvino-samples>`
+* :doc:`Using OpenVINO Runtime Samples <../get-started/learn-openvino/openvino-samples>`
* :doc:`Hello Shape Infer SSD sample <../get-started/learn-openvino/openvino-samples/hello-reshape-ssd>`

2 changes: 1 addition & 1 deletion docs/articles_en/documentation/openvino-security.rst
@@ -86,4 +86,4 @@ Additional Resources
- Intel® Distribution of OpenVINO™ toolkit `home page <https://software.intel.com/en-us/openvino-toolkit>`__.
- :doc:`Convert a Model <../openvino-workflow/model-preparation/convert-model-to-ir>`.
- :doc:`OpenVINO™ Runtime User Guide <../openvino-workflow/running-inference>`.
-- For more information on Sample Applications, see the :doc:`OpenVINO Samples Overview <../learn-openvino/openvino-samples>`
+- For more information on Sample Applications, see the :doc:`OpenVINO Samples Overview <../get-started/learn-openvino/openvino-samples>`
@@ -156,7 +156,7 @@ need to install additional components. Check the
to see if your case needs any of them.

With the APT distribution, you can build OpenVINO sample files, as explained in the
-:doc:`guide for OpenVINO sample applications <../../../learn-openvino/openvino-samples>`.
+:doc:`guide for OpenVINO sample applications <../../../get-started/learn-openvino/openvino-samples>`.
For C++ and C, just run the ``build_samples.sh`` script:

.. tab-set::
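For reference, the build step named in the truncated tab-set above can be sketched as follows. The paths are assumptions — they vary by OpenVINO version and install method — but ``setupvars.sh`` and ``build_samples.sh`` are the scripts the surrounding text refers to:

```shell
# Sketch only: paths below are assumptions and differ per install method/version.
# 1. Load the OpenVINO environment variables.
source /opt/intel/openvino/setupvars.sh
# 2. Build the C and C++ samples; the script typically places binaries
#    under a build directory in your home folder.
/usr/share/openvino/samples/cpp/build_samples.sh
```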
@@ -224,7 +224,7 @@ Learn more about how to integrate a model in OpenVINO applications by trying out
.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
:width: 400

-Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
+Visit the :doc:`Tutorials <../../../get-started/learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:

* `OpenVINO Python API Tutorial <../../notebooks/openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <../../notebooks/hello-world-with-output.html>`__
@@ -240,7 +240,7 @@ Learn more about how to integrate a model in OpenVINO applications by trying out
.. image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg
:width: 400

-Visit the :doc:`Samples <../../../learn-openvino/openvino-samples>` page for other C++ example applications to get you started with OpenVINO, such as:
+Visit the :doc:`Samples <../../../get-started/learn-openvino/openvino-samples>` page for other C++ example applications to get you started with OpenVINO, such as:

* :doc:`Basic object detection with the Hello Reshape SSD C++ sample <../../../get-started/learn-openvino/openvino-samples/hello-reshape-ssd>`
* :doc:`Object classification sample <../../../get-started/learn-openvino/openvino-samples/hello-classification>`
@@ -145,7 +145,7 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
:width: 400

-Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
+Visit the :doc:`Tutorials <../../../get-started/learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:

* `OpenVINO Python API Tutorial <../../notebooks/openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <../../notebooks/hello-world-with-output.html>`__
@@ -61,7 +61,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things:
* To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`.
* See pre-trained deep learning models on `Hugging Face <https://huggingface.co/OpenVINO>`__.
* Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`.
-* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`.
+* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../get-started/learn-openvino/openvino-samples>`.
* Check out the OpenVINO `product home page <https://software.intel.com/en-us/openvino-toolkit>`__.


@@ -72,7 +72,7 @@ Additional Resources
* Learn more about :doc:`OpenVINO Workflow <../../../openvino-workflow>`.
* To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`.
* Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`.
-* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`.
+* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../get-started/learn-openvino/openvino-samples>`.
* Check out the OpenVINO `product home page <https://software.intel.com/en-us/openvino-toolkit>`__.


@@ -149,7 +149,7 @@ your web browser.
Get started with Python
+++++++++++++++++++++++

-Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more
+Visit the :doc:`Tutorials <../../../get-started/learn-openvino/interactive-tutorials-python>` page for more
Jupyter Notebooks to get you started with OpenVINO, such as:

* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2025/notebooks/openvino-api-with-output.html>`__
@@ -83,7 +83,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things:
* To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`.
* See pre-trained deep learning models on `Hugging Face <https://huggingface.co/OpenVINO>`__.
* Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`.
-* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`.
+* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../get-started/learn-openvino/openvino-samples>`.
* Check out the OpenVINO `product home page <https://software.intel.com/en-us/openvino-toolkit>`__ .


@@ -117,7 +117,7 @@ need to install additional components. Check the
to see if your case needs any of them.
With the YUM distribution, you can build OpenVINO sample files, as explained in the
-:doc:`guide for OpenVINO sample applications <../../../learn-openvino/openvino-samples>`.
+:doc:`guide for OpenVINO sample applications <../../../get-started/learn-openvino/openvino-samples>`.
For C++ and C, just run the ``build_samples.sh`` script:
.. tab-set::
@@ -46,10 +46,10 @@ Additional Resources
* `Google Colab <https://colab.research.google.com/>`__


-.. |binder logo| image:: ../assets/images/launch_in_binder.svg
+.. |binder logo| image:: ../../assets/images/launch_in_binder.svg
   :class: notebook-badge-p
   :alt: Binder button
-.. |colab logo| image:: ../assets/images/open_in_colab.svg
+.. |colab logo| image:: ../../assets/images/open_in_colab.svg
:class: notebook-badge-p
:alt: Google Colab button

@@ -86,4 +86,4 @@ Additional Resources
####################

* :doc:`Get Started with Samples <openvino-samples/get-started-demos>`
-* :doc:`OpenVINO Runtime User Guide <../openvino-workflow/running-inference>`
+* :doc:`OpenVINO Runtime User Guide <../../openvino-workflow/running-inference>`
@@ -11,7 +11,7 @@ Benchmark Tool
This page demonstrates how to use the Benchmark Tool to estimate deep learning inference
performance on supported devices. Note that the MULTI plugin mentioned here is considered
a legacy tool and currently is just a mapping of the
-:doc:`AUTO plugin <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`.
+:doc:`AUTO plugin <../../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`.

.. note::

@@ -27,13 +27,13 @@ Basic Usage
:sync: python

The Python ``benchmark_app`` is automatically installed when you install OpenVINO
-using :doc:`PyPI <../../get-started/install-openvino/install-openvino-pip>`.
+using :doc:`PyPI <../../../get-started/install-openvino/install-openvino-pip>`.
Before running ``benchmark_app``, make sure the ``openvino_env`` virtual
environment is activated, and navigate to the directory where your model is located.

The benchmarking application works with models in the OpenVINO IR
(``model.xml`` and ``model.bin``) and ONNX (``model.onnx``) formats.
-Make sure to :doc:`convert your models <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+Make sure to :doc:`convert your models <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
if necessary.

To run benchmarking with default options on a model, use the following command:
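The default command referenced above (elided by the diff view) typically looks like the following; ``model.xml`` is a placeholder for your own IR file:

```shell
# Benchmark an OpenVINO IR model with all-default options (device, hints, duration).
# "model.xml" is a placeholder; its companion model.bin must sit in the same directory.
benchmark_app -m model.xml
```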
@@ -59,7 +59,7 @@ Basic Usage

The benchmarking application works with models in the OpenVINO IR, TensorFlow,
TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If you need it,
-OpenVINO also allows you to :doc:`convert your models <../../openvino-workflow/model-preparation/convert-model-to-ir>`.
+OpenVINO also allows you to :doc:`convert your models <../../../openvino-workflow/model-preparation/convert-model-to-ir>`.

To run benchmarking with default options on a model, use the following command:

@@ -182,10 +182,10 @@ parallel inference requests to utilize all the threads available on the device.
On GPU, it automatically sets the inference batch size to fill up the GPU memory available.

For more information on performance hints, see the
-:doc:`High-level Performance Hints <../../openvino-workflow/running-inference/optimize-inference/high-level-performance-hints>` page.
+:doc:`High-level Performance Hints <../../../openvino-workflow/running-inference/optimize-inference/high-level-performance-hints>` page.
For more details on optimal runtime configurations and how they are automatically
determined using performance hints, see
-:doc:`Runtime Inference Optimizations <../../openvino-workflow/running-inference/optimize-inference>`.
+:doc:`Runtime Inference Optimizations <../../../openvino-workflow/running-inference/optimize-inference>`.


Device
@@ -220,7 +220,7 @@ You may also specify ``AUTO`` as the device, in which case the ``benchmark_app``
automatically select the best device for benchmarking and support it with the
CPU at the model loading stage. This may result in increased performance, thus,
should be used purposefully. For more information, see the
-:doc:`Automatic device selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` page.
+:doc:`Automatic device selection <../../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` page.

.. note::

@@ -934,4 +934,4 @@ Additional Resources
- :doc:`Get Started with Samples <get-started-demos>`
- :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
@@ -26,7 +26,7 @@ resulting model, downloads a dataset and runs a benchmark on the dataset.


You can see the explicit description of each sample step at
-:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
section of "Integrate OpenVINO™ Runtime with Your Application" guide.

Running
@@ -60,8 +60,8 @@ The sample outputs how long it takes to process a dataset.
Additional Resources
####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
- :doc:`Get Started with Samples <get-started-demos>`
- :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
- `Bert Benchmark Python Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/python/benchmark/bert_benchmark/README.md>`__
@@ -9,9 +9,9 @@ Get Started with Samples

To use OpenVINO samples, install OpenVINO using one of the following distributions:

-* Archive files (recommended) - :doc:`Linux <../../get-started/install-openvino/install-openvino-archive-linux>` | :doc:`Windows <../../get-started/install-openvino/install-openvino-archive-windows>` | :doc:`macOS <../../get-started/install-openvino/install-openvino-archive-macos>`
-* :doc:`APT <../../get-started/install-openvino/install-openvino-apt>` or :doc:`YUM <../../get-started/install-openvino/install-openvino-yum>` for Linux
-* :doc:`Docker image <../../get-started/install-openvino/install-openvino-docker-linux>`
+* Archive files (recommended) - :doc:`Linux <../../../get-started/install-openvino/install-openvino-archive-linux>` | :doc:`Windows <../../../get-started/install-openvino/install-openvino-archive-windows>` | :doc:`macOS <../../../get-started/install-openvino/install-openvino-archive-macos>`
+* :doc:`APT <../../../get-started/install-openvino/install-openvino-apt>` or :doc:`YUM <../../../get-started/install-openvino/install-openvino-yum>` for Linux
+* :doc:`Docker image <../../../get-started/install-openvino/install-openvino-docker-linux>`
* `Build from source <https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build.md>`__

If you install OpenVINO Runtime via archive files, sample applications are created in the following directories:
@@ -23,7 +23,7 @@ If you install OpenVINO Runtime via archive files, sample applications are creat
.. note::
If you install OpenVINO without samples, you can still get them directly from `the OpenVINO repository <https://github.com/openvinotoolkit/openvino/>`__.

-Before you build samples, refer to the :doc:`system requirements <../../about-openvino/release-notes-openvino/system-requirements>` page and make sure that all the prerequisites have been installed. Next, you can perform the following steps:
+Before you build samples, refer to the :doc:`system requirements <../../../about-openvino/release-notes-openvino/system-requirements>` page and make sure that all the prerequisites have been installed. Next, you can perform the following steps:

1. :ref:`Build Samples <build-samples>`.
2. :ref:`Select a Sample <select-sample>`.
@@ -409,7 +409,7 @@ The following command shows how to run the Image Classification Code Sample usin

.. note::

-* Running inference on Intel® Processor Graphics (GPU) requires :doc:`additional hardware configuration steps <../../get-started/install-openvino/configurations/configurations-intel-gpu>`, as described earlier on this page.
+* Running inference on Intel® Processor Graphics (GPU) requires :doc:`additional hardware configuration steps <../../../get-started/install-openvino/configurations/configurations-intel-gpu>`, as described earlier on this page.
* Running on GPU is not compatible with macOS.

.. tab-set::
@@ -469,7 +469,7 @@ The following command shows how to run the Image Classification Code Sample usin
When the sample application is complete, you are given the label and confidence for the top 10 categories. The input image and sample output of the inference results is shown below:

-.. image:: ../../assets/images/dog.png
+.. image:: ../../../assets/images/dog.png

.. code-block:: sh
@@ -50,7 +50,7 @@ inference, and processes output data, logging each step in a standard output str


You can see the explicit description of each sample step at
-:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
section of "Integrate OpenVINO™ Runtime with Your Application" guide.

Running
@@ -94,10 +94,10 @@ To run the sample, you need to specify a model and an image:
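As an illustration of the context line above, the Python variant of the Hello Classification sample is typically invoked with a model path, an image path, and a device name; the file names here are placeholders:

```shell
# Placeholder arguments: an IR model, an input image, and a target device.
# The sample prints the top-10 class labels with confidence scores.
python hello_classification.py model.xml banana.jpg CPU
```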
application or reconvert your model using model conversion API with
``reverse_input_channels`` argument specified. For more information about
the argument, refer to the **Color Conversion** section of
-:doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
+:doc:`Preprocessing API <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
- Before running the sample with a trained model, make sure the model is
converted to the intermediate representation (IR) format (\*.xml + \*.bin)
-using the :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`.
+using the :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`.
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
- The sample supports NCHW model layout only.

@@ -254,10 +254,10 @@ Sample Output
Additional Resources
####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
- :doc:`Get Started with Samples <get-started-demos>`
- :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
- `OpenVINO Runtime C API <https://docs.openvino.ai/2025/api/c_cpp_api/group__ov__c__api.html>`__
- `Hello Classification Python Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/README.md>`__
- `Hello Classification C++ Sample on Github <https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/hello_classification/README.md>`__
