diff --git a/docs/articles_en/about-openvino.rst b/docs/articles_en/about-openvino.rst index 48422d9b3a7ae9..50251c9e811327 100644 --- a/docs/articles_en/about-openvino.rst +++ b/docs/articles_en/about-openvino.rst @@ -40,7 +40,7 @@ OpenVINO Ecosystem Along with the primary components of model optimization and runtime, the toolkit also includes: * `Neural Network Compression Framework (NNCF) `__ - a tool for enhanced OpenVINO™ inference to get performance boost with minimal accuracy drop. -* :doc:`Openvino Notebooks `- Jupyter Python notebook, which demonstrate key features of the toolkit. +* :doc:`OpenVINO Notebooks ` - Jupyter Python notebooks, which demonstrate key features of the toolkit. * `OpenVINO Model Server `__ - a server that enables scalability via a serving microservice. * :doc:`OpenVINO Training Extensions ` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference. * :doc:`Dataset Management Framework (Datumaro) ` - a tool to build, transform, and analyze datasets. 
diff --git a/docs/articles_en/documentation/openvino-extensibility.rst b/docs/articles_en/documentation/openvino-extensibility.rst index 3f3c7e697ab4a7..0e9042c7578839 100644 --- a/docs/articles_en/documentation/openvino-extensibility.rst +++ b/docs/articles_en/documentation/openvino-extensibility.rst @@ -187,6 +187,6 @@ See Also ######## * :doc:`OpenVINO Transformations ` -* :doc:`Using OpenVINO Runtime Samples <../learn-openvino/openvino-samples>` +* :doc:`Using OpenVINO Runtime Samples <../get-started/learn-openvino/openvino-samples>` * :doc:`Hello Shape Infer SSD sample <../get-started/learn-openvino/openvino-samples/hello-reshape-ssd>` diff --git a/docs/articles_en/documentation/openvino-security.rst b/docs/articles_en/documentation/openvino-security.rst index 03a99ba49e89e2..94c3630e0fa1ac 100644 --- a/docs/articles_en/documentation/openvino-security.rst +++ b/docs/articles_en/documentation/openvino-security.rst @@ -86,4 +86,4 @@ Additional Resources - Intel® Distribution of OpenVINO™ toolkit `home page `__. - :doc:`Convert a Model <../openvino-workflow/model-preparation/convert-model-to-ir>`. - :doc:`OpenVINO™ Runtime User Guide <../openvino-workflow/running-inference>`. -- For more information on Sample Applications, see the :doc:`OpenVINO Samples Overview <../learn-openvino/openvino-samples>` +- For more information on Sample Applications, see the :doc:`OpenVINO Samples Overview <../get-started/learn-openvino/openvino-samples>` diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-apt.rst b/docs/articles_en/get-started/install-openvino/install-openvino-apt.rst index 7612fc9694f5f0..726e2872bdae0e 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-apt.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-apt.rst @@ -156,7 +156,7 @@ need to install additional components. Check the to see if your case needs any of them. 
With the APT distribution, you can build OpenVINO sample files, as explained in the -:doc:`guide for OpenVINO sample applications <../../../learn-openvino/openvino-samples>`. +:doc:`guide for OpenVINO sample applications <../../../get-started/learn-openvino/openvino-samples>`. For C++ and C, just run the ``build_samples.sh`` script: .. tab-set:: diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst index e7c3d2b3b7821f..fca179d815fca7 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-linux.rst @@ -224,7 +224,7 @@ Learn more about how to integrate a model in OpenVINO applications by trying out .. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif :width: 400 - Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as: + Visit the :doc:`Tutorials <../../../get-started/learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as: * `OpenVINO Python API Tutorial <../../notebooks/openvino-api-with-output.html>`__ * `Basic image classification program with Hello Image Classification <../../notebooks/hello-world-with-output.html>`__ @@ -240,7 +240,7 @@ Learn more about how to integrate a model in OpenVINO applications by trying out .. 
image:: https://user-images.githubusercontent.com/36741649/127170593-86976dc3-e5e4-40be-b0a6-206379cd7df5.jpg :width: 400 - Visit the :doc:`Samples <../../../learn-openvino/openvino-samples>` page for other C++ example applications to get you started with OpenVINO, such as: + Visit the :doc:`Samples <../../../get-started/learn-openvino/openvino-samples>` page for other C++ example applications to get you started with OpenVINO, such as: * :doc:`Basic object detection with the Hello Reshape SSD C++ sample <../../../get-started/learn-openvino/openvino-samples/hello-reshape-ssd>` * :doc:`Object classification sample <../../../get-started/learn-openvino/openvino-samples/hello-classification>` diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst b/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst index f1d478c764b308..89aab67b231c41 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-archive-macos.rst @@ -145,7 +145,7 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine .. 
image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif :width: 400 - Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as: + Visit the :doc:`Tutorials <../../../get-started/learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as: * `OpenVINO Python API Tutorial <../../notebooks/openvino-api-with-output.html>`__ * `Basic image classification program with Hello Image Classification <../../notebooks/hello-world-with-output.html>`__ diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst b/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst index 033d3a80e5d57a..ce4655a8e2cf72 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-brew.rst @@ -61,7 +61,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things: * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. * See pre-trained deep learning models on `Hugging Face `__. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. -* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. +* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../get-started/learn-openvino/openvino-samples>`. * Check out the OpenVINO `product home page `__. 
diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-conan.rst b/docs/articles_en/get-started/install-openvino/install-openvino-conan.rst index 06557003b3cbf6..b8c10c9a2a00ac 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-conan.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-conan.rst @@ -72,7 +72,7 @@ Additional Resources * Learn more about :doc:`OpenVINO Workflow <../../../openvino-workflow>`. * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. -* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. +* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../get-started/learn-openvino/openvino-samples>`. * Check out the OpenVINO `product home page `__. diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst b/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst index df276a5aae4a1d..bf0965e3edf7fc 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst @@ -149,7 +149,7 @@ your web browser. 
Get started with Python +++++++++++++++++++++++ -Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more +Visit the :doc:`Tutorials <../../../get-started/learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as: * `OpenVINO Python API Tutorial `__ diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst b/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst index 7e07c0871bf7d7..8dcff56b351557 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-vcpkg.rst @@ -83,7 +83,7 @@ Now that you've installed OpenVINO Runtime, you can try the following things: * To prepare your models for working with OpenVINO, see :doc:`Model Preparation <../../../openvino-workflow/model-preparation>`. * See pre-trained deep learning models on `Hugging Face `__. * Learn more about :doc:`Inference with OpenVINO Runtime <../../../openvino-workflow/running-inference>`. -* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../learn-openvino/openvino-samples>`. +* See sample applications in :doc:`OpenVINO toolkit Samples Overview <../../../get-started/learn-openvino/openvino-samples>`. * Check out the OpenVINO `product home page `__ . diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst b/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst index 0d3b9f00afce63..b2d1870376b444 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-yum.rst @@ -117,7 +117,7 @@ need to install additional components. Check the to see if your case needs any of them. 
With the YUM distribution, you can build OpenVINO sample files, as explained in the -:doc:`guide for OpenVINO sample applications <../../../learn-openvino/openvino-samples>`. +:doc:`guide for OpenVINO sample applications <../../../get-started/learn-openvino/openvino-samples>`. For C++ and C, just run the ``build_samples.sh`` script: .. tab-set:: diff --git a/docs/articles_en/get-started/learn-openvino/interactive-tutorials-python.rst b/docs/articles_en/get-started/learn-openvino/interactive-tutorials-python.rst index 637c22aa4587a1..f8b50e489488fa 100644 --- a/docs/articles_en/get-started/learn-openvino/interactive-tutorials-python.rst +++ b/docs/articles_en/get-started/learn-openvino/interactive-tutorials-python.rst @@ -46,10 +46,10 @@ Additional Resources * `Google Colab `__ -.. |binder logo| image:: ../assets/images/launch_in_binder.svg +.. |binder logo| image:: ../../assets/images/launch_in_binder.svg :class: notebook-badge-p :alt: Binder button -.. |colab logo| image:: ../assets/images/open_in_colab.svg +.. 
|colab logo| image:: ../../assets/images/open_in_colab.svg :class: notebook-badge-p :alt: Google Colab button diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples.rst index a2f99addbbaf01..fdb14fe635a7d4 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples.rst @@ -86,4 +86,4 @@ Additional Resources #################### * :doc:`Get Started with Samples ` -* :doc:`OpenVINO Runtime User Guide <../openvino-workflow/running-inference>` +* :doc:`OpenVINO Runtime User Guide <../../openvino-workflow/running-inference>` diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/benchmark-tool.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/benchmark-tool.rst index cde0eef055d5cb..3f841f624d7f77 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/benchmark-tool.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/benchmark-tool.rst @@ -11,7 +11,7 @@ Benchmark Tool This page demonstrates how to use the Benchmark Tool to estimate deep learning inference performance on supported devices. Note that the MULTI plugin mentioned here is considered a legacy tool and currently is just a mapping of the -:doc:`AUTO plugin <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`. +:doc:`AUTO plugin <../../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>`. .. note:: @@ -27,13 +27,13 @@ Basic Usage :sync: python The Python ``benchmark_app`` is automatically installed when you install OpenVINO - using :doc:`PyPI <../../get-started/install-openvino/install-openvino-pip>`. + using :doc:`PyPI <../../../get-started/install-openvino/install-openvino-pip>`. 
Before running ``benchmark_app``, make sure the ``openvino_env`` virtual environment is activated, and navigate to the directory where your model is located. The benchmarking application works with models in the OpenVINO IR (``model.xml`` and ``model.bin``) and ONNX (``model.onnx``) formats. - Make sure to :doc:`convert your models <../../openvino-workflow/model-preparation/convert-model-to-ir>` + Make sure to :doc:`convert your models <../../../openvino-workflow/model-preparation/convert-model-to-ir>` if necessary. To run benchmarking with default options on a model, use the following command: @@ -59,7 +59,7 @@ Basic Usage The benchmarking application works with models in the OpenVINO IR, TensorFlow, TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If you need it, - OpenVINO also allows you to :doc:`convert your models <../../openvino-workflow/model-preparation/convert-model-to-ir>`. + OpenVINO also allows you to :doc:`convert your models <../../../openvino-workflow/model-preparation/convert-model-to-ir>`. To run benchmarking with default options on a model, use the following command: @@ -182,10 +182,10 @@ parallel inference requests to utilize all the threads available on the device. On GPU, it automatically sets the inference batch size to fill up the GPU memory available. For more information on performance hints, see the -:doc:`High-level Performance Hints <../../openvino-workflow/running-inference/optimize-inference/high-level-performance-hints>` page. +:doc:`High-level Performance Hints <../../../openvino-workflow/running-inference/optimize-inference/high-level-performance-hints>` page. For more details on optimal runtime configurations and how they are automatically determined using performance hints, see -:doc:`Runtime Inference Optimizations <../../openvino-workflow/running-inference/optimize-inference>`. +:doc:`Runtime Inference Optimizations <../../../openvino-workflow/running-inference/optimize-inference>`. 
Device @@ -220,7 +220,7 @@ You may also specify ``AUTO`` as the device, in which case the ``benchmark_app`` automatically select the best device for benchmarking and support it with the CPU at the model loading stage. This may result in increased performance, thus, should be used purposefully. For more information, see the -:doc:`Automatic device selection <../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` page. +:doc:`Automatic device selection <../../../openvino-workflow/running-inference/inference-devices-and-modes/auto-device-selection>` page. .. note:: @@ -934,4 +934,4 @@ Additional Resources - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` +- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>` diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/bert-benchmark.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/bert-benchmark.rst index 13f18fc3272b34..459dbcdd5817a4 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/bert-benchmark.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/bert-benchmark.rst @@ -26,7 +26,7 @@ resulting model, downloads a dataset and runs a benchmark on the dataset. You can see the explicit description of each sample step at -:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide. Running @@ -60,8 +60,8 @@ The sample outputs how long it takes to process a dataset. 
Additional Resources #################### -- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` +- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>` - `Bert Benchmark Python Sample on Github `__ diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/get-started-demos.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/get-started-demos.rst index 64c0b16f7f9725..f61ccf5cacd2f3 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/get-started-demos.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/get-started-demos.rst @@ -9,9 +9,9 @@ Get Started with Samples To use OpenVINO samples, install OpenVINO using one of the following distributions: -* Archive files (recommended) - :doc:`Linux <../../get-started/install-openvino/install-openvino-archive-linux>` | :doc:`Windows <../../get-started/install-openvino/install-openvino-archive-windows>` | :doc:`macOS <../../get-started/install-openvino/install-openvino-archive-macos>` -* :doc:`APT <../../get-started/install-openvino/install-openvino-apt>` or :doc:`YUM <../../get-started/install-openvino/install-openvino-yum>` for Linux -* :doc:`Docker image <../../get-started/install-openvino/install-openvino-docker-linux>` +* Archive files (recommended) - :doc:`Linux <../../../get-started/install-openvino/install-openvino-archive-linux>` | :doc:`Windows <../../../get-started/install-openvino/install-openvino-archive-windows>` | :doc:`macOS 
<../../../get-started/install-openvino/install-openvino-archive-macos>` +* :doc:`APT <../../../get-started/install-openvino/install-openvino-apt>` or :doc:`YUM <../../../get-started/install-openvino/install-openvino-yum>` for Linux +* :doc:`Docker image <../../../get-started/install-openvino/install-openvino-docker-linux>` * `Build from source `__ If you install OpenVINO Runtime via archive files, sample applications are created in the following directories: @@ -23,7 +23,7 @@ If you install OpenVINO Runtime via archive files, sample applications are creat .. note:: If you install OpenVINO without samples, you can still get them directly from `the OpenVINO repository `__. -Before you build samples, refer to the :doc:`system requirements <../../about-openvino/release-notes-openvino/system-requirements>` page and make sure that all the prerequisites have been installed. Next, you can perform the following steps: +Before you build samples, refer to the :doc:`system requirements <../../../about-openvino/release-notes-openvino/system-requirements>` page and make sure that all the prerequisites have been installed. Next, you can perform the following steps: 1. :ref:`Build Samples `. 2. :ref:`Select a Sample `. @@ -409,7 +409,7 @@ The following command shows how to run the Image Classification Code Sample usin .. note:: - * Running inference on Intel® Processor Graphics (GPU) requires :doc:`additional hardware configuration steps <../../get-started/install-openvino/configurations/configurations-intel-gpu>`, as described earlier on this page. + * Running inference on Intel® Processor Graphics (GPU) requires :doc:`additional hardware configuration steps <../../../get-started/install-openvino/configurations/configurations-intel-gpu>`, as described earlier on this page. * Running on GPU is not compatible with macOS. .. 
tab-set:: @@ -469,7 +469,7 @@ The following command shows how to run the Image Classification Code Sample usin When the sample application is complete, you are given the label and confidence for the top 10 categories. The input image and sample output of the inference results is shown below: -.. image:: ../../assets/images/dog.png +.. image:: ../../../assets/images/dog.png .. code-block:: sh diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-classification.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-classification.rst index 219365e2bc0d7f..482ec739b40664 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-classification.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-classification.rst @@ -50,7 +50,7 @@ inference, and processes output data, logging each step in a standard output str You can see the explicit description of each sample step at -:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide. Running @@ -94,10 +94,10 @@ To run the sample, you need to specify a model and an image: application or reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about the argument, refer to the **Color Conversion** section of - :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. + :doc:`Preprocessing API <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. 
- Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) - using the :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`. + using the :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`. - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. - The sample supports NCHW model layout only. @@ -254,10 +254,10 @@ Sample Output Additional Resources #################### -- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` +- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>` - `OpenVINO Runtime C API `__ - `Hello Classification Python Sample on Github `__ - `Hello Classification C++ Sample on Github `__ diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-nv12-input-classification.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-nv12-input-classification.rst index 3298a8625e6bfe..3aa8b0a10fb996 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-nv12-input-classification.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-nv12-input-classification.rst @@ -45,7 +45,7 @@ You can place labels in ``.labels`` file near the model to get pretty output. 
You can see the explicit description of each sample step at -:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide. Running @@ -96,10 +96,10 @@ the following command, you can convert an ordinary image to an uncompressed NV12 you trained your model to work with RGB order, you need to reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about the argument, refer to the - **Color Conversion** section of :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. + **Color Conversion** section of :doc:`Preprocessing API <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`. - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) - using the :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`. + using the :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`. - The sample accepts models in ONNX format (.onnx) that do not require preprocessing. 
Example @@ -205,10 +205,10 @@ Sample Output Additional Resources #################### -- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` -- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` +- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>` - `API Reference `__ - `Hello NV12 Input Classification C++ Sample on Github `__ - `Hello NV12 Input Classification C Sample on Github `__ diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-query-device.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-query-device.rst index 46f145a808e330..c14012526ea7be 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-query-device.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-query-device.rst @@ -9,7 +9,7 @@ Hello Query Device Sample This sample demonstrates how to show OpenVINO™ Runtime devices and prints their -metrics and default configuration values using :doc:`Query Device API feature <../../openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties>`. +metrics and default configuration values using :doc:`Query Device API feature <../../../openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties>`. To build the sample, use instructions available at :ref:`Build the Sample Applications ` section in "Get Started with Samples" guide. 
@@ -130,7 +130,7 @@ For example: Additional Resources #################### -- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` +- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO™ Toolkit Samples <../openvino-samples>` - `Hello Query Device Python Sample on Github `__ diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-reshape-ssd.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-reshape-ssd.rst index 0e929bb5ed2701..5f33b6b6b32e84 100644 --- a/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-reshape-ssd.rst +++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/hello-reshape-ssd.rst @@ -9,7 +9,7 @@ Hello Reshape SSD Sample This sample demonstrates how to do synchronous inference of object detection models -using :doc:`Shape Inference feature <../../openvino-workflow/running-inference/changing-input-shape>`. Before +using :doc:`Shape Inference feature <../../../openvino-workflow/running-inference/changing-input-shape>`. Before using the sample, refer to the following requirements: - Models with only one input and output are supported. @@ -46,7 +46,7 @@ As a result, the program creates an output image, logging each step in a standar You can see the explicit description of -each sample step at :doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide. +each sample step at :doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide. 
 Running
 ####################
@@ -84,10 +84,10 @@ To run the sample, you need to specify a model and an image:
      reconvert your model using model conversion API with ``reverse_input_channels``
      argument specified. For more information about the argument, refer to the
      **Color Conversion** section of
-     :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
+     :doc:`Preprocessing API <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
    - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin)
-     using :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`.
+     using :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`.
    - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.

 Example
@@ -201,10 +201,10 @@ Sample Output

 Additional Resources
 ####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 - :doc:`Get Started with Samples `
 - :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
 - `Hello Reshape SSD Python Sample on Github `__
 - `Hello Reshape SSD C++ Sample on Github `__
diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/image-classification-async.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/image-classification-async.rst
index d88b950463210d..96fc49c2f08645 100644
--- a/docs/articles_en/get-started/learn-openvino/openvino-samples/image-classification-async.rst
+++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/image-classification-async.rst
@@ -56,7 +56,7 @@ You can place labels in ``.labels`` file near the model to get pretty output.

 You can see the explicit description of each sample step at
-:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 section of "Integrate OpenVINO™ Runtime with Your Application" guide.
@@ -129,9 +129,9 @@ To run the sample, you need to specify a model and an image:

 .. note::

-   - By default, OpenVINO™ Toolkit Samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about the argument, refer to the **Color Conversion** section of :doc:`Preprocessing API <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
+   - By default, OpenVINO™ Toolkit Samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using model conversion API with ``reverse_input_channels`` argument specified. For more information about the argument, refer to the **Color Conversion** section of :doc:`Preprocessing API <../../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details>`.
-   - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`.
+   - Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`.
    - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
@@ -323,9 +323,9 @@ Sample Output

 Additional Resources
 ####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 - :doc:`Get Started with Samples `
 - :doc:`Using OpenVINO™ Toolkit Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
 - `Image Classification Async Python Sample on Github `__
 - `Image Classification Async C++ Sample on Github `__
diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/model-creation.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/model-creation.rst
index ad01cee53a69b1..87853f5bbf15a2 100644
--- a/docs/articles_en/get-started/learn-openvino/openvino-samples/model-creation.rst
+++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/model-creation.rst
@@ -8,7 +8,7 @@ Model Creation Sample
               Inference Request API (Python, C++).

-This sample demonstrates how to run inference using a :doc:`model <../../openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation>`
+This sample demonstrates how to run inference using a :doc:`model <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation>`
 built on the fly that uses weights from the LeNet classification model, which is known
 to work well on digit classification tasks. You do not need an XML file, the model is
 created from the source code on the fly.

 Before using the sample,
@@ -23,7 +23,7 @@ refer to the following requirements:

 How It Works
 ####################

-At startup, the sample application reads command-line parameters, :doc:`builds a model <../../openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation>`
+At startup, the sample application reads command-line parameters, :doc:`builds a model <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation>`
 and passes the weights file. Then, it loads the model and input data to the OpenVINO™
 Runtime plugin. Finally, it performs synchronous inference and processes output
 data, logging each step in a standard output stream.
@@ -47,7 +47,7 @@ data, logging each step in a standard output stream.
          :language: cpp

-You can see the explicit description of each sample step at :doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide.
+You can see the explicit description of each sample step at :doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>` section of "Integrate OpenVINO™ Runtime with Your Application" guide.

 Running
 ####################
@@ -76,7 +76,7 @@ To run the sample, you need to specify model weights and a device.

    - This sample supports models with FP32 weights only.
    - The ``lenet.bin`` weights file is generated by
-     :doc:`model conversion API <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+     :doc:`model conversion API <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
      from the public LeNet model, with the ``input_shape [64,1,28,28]`` parameter specified.
    - The original model is available in the `Caffe repository `__ on GitHub.
@@ -289,9 +289,9 @@ Sample Output

 Additional Resources
 ####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 - :doc:`Get Started with Samples `
 - :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
 - `Model Creation Python Sample on Github `__
 - `Model Creation C++ Sample on Github `__
diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/sync-benchmark.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/sync-benchmark.rst
index ccaa1f03a35552..f9643855dfd91d 100644
--- a/docs/articles_en/get-started/learn-openvino/openvino-samples/sync-benchmark.rst
+++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/sync-benchmark.rst
@@ -45,7 +45,7 @@ Then, it processes and reports performance results.

 You can see the explicit description of
-each sample step at :doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+each sample step at :doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 section of "Integrate OpenVINO™ Runtime with Your Application" guide.

 Running
@@ -162,9 +162,9 @@ Sample Output

 Additional Resources
 ####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 - :doc:`Get Started with Samples `
 - :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
 - `Sync Benchmark Python Sample on Github `__
 - `Sync Benchmark C++ Sample on Github `__
diff --git a/docs/articles_en/get-started/learn-openvino/openvino-samples/throughput-benchmark.rst b/docs/articles_en/get-started/learn-openvino/openvino-samples/throughput-benchmark.rst
index 4632fab82bd0ea..8baabc49998482 100644
--- a/docs/articles_en/get-started/learn-openvino/openvino-samples/throughput-benchmark.rst
+++ b/docs/articles_en/get-started/learn-openvino/openvino-samples/throughput-benchmark.rst
@@ -49,7 +49,7 @@ Then, it processes and reports performance results.

 You can see the explicit description of each sample step at
-:doc:`Integration Steps <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+:doc:`Integration Steps <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 section of "Integrate OpenVINO™ Runtime with Your Application" guide.

 Running
@@ -167,9 +167,9 @@ Sample Output

 Additional Resources
 ####################

-- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
+- :doc:`Integrate the OpenVINO™ Runtime with Your Application <../../../openvino-workflow/running-inference/integrate-openvino-with-your-application>`
 - :doc:`Get Started with Samples `
 - :doc:`Using OpenVINO Samples <../openvino-samples>`
-- :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>`
+- :doc:`Convert a Model <../../../openvino-workflow/model-preparation/convert-model-to-ir>`
 - `Throughput Benchmark Python Sample on Github `__
 - `Throughput Benchmark C++ Sample on Github `__
diff --git a/docs/articles_en/openvino-workflow/model-preparation/convert-model-onnx.rst b/docs/articles_en/openvino-workflow/model-preparation/convert-model-onnx.rst
index 9d1f0a3e0d754a..e2a968f7107630 100644
--- a/docs/articles_en/openvino-workflow/model-preparation/convert-model-onnx.rst
+++ b/docs/articles_en/openvino-workflow/model-preparation/convert-model-onnx.rst
@@ -69,5 +69,5 @@ Additional Resources
 ####################

 Check out more examples of model conversion in
-:doc:`interactive Python tutorials <../../learn-openvino/interactive-tutorials-python>`.
+:doc:`interactive Python tutorials <../../get-started/learn-openvino/interactive-tutorials-python>`.
diff --git a/docs/articles_en/openvino-workflow/model-preparation/convert-model-paddle.rst b/docs/articles_en/openvino-workflow/model-preparation/convert-model-paddle.rst
index b91af11c012566..2808a18e6759e2 100644
--- a/docs/articles_en/openvino-workflow/model-preparation/convert-model-paddle.rst
+++ b/docs/articles_en/openvino-workflow/model-preparation/convert-model-paddle.rst
@@ -160,5 +160,5 @@ Additional Resources
 ####################

 Check out more examples of model conversion in
-:doc:`interactive Python tutorials <../../learn-openvino/interactive-tutorials-python>`.
+:doc:`interactive Python tutorials <../../get-started/learn-openvino/interactive-tutorials-python>`.
diff --git a/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst b/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst
index fa1b6b733bb548..9a00aee059f6d2 100644
--- a/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst
+++ b/docs/articles_en/openvino-workflow/model-preparation/convert-model-pytorch.rst
@@ -97,7 +97,7 @@ inference in the existing PyTorch application to OpenVINO and how to get value f
        category_name = weights.meta["categories"][class_id]
        print(f"{category_name}: {100 * score:.1f}% (with OpenVINO)")

-Check out more examples in :doc:`interactive Python tutorials <../../learn-openvino/interactive-tutorials-python>`.
+Check out more examples in :doc:`interactive Python tutorials <../../get-started/learn-openvino/interactive-tutorials-python>`.

 .. note::
diff --git a/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst b/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst
index d3d5b0cce92d89..8381112683b52f 100644
--- a/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst
+++ b/docs/articles_en/openvino-workflow/running-inference/integrate-openvino-with-your-application.rst
@@ -415,10 +415,10 @@ For details on additional CMake build options, refer to the `CMake page `__
-* See the :doc:`OpenVINO Samples <../../learn-openvino/openvino-samples>` page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
+* See the :doc:`OpenVINO Samples <../../get-started/learn-openvino/openvino-samples>` page for specific examples of how OpenVINO pipelines are implemented for applications like image classification, text prediction, and many others.
 * Models in the OpenVINO IR format on `Hugging Face `__.
 * :doc:`OpenVINO™ Runtime Preprocessing `
 * :doc:`String Tensors `