Merge branch 'master' into ci/gha/debian-arm-cpp-tests
akashchi authored Oct 8, 2024
2 parents 5be408b + 46a6ccd commit 739bdf4
Showing 27 changed files with 416 additions and 243 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -52,7 +52,7 @@ product better.
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
its support for new hardware. If you want to run inference on a device that is currently not supported,
you can see how to develop a new plugin for it in the
-[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
+[Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html).


### Improve documentation
@@ -15,19 +15,7 @@ Install Intel® Distribution of OpenVINO™ Toolkit from PyPI Repository
* is dedicated to users of all major OSes: Windows, Linux, and macOS
(all x86_64 / arm64 architectures)
* macOS offers support only for CPU inference


.. tab-set::

@@ -42,10 +30,13 @@

   .. tab-item:: Processor Notes
      :sync: processor-notes

      | To see if your processor includes the integrated graphics technology and supports iGPU inference, refer to:
      | `Product Specifications <https://ark.intel.com/>`__



Installing OpenVINO Runtime
###########################

@@ -137,20 +128,47 @@ to see if your case needs any of them.





| **Simplified Build and Integration**
| The package includes CMake configurations, precompiled static libraries, and headers, which
  can be easily accessed through the Python API. You can use the `get_cmake_path()` method to
  retrieve the paths to the CMake configurations and libraries:

.. code-block:: python

   from openvino import get_cmake_path
   cmake_path = get_cmake_path()

For detailed instructions on how to use these configurations in your build setup, check out the
:ref:`Create a library with extensions <create_a_library_with_extensions>` section.
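
A minimal sketch of how the retrieved path might be wired into a CMake build, assuming a
hypothetical extension project (the ``my_extension_project`` and ``build`` names below are
placeholders, not part of the OpenVINO API):

.. code-block:: python

   import subprocess

   from openvino import get_cmake_path

   # Configure a hypothetical CMake project against the OpenVINO package bundled
   # with the wheel; find_package(OpenVINO) resolves through OpenVINO_DIR.
   subprocess.run(
       [
           "cmake",
           f"-DOpenVINO_DIR={get_cmake_path()}",
           "-S", "my_extension_project",  # placeholder source directory
           "-B", "build",                 # placeholder build directory
       ],
       check=True,
   )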







What's Next?
####################

Now that you've installed OpenVINO Runtime, you're ready to run your own machine learning applications! Learn more about how to integrate a model in OpenVINO applications by trying out the following tutorials.

.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
   :width: 400

Try the `Python Quick Start Example <https://docs.openvino.ai/2024/notebooks/vision-monodepth-with-output.html>`__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.


Get started with Python
+++++++++++++++++++++++

Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:

* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2024/notebooks/openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2024/notebooks/hello-world-with-output.html>`__
4 changes: 3 additions & 1 deletion docs/sphinx_setup/conf.py
@@ -55,7 +55,9 @@
'.md': 'markdown',
}

-# html_baseurl = 'https://docs.openvino.ai/canonical/'
+# html_baseurl = 'https://docs.openvino.ai/2024/'


# -- Sitemap configuration ---------------------------------------------------

@@ -4,14 +4,14 @@
# flake8: noqa
# mypy: ignore-errors

+import logging
+import torch

from openvino.frontend.pytorch.py_pytorch_frontend import _FrontEndPytorchDecoder as Decoder
from openvino.frontend.pytorch.py_pytorch_frontend import _Type as DecoderType
-from openvino.runtime import op, PartialShape, Type as OVType, OVAny, Shape
+from openvino.runtime import PartialShape, Type as OVType, OVAny, Shape
from openvino.frontend.pytorch.utils import make_constant, fetch_attr, pt_to_ov_type_map, torch_tensor_to_ov_const

-import torch
-
-import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

@@ -46,7 +46,9 @@ def convolution_backward(

return grad_input, grad_weight, grad_bias


if len(get_decompositions([aten._scaled_dot_product_flash_attention.default])) == 0:

    @register_decomposition(aten._scaled_dot_product_flash_attention.default)
    def scaled_dot_product_flash_attention(
        query,
@@ -101,16 +103,197 @@ def scaled_dot_product_flash_attention(


def get_aot_decomposition_list():
    return [
        torch.ops.aten._scaled_dot_product_flash_attention.default,
        torch.ops.aten._softmax.default,
        torch.ops.aten._softmax_backward_data.default,
        torch.ops.aten.convolution_backward.default,
        torch.ops.aten.gelu_backward.default,
        torch.ops.aten.native_group_norm.default,
        torch.ops.aten.native_group_norm_backward.default,
        torch.ops.aten.native_layer_norm.default,
        torch.ops.aten.native_layer_norm_backward.default,
        torch.ops.aten.slice_backward.default,
    ]


def get_inf_decomposition_list():
    return [torch.ops.aten.nll_loss_forward.default]


def get_export_decomposition_list():
# List of decompositions from torch._decomp.core_aten_decompositions,
# with _backward ops and ops supported without decomposition removed.
decomp = [
torch.ops.aten.addcdiv,
torch.ops.aten.addcdiv_,
torch.ops.aten.addcmul,
torch.ops.aten.addcmul_,
torch.ops.aten.addr,
torch.ops.aten.affine_grid_generator,
torch.ops.aten.all,
torch.ops.aten.aminmax,
torch.ops.aten.arange.default,
torch.ops.aten.arange.start,
torch.ops.aten.baddbmm,
torch.ops.aten.binary_cross_entropy,
torch.ops.aten.binary_cross_entropy_with_logits,
torch.ops.aten.block_diag,
torch.ops.aten.celu,
torch.ops.aten.celu_,
torch.ops.aten.clamp_max,
torch.ops.aten.clamp_min,
torch.ops.aten.count_nonzero,
torch.ops.aten.linalg_cross,
torch.ops.aten.cudnn_batch_norm,
torch.ops.aten.deg2rad,
torch.ops.aten.deg2rad_,
torch.ops.aten.detach,
torch.ops.aten.diag_embed,
torch.ops.aten.dot,
torch.ops.aten.vdot,
torch.ops.aten.elu,
torch.ops.aten.elu_,
torch.ops.aten._embedding_bag,
torch.ops.aten.empty_like,
torch.ops.aten._euclidean_dist.default,
torch.ops.aten.expand_as,
torch.ops.aten.eye,
torch.ops.aten.fill,
torch.ops.aten.fill_,
torch.ops.aten.floor_divide,
torch.ops.aten.frac,
torch.ops.aten.frac_,
torch.ops.aten._fused_moving_avg_obs_fq_helper,
torch.ops.aten.gelu_,
torch.ops.aten.glu,
torch.ops.aten.hardshrink,
torch.ops.aten.hardsigmoid,
torch.ops.aten.hardsigmoid_,
torch.ops.aten.hardswish,
torch.ops.aten.hardswish_,
torch.ops.aten.hardtanh_,
torch.ops.aten.heaviside,
torch.ops.aten.heaviside_,
torch.ops.aten.huber_loss,
torch.ops.aten.im2col,
torch.ops.aten.index_add,
torch.ops.aten.index_add_,
torch.ops.aten.index_copy,
torch.ops.aten.index_copy_,
torch.ops.aten.index_fill,
torch.ops.aten.index_fill_,
torch.ops.aten.isin,
torch.ops.aten.isneginf,
torch.ops.aten.isposinf,
torch.ops.aten.l1_loss,
torch.ops.aten.leaky_relu_,
torch.ops.aten.lerp,
torch.ops.aten.lerp_,
torch.ops.aten.linspace,
torch.ops.aten.logaddexp,
torch.ops.aten.logaddexp2,
torch.ops.aten.logit,
torch.ops.aten.logit_,
torch.ops.aten.log_sigmoid_forward,
torch.ops.aten.logspace,
torch.ops.aten.logsumexp.default,
torch.ops.aten.masked_fill,
torch.ops.aten.masked_fill_,
torch.ops.aten.mish,
torch.ops.aten.mish_,
torch.ops.aten.mse_loss,
torch.ops.aten.multi_margin_loss,
torch.ops.aten.multilabel_margin_loss_forward,
torch.ops.aten.mv,
torch.ops.aten.mvlgamma,
torch.ops.aten.mvlgamma_,
torch.ops.aten.nansum,
torch.ops.aten.nan_to_num,
torch.ops.aten.nan_to_num_,
torch.ops.aten.narrow,
torch.ops.aten.new_empty,
torch.ops.aten.new_full,
torch.ops.aten.new_ones,
torch.ops.aten.new_zeros,
torch.ops.aten.nll_loss_forward,
torch.ops.aten.norm,
torch.ops.aten.ones,
torch.ops.aten.ones_like,
torch.ops.aten._prelu_kernel,
torch.ops.aten._reshape_alias,
torch.ops.aten.rad2deg,
torch.ops.aten.rad2deg_,
torch.ops.aten.reflection_pad1d,
torch.ops.aten.reflection_pad2d,
torch.ops.aten.reflection_pad3d,
torch.ops.aten.replication_pad1d,
torch.ops.aten.replication_pad2d,
torch.ops.aten.replication_pad3d,
torch.ops.aten.renorm,
torch.ops.aten.renorm_,
torch.ops.aten.resize_as,
torch.ops.aten.roll,
torch.ops.aten.rot90,
torch.ops.aten.rrelu_with_noise,
torch.ops.aten.rrelu_with_noise_,
torch.ops.aten.rsub,
torch.ops.aten.select_scatter,
torch.ops.aten.sgn,
torch.ops.aten.sgn_,
torch.ops.aten.silu,
torch.ops.aten.silu_,
torch.ops.aten.sinc,
torch.ops.aten.sinc_,
torch.ops.aten.smooth_l1_loss,
torch.ops.aten.soft_margin_loss,
torch.ops.aten.softplus,
torch.ops.aten.softshrink,
torch.ops.aten.special_entr,
torch.ops.aten.special_log_ndtr,
torch.ops.aten.special_xlog1py,
torch.ops.aten.split.Tensor,
torch.ops.aten.split_with_sizes_copy,
torch.ops.aten.squeeze.default,
torch.ops.aten.squeeze.dim,
torch.ops.aten.std,
torch.ops.aten.std_mean,
torch.ops.aten.stack,
torch.ops.aten.sum.default,
torch.ops.aten.sum.out,
torch.ops.aten.t,
torch.ops.aten.take,
torch.ops.aten.threshold,
torch.ops.aten.threshold_,
torch.ops.aten.trace,
torch.ops.aten.transpose.int,
torch.ops.aten.tril,
torch.ops.aten.tril_,
torch.ops.aten.triu,
torch.ops.aten.triu_,
torch.ops.aten.unbind,
torch.ops.aten.unfold_copy,
torch.ops.aten._unsafe_index,
torch.ops.aten.unsafe_split.Tensor,
torch.ops.aten.unsafe_split_with_sizes,
torch.ops.aten._unsafe_view,
torch.ops.aten.view_as_complex,
torch.ops.aten.xlogy,
torch.ops.aten.xlogy_,
torch.ops.aten.zero,
torch.ops.aten.zero_,
torch.ops.aten.zeros,
torch.ops.aten.zeros_like,
torch.ops.aten._weight_norm_interface,
]
try:
from packaging import version
if version.parse(torch.__version__) >= version.parse("2.3"):
decomp += [
torch.ops.aten._lazy_clone,
torch.ops.aten._test_parallel_materialize,
torch.ops.aten._chunk_cat,
]
except ImportError:
pass
return decomp
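
A brief usage sketch (illustrative only, not part of the change above), assuming it runs in
the same module so `get_export_decomposition_list` is in scope: decomposition lists like this
are typically expanded into an op-to-function table via `torch._decomp.get_decompositions`:

.. code-block:: python

   from torch._decomp import get_decompositions

   # Expand the op list into an {op_overload: decomposition_fn} mapping that
   # a torch.compile backend can apply while tracing.
   decomposition_table = get_decompositions(get_export_decomposition_list())
   print(f"{len(decomposition_table)} export decompositions registered")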
@@ -143,6 +143,7 @@ bool ov::pass::MOCTransformations::run_on_model(const std::shared_ptr<ov::Model>
// In particular, if a zero-dim tensor is consumed in the body of a MultiSubGraphOp,
// RemoveConcatZeroDimInput and RemoveMultiSubGraphOpDanglingParamsResults should be called together.
using namespace ov::pass;
+REGISTER_PASS(manager, EliminateConvert)
REGISTER_PASS(manager, EliminateScatterUpdate)
REGISTER_PASS(manager, RemoveConcatZeroDimInput)
REGISTER_PASS(manager, EliminateLoopInputsOutputs);
@@ -39,6 +39,22 @@ TEST(TransformationTests, TestModelTensorsConsistencyUseShapesTrue) {
EXPECT_TRUE(model->outputs()[0].get_names() == new_tensors);
}

TEST(TransformationTests, MOCConvertElimination) {
    auto input = std::make_shared<opset12::Parameter>(element::f32, Shape{1});
    auto const_val = opset12::Constant::create(element::f32, Shape{1}, {2});

    auto add1 = std::make_shared<opset12::Add>(input, const_val);
    auto convert_fp32 = std::make_shared<opset12::Convert>(const_val, element::f32);
    auto mul = std::make_shared<opset12::MatMul>(add1, convert_fp32);

    auto model = std::make_shared<Model>(NodeVector{mul}, ParameterVector{input});
    ov::pass::Manager m;
    m.register_pass<ov::pass::MOCTransformations>(false);
    m.run_passes(model);

    EXPECT_EQ(count_ops_of_type<opset12::Constant>(model), 1);
}

TEST(TransformationTests, TestModelTensorsConsistencyUseShapesFalse) {
auto input = std::make_shared<opset12::Parameter>(element::f32, Shape{1});
auto const1 = opset12::Constant::create(element::f32, Shape{1}, {1});