Update docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst
sgolebiewski-intel authored Dec 12, 2024
1 parent 8a22493 commit 3ce9335
Showing 1 changed file with 1 addition and 1 deletion.
@@ -139,7 +139,7 @@ To check in advance if a particular device supports model caching, your applicat
Set "cache_encryption_callbacks" config option to enable cache encryption
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

- If model caching is enabled in the CPU Plugin, the model topology can be encrypted while it is saved to the cache and decrypted when it is loaded from the cache. This property can currently be set only in ``compile_model``.
+ If model caching is enabled in the CPU Plugin, the model topology can be encrypted while it is saved to the cache and decrypted when it is loaded from the cache. Currently, this property can be set only in ``compile_model``.

.. tab-set::

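As a rough illustration of the callback pair the edited sentence refers to: the encryption property takes a matching encrypt/decrypt function pair that the plugin applies when writing and reading the cached topology. The sketch below is an assumption, not the documented API — the XOR codec is a toy stand-in for a real cipher, and the commented ``compile_model`` call (``ov.properties.cache_encryption_callbacks``) should be checked against the current OpenVINO docs before use.

```python
# Minimal sketch of a cache-encryption callback pair (assumed shape:
# two callables, encrypt and decrypt, that are exact inverses).

def make_codec(key: bytes):
    """Return a toy (encrypt, decrypt) pair; XOR is NOT a real cipher."""
    def xor(data: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    # XOR with a fixed key is symmetric, so one transform serves both roles.
    return xor, xor

encrypt, decrypt = make_codec(b"secret-key")

# The plugin encrypts the topology on save and decrypts it on load,
# so the pair must round-trip any blob exactly.
topology = b"<serialized model topology>"
restored = decrypt(encrypt(topology))
print(restored == topology)

# With OpenVINO installed, the pair would be passed at compile time,
# e.g. (assumed property name, verify against the installed version):
# compiled = core.compile_model(model, "CPU", {
#     ov.properties.cache_dir(): "model_cache",
#     ov.properties.cache_encryption_callbacks(): [encrypt, decrypt],
# })
```

In a real deployment the toy XOR pair would be replaced with an actual cipher (e.g. AES via a crypto library); the only contract the sketch relies on is that ``decrypt`` inverts ``encrypt`` byte-for-byte.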
