From 2f96b57d082532b46a34acca13f9ce0d89187949 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 20 Nov 2024 14:49:15 +0100
Subject: [PATCH] [DOCS] post-release polishes no1

---
 docs/articles_en/about-openvino/release-notes-openvino.rst    | 2 +-
 .../release-notes-openvino/system-requirements.rst            | 4 ++--
 .../learn-openvino/llm_inference_guide/genai-guide-npu.rst    | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/articles_en/about-openvino/release-notes-openvino.rst b/docs/articles_en/about-openvino/release-notes-openvino.rst
index c25540464215f0..6ccab0f0359b06 100644
--- a/docs/articles_en/about-openvino/release-notes-openvino.rst
+++ b/docs/articles_en/about-openvino/release-notes-openvino.rst
@@ -32,7 +32,7 @@ What's new
 
 * New models supported: Llama 3.2 (1B & 3B), Gemma 2 (2B & 9B), and YOLO11.
 * LLM support on NPU: Llama 3 8B, Llama 2 7B, Mistral-v0.2-7B, Qwen2-7B-Instruct and Phi-3
-  Mini-Instruct.
+  Mini-Instruct.
 * Noteworthy notebooks added: Sam2, Llama3.2, Llama3.2 - Vision, Wav2Lip, Whisper, and Llava.
 * Preview: support for Flax, a high-performance Python neural network library based on JAX.
   Its modular design allows for easy customization and accelerated inference on GPUs.
diff --git a/docs/articles_en/about-openvino/release-notes-openvino/system-requirements.rst b/docs/articles_en/about-openvino/release-notes-openvino/system-requirements.rst
index a12cacf8402953..79a9f63821c16f 100644
--- a/docs/articles_en/about-openvino/release-notes-openvino/system-requirements.rst
+++ b/docs/articles_en/about-openvino/release-notes-openvino/system-requirements.rst
@@ -37,7 +37,7 @@ CPU
 
       * Ubuntu 20.04 long-term support (LTS), 64-bit (Kernel 5.15+)
       * macOS 12.6 and above, 64-bit and ARM64
       * CentOS 7
-      * Red Hat Enterprise Linux 9.3-9.4, 64-bit
+      * Red Hat Enterprise Linux (RHEL) 8 and 9, 64-bit
       * openSUSE Tumbleweed, 64-bit and ARM64
       * Ubuntu 20.04 ARM64
 
@@ -65,7 +65,7 @@ GPU
 
       * Ubuntu 22.04 long-term support (LTS), 64-bit
       * Ubuntu 20.04 long-term support (LTS), 64-bit
       * CentOS 7
-      * Red Hat Enterprise Linux 9.3-9.4, 64-bit
+      * Red Hat Enterprise Linux (RHEL) 8 and 9, 64-bit
 
    .. tab-item:: Additional considerations
 
diff --git a/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide-npu.rst b/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide-npu.rst
index 95e5a9d343c0c8..9eb33b89d796c9 100644
--- a/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide-npu.rst
+++ b/docs/articles_en/learn-openvino/llm_inference_guide/genai-guide-npu.rst
@@ -20,8 +20,8 @@ Install required dependencies:
    pip install nncf==2.12 onnx==1.16.1 optimum-intel==1.19.0
    pip install openvino==2024.5 openvino-tokenizers==2024.5 openvino-genai==2024.5
 
-NOTE that for systems based on Intel® Core Ultra Processors Series 2 and 16 GB of RAM,
-prompts longer then 1024 characters will not work with a model of 7B or more parameters,
+Note that for systems based on Intel® Core™ Ultra Processors Series 2, more than 16GB of RAM may
+be required to run prompts over 1024 tokens on models exceeding 7B parameters,
 such as Llama-2-7B, Mistral-0.2-7B, and Qwen-2-7B.
 
 Export an LLM model via Hugging Face Optimum-Intel
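
For context, the NPU guide changed by the last hunk pins specific package versions and then walks through exporting an LLM with Hugging Face Optimum-Intel and running it through OpenVINO GenAI. Below is a minimal sketch of that run step, assuming a model has already been exported to a local directory; the directory name and prompt are placeholders, not taken from the patch.

.. code-block:: python

   # Minimal sketch: run an exported LLM on the NPU with OpenVINO GenAI.
   # Assumes the packages pinned above are installed and that
   # "Llama-2-7b-chat-hf-int4-ov" is a local directory produced by an
   # earlier optimum-cli export (placeholder name).
   import openvino_genai

   pipe = openvino_genai.LLMPipeline("Llama-2-7b-chat-hf-int4-ov", "NPU")

   # Keep prompts modest on 16 GB systems; the patched note warns that long
   # prompts on 7B+ models may need more memory.
   print(pipe.generate("What is OpenVINO?", max_new_tokens=100))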