From 862eefc2b3641d930323a6449993d66b9888bf7f Mon Sep 17 00:00:00 2001
From: nate contino
Date: Mon, 30 Sep 2024 04:40:18 +0100
Subject: [PATCH 1/3] Fix model conversion steps a tiny bit

---
 .../asciidoc/accessories/ai-camera/model-conversion.adoc | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc b/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc
index acf4c4ae6..78cb13cb6 100644
--- a/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/model-conversion.adoc
@@ -31,11 +31,9 @@ The Model Compression Toolkit generates a quantised model in the following forma
 * Keras (TensorFlow)
 * ONNX (PyTorch)
 
-
 === Conversion
 
-To convert a model, first install the converter tools.
-
+To convert a model, first install the converter tools:
 
 [tabs%sync]
 ======
@@ -96,7 +94,7 @@ To package the model into an RPK file, run the following command:
 
 [source,console]
 ----
-imx500-package.sh -i -o
+$ imx500-package.sh -i -o
 ----
 
 This command should create a file named `network.rpk` in the output folder. You'll pass the name of this file to your IMX500 camera applications.

From bce67e5a29e0ecadffbb38a8f964082bf0edaad5 Mon Sep 17 00:00:00 2001
From: nate contino
Date: Mon, 30 Sep 2024 04:41:35 +0100
Subject: [PATCH 2/3] Update for consistency in term usage

---
 documentation/asciidoc/accessories/ai-camera/details.adoc | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/documentation/asciidoc/accessories/ai-camera/details.adoc b/documentation/asciidoc/accessories/ai-camera/details.adoc
index 6b70b1360..995bee808 100644
--- a/documentation/asciidoc/accessories/ai-camera/details.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/details.adoc
@@ -9,15 +9,15 @@ image::images/imx500-comparison.svg[Traditional versus IMX500 AI camera systems]
 
 The left side demonstrates the architecture of a traditional AI camera system. In such a system, the camera delivers images to the Raspberry Pi. The Raspberry Pi processes the images and then performs AI inference. Traditional systems may use external AI accelerators (as shown) or rely exclusively on the CPU.
 
-The right side demonstrates the architecture of a system that uses IMX500. The camera module contains a small Image Signal Processor (ISP) which turns the raw camera image data into an _input tensor_. The camera module sends this tensor directly into the AI accelerator within the camera, which produces an _output tensor_ that contains the inferencing results. The AI accelerator sends this tensor to the Raspberry Pi. There is no need for an external accelerator, nor for the Raspberry Pi to run neural network software on the CPU.
+The right side demonstrates the architecture of a system that uses IMX500. The camera module contains a small Image Signal Processor (ISP) which turns the raw camera image data into an **input tensor**. The camera module sends this tensor directly into the AI accelerator within the camera, which produces an **output tensor** that contains the inferencing results. The AI accelerator sends this tensor to the Raspberry Pi. There is no need for an external accelerator, nor for the Raspberry Pi to run neural network software on the CPU.
 
 To fully understand this system, familiarise yourself with the following concepts:
 
-The _Input Tensor_:: The part of the sensor image passed to the AI engine for inferencing. Produced by a small on-board ISP which also crops and scales the camera image to the dimensions expected by the neural network that has been loaded. The input tensor is not normally made available to applications, though it is possible to access it for debugging purposes.
+Input Tensor:: The part of the sensor image passed to the AI engine for inferencing. Produced by a small on-board ISP which also crops and scales the camera image to the dimensions expected by the neural network that has been loaded. The input tensor is not normally made available to applications, though it is possible to access it for debugging purposes.
 
-The _Region of Interest_ (ROI):: Specifies exactly which part of the sensor image is cropped out before being rescaled to the size demanded by the neural network. Can be queried and set by an application. The units used are always pixels in the full resolution sensor output. The default ROI setting uses the full image received from the sensor, cropping no data.
+Region of Interest (ROI):: Specifies exactly which part of the sensor image is cropped out before being rescaled to the size demanded by the neural network. Can be queried and set by an application. The units used are always pixels in the full resolution sensor output. The default ROI setting uses the full image received from the sensor, cropping no data.
 
-The _Output Tensor_:: The results of inferencing performed by the neural network. The precise number and shape of the outputs depend on the neural network. Application code must understand how to handle the tensor.
+Output Tensor:: The results of inferencing performed by the neural network. The precise number and shape of the outputs depend on the neural network. Application code must understand how to handle the tensor.
 
 === System Architecture
 

From 21b99dfbf5f573e90d55fda80ba4942552b148ad Mon Sep 17 00:00:00 2001
From: nate contino
Date: Mon, 30 Sep 2024 04:42:00 +0100
Subject: [PATCH 3/3] Update about.adoc

---
 documentation/asciidoc/accessories/ai-camera/about.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/documentation/asciidoc/accessories/ai-camera/about.adoc b/documentation/asciidoc/accessories/ai-camera/about.adoc
index af1802e7d..9c7a5deec 100644
--- a/documentation/asciidoc/accessories/ai-camera/about.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/about.adoc
@@ -1,7 +1,7 @@
 [[ai-camera]]
 == About
 
-The Raspberry Pi AI Camera uses the Sony IMX500 imaging sensor to provide low-latency and high-performance AI capabilities to any camera application. The tight integration with https://www.raspberrypi.com/documentation/computers/camera_software.adoc[Raspberry Pi's camera software stack] allows users to deploy their own neural network models with minimal effort.
+The Raspberry Pi AI Camera uses the Sony IMX500 imaging sensor to provide low-latency and high-performance AI capabilities to any camera application. Tight integration with https://www.raspberrypi.com/documentation/computers/camera_software.adoc[Raspberry Pi's camera software stack] allows users to deploy their own neural network models with minimal effort.
 
 image::images/ai-camera.png[The Raspberry Pi AI Camera]
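
The `network.rpk` produced by the packaging step in the first patch is consumed by IMX500-aware camera applications through a post-processing configuration. A minimal sketch of trying out a packaged model with `rpicam-apps`, assuming the stock `imx500_mobilenet_ssd.json` file shipped in `rpi-camera-assets` (to run your own model, point the network file entry of a copy of that JSON at your `network.rpk`; the exact entry name depends on the post-processing stage in use):

[source,console]
----
$ rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json
----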