From d877b07c94e01db09cc06ccf6cffe55796294e86 Mon Sep 17 00:00:00 2001
From: Naushir Patuck
Date: Thu, 3 Oct 2024 07:53:57 +0100
Subject: [PATCH] ai camera: Fix some typos

Fix broken links to the source files and update the helper class names
to match the latest code.
---
 documentation/asciidoc/accessories/ai-camera/details.adoc | 8 ++++----
 .../asciidoc/accessories/ai-camera/getting-started.adoc   | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/documentation/asciidoc/accessories/ai-camera/details.adoc b/documentation/asciidoc/accessories/ai-camera/details.adoc
index 995bee808..407a48f1f 100644
--- a/documentation/asciidoc/accessories/ai-camera/details.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/details.adoc
@@ -83,14 +83,14 @@ struct CnnInputTensorInfo {
 
 === `rpicam-apps`
 
-`rpicam-apps` provides an IMX500 post-processing stage base class that implements helpers for IMX500 post-processing stages: https://github.com/raspberrypi/rpicam-apps/blob/post_processing_stages/imx500_post_processing_stage.hpp[`IMX500PostProcessingStage`]. Use this base class to derive a new post-processing stage for any neural network model running on the IMX500. For an example, see https://github.com/raspberrypi/rpicam-apps/blob/post_processing_stages/imx500_mobilenet_ssd.cpp[`imx500_mobilenet_ssd.cpp`]:
+`rpicam-apps` provides an IMX500 post-processing stage base class that implements helpers for IMX500 post-processing stages: https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/imx500/imx500_post_processing_stage.hpp[`IMX500PostProcessingStage`]. Use this base class to derive a new post-processing stage for any neural network model running on the IMX500. For an example, see https://github.com/raspberrypi/rpicam-apps/blob/main/post_processing_stages/imx500/imx500_object_detection.cpp[`imx500_object_detection.cpp`]:
 
 [source,cpp]
 ----
-class ObjectInference : public IMX500PostProcessingStage
+class ObjectDetection : public IMX500PostProcessingStage
 {
 public:
-    ObjectInference(RPiCamApp *app) : IMX500PostProcessingStage(app) {}
+    ObjectDetection(RPiCamApp *app) : IMX500PostProcessingStage(app) {}
 
     char const *Name() const override;
 
@@ -102,7 +102,7 @@ public:
 };
 ----
 
-For every frame received by the application, the `Process()` function is called (`ObjectInference::Process()` in the above case). In this function, you can extract the output tensor for further processing or analysis:
+For every frame received by the application, the `Process()` function is called (`ObjectDetection::Process()` in the above case). In this function, you can extract the output tensor for further processing or analysis:
 
 [source,cpp]
 ----
diff --git a/documentation/asciidoc/accessories/ai-camera/getting-started.adoc b/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
index bb2c19e6e..62ab96702 100644
--- a/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
+++ b/documentation/asciidoc/accessories/ai-camera/getting-started.adoc
@@ -56,7 +56,7 @@ The MobileNet SSD neural network performs basic object detection, providing boun
 
 `imx500_mobilenet_ssd.json` declares a post-processing pipeline that contains two stages:
 
-. `imx500_mobilenet_ssd`, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
+. `imx500_object_detection`, which picks out bounding boxes and confidence values generated by the neural network in the output tensor
 . `object_detect_draw_cv`, which draws bounding boxes and labels on the image
 
 The MobileNet SSD tensor requires no significant post-processing on your Raspberry Pi to generate the final output of bounding boxes. All object detection runs directly on the AI Camera.
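
The corrected paragraph in details.adoc says that `Process()` is called for every frame and is the place to extract the output tensor. As a rough, hypothetical illustration of that flow only (the include path follows the corrected links above, but the metadata key name, the `Get()` call, and the inline bodies are assumptions rather than the actual rpicam-apps code; see the linked `imx500_object_detection.cpp` for the real implementation):

[source,cpp]
----
// Illustrative sketch only; not the code from imx500_object_detection.cpp.
// The metadata key "cnn.output_tensor" and the Get() call are assumptions
// about the rpicam-apps API, used here just to show the Process() hook.
#include <string>
#include <vector>

#include "post_processing_stages/imx500/imx500_post_processing_stage.hpp"

class ObjectDetection : public IMX500PostProcessingStage
{
public:
    ObjectDetection(RPiCamApp *app) : IMX500PostProcessingStage(app) {}

    // Stage name used in the post-processing JSON pipeline.
    char const *Name() const override { return "imx500_object_detection"; }

    // Called once per frame delivered by the application.
    bool Process(CompletedRequestPtr &completed_request) override
    {
        std::vector<float> output_tensor;

        // Pull the tensor the IMX500 attached to this frame's metadata
        // (hypothetical key name).
        if (completed_request->metadata.Get("cnn.output_tensor", output_tensor))
            return false; // no tensor delivered with this frame

        // ...parse bounding boxes and confidence values from output_tensor...
        return false; // keep the frame in the pipeline
    }
};
----

Note that, as the getting-started text states, the network itself runs on the IMX500, so a stage like this mostly parses a tensor that has already been computed on the camera rather than doing any inference on the Raspberry Pi.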