I'm trying to build an application that takes a video feed from a camera (with GStreamer) and runs object-detection inference with YOLOv5 on the NPU. I need 4 instances of this application (one per camera, 4 cameras in total).

The GStreamer part works flawlessly: with 4 terminals each running the GStreamer pipeline (with hardware acceleration), there is no kernel panic.

The problem appears when I also run the inference:

- with one inference (one camera) it runs without any problem
- with two or more inferences it goes straight into a kernel panic

The only workaround I've found is to limit the board's RAM to 4GB, but that is too little for what I have to do (in the future there will be other workloads besides the inference).

I'm using rknpu2 to access the NPU.

I know the issue is caused by the RGA (which explains why limiting memory to 4GB helps), but I don't know how to fix it. Maybe some kind of DMA32 allocation, but I have no idea how to implement it.

Is there a fix for my issue?

Here is some info:

As you can see, I've tried to use the DMA allocation from the example, but it gave me this error:

I'm also willing to modify the kernel source code if needed.
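On the DMA32 idea: on kernels that expose DMA heaps, one way to keep RGA-visible buffers below the 4GB physical boundary is to allocate them from a `*-dma32` heap through the standard Linux dma-heap UAPI, then hand the resulting dma-buf fd to RGA/rknpu2 instead of a heap pointer. This is only a sketch under the assumption that your kernel exposes such a heap; the heap name `system-uncached-dma32` is a guess, so check `ls /dev/dma_heap/` on your board first.

```cpp
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

// Allocate `size` bytes from a DMA32 heap; returns a dma-buf fd or -1.
// Heap name is an assumption: look in /dev/dma_heap/ on your board for
// a *-dma32 heap (buffers with physical addresses below 4GB).
static int dma32_alloc(size_t size) {
    int heap_fd = open("/dev/dma_heap/system-uncached-dma32",
                       O_RDWR | O_CLOEXEC);
    if (heap_fd < 0) {
        perror("open dma_heap");
        return -1;
    }

    struct dma_heap_allocation_data data;
    memset(&data, 0, sizeof(data));
    data.len = size;
    data.fd_flags = O_RDWR | O_CLOEXEC;

    int ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
    close(heap_fd);
    if (ret < 0) {
        perror("DMA_HEAP_IOCTL_ALLOC");
        return -1;
    }
    return (int)data.fd;  // dma-buf fd, importable by RGA / rknpu2
}
```

The fd returned here plays the same role as the one produced by the `dma_alloc.hpp` helper in the rknpu2 examples; the only difference is which heap it comes from.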
[kernel panic log attachment]
[program error output attachment]
Code that handles the inference:
```cpp
#pragma once

#include <cstring>  // memset
#include <string>

#include <opencv2/core/mat.hpp>  // cv::Mat

#include "yolov5.h"
#include "image_utils.h"
#include "file_utils.h"
#include "image_drawing.h"
#include "dma_alloc.hpp"

class YoloModel {
public:
    bool loaded = false;
    rknn_app_context_t rknn_app_ctx;

    YoloModel(std::string model_path) {
        memset(&rknn_app_ctx, 0, sizeof(rknn_app_context_t));
        // model loading (e.g. init_yolov5_model() from the
        // rknn_model_zoo example) omitted in this snippet
    }

    bool infer(cv::Mat cvImage) {
        if (!loaded)
            return false;
        // inference call omitted in this snippet
        return true;
    }
};
```