[Bug]: High RAM usage in iGPU #28009
Comments
Hi @yaniv5678, how did you check the memory usage when compiling the model with CPU and GPU? Meanwhile, GPU performance relies on the OpenCL kernels used for the implementation; you can refer to the GPU Performance Checklist.
Hi @Aznie-Intel! Thanks for your prompt response. I checked using the Task Manager. I made sure to only read and compile the model, and then put the process to sleep, to be sure the usage isn't coming from any other code.
I've gone through the GPU Performance Checklist, thanks. Do you know what the memory layout of these ~500MB of RAM is?
@yaniv5678 Below are my observations for both CPU and GPU. There is no significant difference in memory consumption between CPU and GPU. Can you provide some additional information for further investigation?
@yaniv5678 Thanks for the information. I will check this with the relevant team and update you soon.
Ref. 159902
Hi, please try now with the latest master version. The issue has been partially solved with #28167.

Generally, it turns out that these kinds of models make lots of small allocations, and the driver's allocation alignment on my Windows machine is 64 KB, which means every small allocation consumes 64 KB even if it is for a single byte. That is quite wasteful. With the PR, the alignment is reduced to just 512 bytes; going lower would tank performance.

Still, the GPU plugin also has to create an OpenCL context, which can take, for instance, 100 MB to 150 MB (this highly depends on the implementation), so RAM usage will always be higher than when executing on CPU. This is unavoidable. So memory usage should now be somewhat better, but still nowhere near as good as CPU.
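To make the waste concrete, here is a rough back-of-the-envelope sketch (plain Python, purely illustrative; the buffer sizes and counts are made-up assumptions, not measurements from this issue) of how the allocation granularity dominates total RAM for many tiny buffers:

```python
# Hypothetical illustration: RAM consumed when every small allocation is
# rounded up to the driver's allocation granularity.
def padded_size(requested: int, alignment: int) -> int:
    # Round the requested size up to the next multiple of the alignment.
    return -(-requested // alignment) * alignment

small_allocs = [1, 4, 40, 64, 512] * 2000  # 10,000 assumed tiny buffers

# 64 KiB (Windows driver), 4 KiB (Ubuntu driver), 512 B (sub-buffers, PR #28167)
for align in (64 * 1024, 4 * 1024, 512):
    total = sum(padded_size(n, align) for n in small_allocs)
    print(f"alignment {align:>6} B -> {total / 2**20:7.1f} MiB total")
```

Under these assumed sizes, ten thousand tiny buffers cost roughly 625 MiB at 64 KiB granularity, ~39 MiB at 4 KiB, and ~5 MiB at 512 B, which lines up with the order of magnitude of the numbers discussed in this thread.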
Also, no idea why int8 is taking more RAM for you.
Plus, according to my tests, the model should actually run slightly faster (a few % speedup) with said PR.
Still, we will need to hunt down even more small buffers, since I still see many, many small allocations with this kind of model, each of them eating 64 KB of RAM. I don't yet know where they come from; I will research. Also, I have seen that on my test machines the Windows driver wants allocations to be 64 KB-aligned, but on Ubuntu they are 4 KB-aligned, so you should see better results on Ubuntu because of this driver difference.
Thank you very much!
Hi, the OpenCL context takes that much memory because it needs to set up its internal memory structures, plus possibly some caching to improve performance; I am not exactly sure what goes into that memory usage, though. I have also tested this with Nvidia and AMD cards, and they show the same overhead for the OpenCL context. In fact, IIRC Nvidia used quite a lot (300 MB, if I remember correctly), while AMD used less, but still a fair amount. For Intel, IIRC, I got around 100 MB to 150 MB, but it really depends on your environment. I guess we would need to ask the driver team.

Bear in mind that the GPU plugin is technically more complicated, since it has to offload and delegate tasks to the GPU, compile kernels, and deal with the kernel-mode GPU driver along with backends like OpenCL. The CPU plugin needs none of this, so naturally there is an overhead just for using the GPU, and we depend on the driver too.

No, I don't think it duplicates the model weights.

Sure, but I will need to set things up on Windows and compile a newer version.
Also, it could be that the int8 deberta model just allocates more small buffers. On a larger model you could see memory usage improvements over FP16 because the buffers are larger, but with a small model like this it all gets eaten up by the driver's memory allocation alignment requirements. A possible solution is to use sub-buffers so we can apply our own alignment, and that's exactly what I did in my PR. However, this is only a partial solution, since it currently does this only for shape_info buffers.
And still, we would require either 256- or 512-byte alignment, which is probably significantly more than what the CPU aligns to. Using an alignment of 128 or below would just tank performance on any GPU, and with no alignment at all I got around a 3x slowdown. With an alignment of either 256 or 512, however, I got good memory usage improvements and even a slight speedup. Basically, because of how GPUs work, we will never be as memory-efficient as the CPU plugin, but we can get closer.
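To illustrate the sub-buffer idea (a hypothetical sketch in plain Python, not the actual GPU plugin code): instead of requesting many tiny driver allocations, you make one large allocation and hand out offsets inside it aligned to your own, smaller boundary, so per-buffer padding is bounded by 512 bytes instead of 64 KB:

```python
# Hypothetical sketch of sub-allocation: one big driver allocation,
# many small logical buffers carved out at 512-byte-aligned offsets.
class SubAllocator:
    def __init__(self, pool_size: int, alignment: int = 512):
        self.pool_size = pool_size
        self.alignment = alignment
        self.offset = 0  # next free byte in the pool

    def alloc(self, size: int) -> int:
        # Round the current offset up to the alignment boundary.
        aligned = -(-self.offset // self.alignment) * self.alignment
        if aligned + size > self.pool_size:
            raise MemoryError("pool exhausted")
        self.offset = aligned + size
        return aligned  # offset of the sub-buffer within the pool

pool = SubAllocator(pool_size=1 << 20)       # one 1 MiB driver allocation
offsets = [pool.alloc(n) for n in (1, 40, 64, 512)]
print(offsets)  # [0, 512, 1024, 1536] -- at most 511 B padding per buffer
```

The trade-off described above is exactly the `alignment` parameter: smaller values waste less memory inside the pool but hurt kernel performance.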
OpenVINO Version: 2024.5.0
Operating System: Windows
Device used for inference: GPU
Framework: None
Model used: deberta-v3-mini
Issue description
Hi,
I converted deberta-v3-mini to OpenVINO using optimum-cli, with weights compressed to int8. The file size on disk is ~160MB.
I then compiled the model using both the Python openvino API and openvino-rs.
In both scenarios, the model took ~500MB of RAM.
Using an inference precision hint of "int8" on my iGPU (Intel Iris Xe, with a Core i5) didn't help; it even took more RAM (around 1.2GB!).
When I compiled this model for CPU, it somehow only took ~40MB.
Can you help me understand why, and how to decrease RAM usage in GPU case?
Is this a bug?
Thanks!
Step-by-step reproduction
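(A minimal sketch of the reproduction, assuming the conversion and compilation described above; the model checkpoint, output directory, and sleep-based measurement are illustrative assumptions, not taken from the original report.)

```python
# Assumed conversion step (checkpoint name and output dir are illustrative):
#   optimum-cli export openvino --model <deberta-v3-mini checkpoint> \
#       --weight-format int8 deberta-v3-mini-ov
import time

import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("deberta-v3-mini-ov/openvino_model.xml")

# Compile for the iGPU; optionally pass the precision hint mentioned above.
compiled = core.compile_model(model, "GPU")
# compiled = core.compile_model(model, "GPU",
#                               {hints.inference_precision: ov.Type.i8})

# Keep the process alive so its RAM can be inspected in Task Manager.
time.sleep(600)
```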
Relevant log output
No response