Pre-synthesis failed in both project and in part7a_bitstream tutorial #1157
Comments
Hi @sdubey11, Array partitioning affects how arrays are stored in FPGA memory. Vivado offers various schemes (cyclic, complete, block, etc.), but in hls4ml we use complete partitioning, which essentially stores all the elements of an array in registers. By keeping all the intermediate results in registers (rather than in on-chip memory, i.e. BRAM), we can access and propagate them faster and therefore achieve higher throughput and lower latency. However, registers on an FPGA are not meant for storing large arrays: they are a limited resource and, when used excessively, can cause significant routing complexity and timing-closure issues. Therefore, Vivado includes a configuration variable that sets an upper limit on the size of arrays that can be partitioned this way. In the most recent version of hls4ml, we've made it possible for users to modify this value, in case more complex designs need a larger upper limit. However, there was a bug where the default value of 4,096 wasn't propagated to the VivadoAccelerator backend, which therefore used the Xilinx Vivado default of 1,024. This should now be fixed by PR #1160. Can you please check out the branch from the PR and let us know if it fixes your problem?
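If it helps to verify which limit actually ends up in the generated project, one can inspect the synthesis script that hls4ml writes into the output directory. The directory and script name below (`build_prj.tcl`) and the exact directive text are assumptions about the generated output, so adjust as needed:

```python
from pathlib import Path

# Assumption: the hls4ml output directory and script name below; use the
# output_dir you passed when converting the model.
prj = Path('hls4ml_prj')

# Look for the array-partition size limit in the generated synthesis script
# (e.g. a line such as "config_array_partition -maximum_size 4096").
for line in (prj / 'build_prj.tcl').read_text().splitlines():
    if 'array_partition' in line or 'maximum_size' in line:
        print(line)
```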
Hi @bo3z, thank you for your reply.
I checked this out. To be sure, I re-ran my build on the PR branch.
I'm not sure if this is related to that limit, but the notebook I'm running for my model now seems to hang after this. It doesn't crash, but I left it running for a long time without progress. When I apply this fix to the tutorials, the kernel for the part7a_bitstream notebook crashes. Specifically, it crashes at a particular cell. Are these related to the original problem? If not and they are new ones, please let me know if I should open a new issue. Thank you.
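One way to tell a genuine hang from a merely slow synthesis run is to watch the Vivado HLS log while `hls_model.build()` is running. The log path below is an assumption about the hls4ml project layout; point it at your own output directory:

```python
import time
from pathlib import Path

# Assumption: vivado_hls.log is written inside the hls4ml project directory
# while hls_model.build() runs; adjust the path to your output_dir.
log = Path('hls4ml_prj/vivado_hls.log')

# Print the last few lines once a minute; if they stop changing for a long
# time, the build is likely stuck rather than just slow.
for _ in range(10):
    if log.exists():
        print('\n'.join(log.read_text().splitlines()[-5:]))
        print('---')
    time.sleep(60)
```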
Quick summary
Hello, I have been having issues running `hls_model.build(csim=False, export=True, bitfile=True)` for the project I am doing. Pre-synthesis fails due to some limit being exceeded. I'm somewhat new to this, so I'm not sure what the origin of the error is or what the optimal solution would be.
Details
I encounter the error when running `hls_model.build(csim=False, export=True, bitfile=True)`. I am loading a trained, pruned Keras MLP. It's a simple model with one hidden layer of 10 neurons, but the input shape is (10, 400). There have been previous discussions here and here on the same problem. Based on the latter I shrank my model to its current state, but I realize that the large shape of the input tensor, given that I'm trying to put this model on a `pynq-z2`, might still cause problems.
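For context, a minimal sketch of a model with that input shape is below. Only the (10, 400) input and the 10-neuron hidden layer come from the report; the flattening, activations, and output layer are assumptions. Under those assumptions, the first Dense layer alone holds 4000 × 10 = 40,000 weights, well above both the 1,024 and 4,096 partition limits discussed in the comment above.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# Illustrative sketch only: the (10, 400) input and the 10-neuron hidden
# layer come from the report; flattening, activations and the output layer
# are assumptions.
model = Sequential([
    Flatten(input_shape=(10, 400)),   # 10 * 400 = 4000 inputs once flattened
    Dense(10, activation='relu'),     # weight array: 4000 x 10 = 40,000 elements
    Dense(1, activation='sigmoid'),   # assumed output layer
])
model.summary()
```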
Steps to Reproduce
To test this I simply opened the tutorials (which I had already gone through before) and re-ran parts 1-4, then ran part 7a. I changed nothing of substance in any of the notebooks, except that in parts 1-4 I changed the use of `XILINX_VITIS` to `XILINX_VIVADO` and `Vitis` to `Vivado`, where relevant (a sketch of that substitution is below). Otherwise the code is the original, checked-out code. I still get the error I quoted above in part 7a (screenshots below). Parts 1-4 ran without issue. I am using `hls4ml==1.0.0`. My Vivado version is `2019.2`. The commit hash for the tutorial checkout is `29a7f7e7891ddc40c7feb2f9f9d7e116778785c1`.
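For reference, this is roughly what that substitution looks like in the notebooks; the Vivado install path is a placeholder, not the reporter's actual path:

```python
import os

# The tutorials put the Xilinx toolchain on PATH via an environment variable;
# the install location below is an assumption -- point it at your own Vivado.
os.environ['XILINX_VIVADO'] = '/opt/Xilinx/Vivado/2019.2'

# Originally XILINX_VITIS was used here; swapped for XILINX_VIVADO as described above.
os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']
```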
The only other issue that popped up in part 7a was a warning when running `model = load_model('model_3/KERAS_check_best_model.h5', custom_objects=co)`. I'm not sure if that's relevant, since I haven't changed anything in the part 7a notebook.
The tutorial error surprised me, since the same error appeared for both my own model and the tutorial model, implying the input tensor shape of my own model isn't the origin of the error (unless the same error is being generated by more than one thing).
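For completeness, one common way the hls4ml tutorials populate `co` is with the QKeras helper shown below. This is a sketch under that assumption (your model may need different custom objects), and it is not necessarily related to the warning:

```python
from tensorflow.keras.models import load_model
from qkeras.utils import _add_supported_quantized_objects

# co maps custom layer names to classes so Keras can deserialize them.
# Assumption: the model only needs the QKeras quantized objects.
co = {}
_add_supported_quantized_objects(co)

model = load_model('model_3/KERAS_check_best_model.h5', custom_objects=co)
model.summary()
```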
Additional context
Originally, in my personal project, I had a much larger MLP and was getting errors similar to those here. I did perform the suggested fixes, such as changing the `ReuseFactor`, changing `Strategy` to `Resource`, and setting `io_type` to `io_stream` (see the sketch below for how these are set). The issue resolved when I shrank the model, but then I started getting the error this post is about. I still thought that the large size of the input tensor was the issue, but after running the `hls4ml` tutorial out of the box, as is, and getting essentially the same error, I am not sure that is the case. As such, I'm not certain whether this (the error thrown) is a bug or whether I have overlooked something/done something impermissible.
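For reference, a minimal sketch of how those three options are typically set in hls4ml; the `ReuseFactor` value, output directory, and the rest of the call are placeholders, not the reporter's actual settings:

```python
import hls4ml

# Build a model-level config from the Keras model, then adjust the knobs
# mentioned above. The values here are placeholders.
config = hls4ml.utils.config_from_keras_model(model, granularity='model')
config['Model']['ReuseFactor'] = 64        # reuse multipliers to cut resource usage
config['Model']['Strategy'] = 'Resource'   # favour BRAM-based implementation over latency

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type='io_stream',                   # streaming I/O instead of fully parallel
    backend='VivadoAccelerator',           # used for the pynq-z2 bitfile flow
    board='pynq-z2',
    output_dir='hls4ml_prj',
)
hls_model.compile()
```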
My own model is trained with `Keras==2.15.0`. I have also set up a pip venv with `hls4ml==1.0.0` and all the necessary libraries, and I use that as the kernel when I run the notebooks, including for the tutorials. I am running Ubuntu `24.04.1 LTS` (not sure if this matters).
If this does not seem to be a bug and there is a more appropriate forum for this question, please let me know.
Thank you.