Tabor: redundant waveform upload #795
Comments
Equal waveforms should already have been detected and deduplicated in
That would have been my expectation, too, but in the case in which we observed this behavior (multiple AWGs connected) it somehow did not happen. Does it explicitly check for duplicate waveforms on this device, or for duplicate segments in the whole program?
What nesting level does the program have? The deduplication should happen based on the sampled waveforms. Here is where:
Are you sure the waveforms are equal? Both channels and markers have to be equal for that.
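For reference, "equal" here would mean that every channel and every marker of the sampled segment matches element-wise. A minimal sketch of such a check, with hypothetical names rather than the actual qupulse code, could look like this:

```python
import numpy as np

# Hypothetical helper, not the actual qupulse code: two sampled segments only
# count as duplicates if every channel AND every marker array is identical.
def sampled_segments_equal(seg_a: dict, seg_b: dict) -> bool:
    """seg_a / seg_b map channel and marker names to sampled numpy arrays."""
    if seg_a.keys() != seg_b.keys():
        return False
    return all(np.array_equal(seg_a[name], seg_b[name]) for name in seg_a)
```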
I am quite certain that the waveforms are physically equal, since the hash values are equal later on when they are uploaded (and from the PT definition I know they are equal [but just for this channel pair]).
It seems to me as if, due to the sequence being non-equal on other channels, different
My error, the deduplication happens based on the
Looks like we need another deduplication step after the sampling.
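Such an additional step could, for example, hash the sampled data of each segment and keep only the first occurrence while remembering an index remapping for the sequence tables. This is only a sketch under assumed data structures (a list of dicts of numpy arrays), not the existing driver code:

```python
import hashlib
import numpy as np

def deduplicate_sampled_segments(segments):
    """segments: list of dicts mapping channel/marker names to sampled arrays.

    Returns the deduplicated segments plus a mapping from old segment index to
    the index of the surviving (first) equal segment, so sequence tables can be
    rewritten accordingly.
    """
    unique_segments = []
    index_map = {}   # old index -> index into unique_segments
    seen = {}        # content hash -> index into unique_segments
    for old_index, segment in enumerate(segments):
        hasher = hashlib.sha256()
        for name in sorted(segment):
            hasher.update(name.encode())
            hasher.update(np.ascontiguousarray(segment[name]).tobytes())
        digest = hasher.digest()
        if digest not in seen:
            seen[digest] = len(unique_segments)
            unique_segments.append(segment)
        index_map[old_index] = seen[digest]
    return unique_segments, index_map
```

In practice one would probably confirm equal hashes with a full array comparison to rule out collisions.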
The greatest difficulty is not getting confused between 0-based and 1-based indexing. I think that the translation to 1-based indexing only happens directly before loading the segments and tables to the instrument, but I am not sure.
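One way to sidestep that confusion is to keep all bookkeeping 0-based and translate only in the final step that builds the data sent to the instrument. A toy illustration of that last translation (not the actual driver code):

```python
# Toy illustration, not the driver code: keep all bookkeeping 0-based and only
# translate to the instrument's 1-based segment numbers in the final step that
# builds the sequence table sent to the device.
def to_instrument_sequence_table(entries):
    """entries: list of (segment_index_0_based, repetition_count) tuples."""
    return [(segment_index + 1, repetitions)
            for segment_index, repetitions in entries]
```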
Does the hacky approach from #796 have (non-)obvious bugs/drawbacks to be aware of when using it in the meantime?
When uploading a program which for some reason has multiple segments of equal waveforms in a TaborProgram, the _find_place_for_segments_in_memory function does not check for unique values but uploads all segments from this batch regardless. It seems to me that this may be preventable by checking for unique hashes. Is this assumption correct?
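As a rough illustration of what such a uniqueness check could look like (hypothetical helper and names, not the actual _find_place_for_segments_in_memory signature): segments whose hash already exists in device memory, or which duplicate an earlier segment in the same batch, would simply be skipped for upload.

```python
# Rough sketch with assumed names; not the actual
# _find_place_for_segments_in_memory implementation.
def select_segments_to_upload(segment_hashes, hashes_already_in_memory):
    """Return indices of segments that actually need to be uploaded.

    Skips segments whose hash is already present in device memory as well as
    duplicates of earlier segments within the same batch.
    """
    seen = set(hashes_already_in_memory)
    to_upload = []
    for index, segment_hash in enumerate(segment_hashes):
        if segment_hash in seen:
            continue  # duplicate: reuse the existing segment instead of uploading
        seen.add(segment_hash)
        to_upload.append(index)
    return to_upload
```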