I've been trying to modify the latency benchmark to include more granular buffer access sizes and get a smoother latency curve, but it seems I don't fully understand how the algorithm works.
In particular, I modified the main control loop to increase the testsize by 1/2 of the previous full 2^n increment:
This gives me the size increases I wanted, but as you can see in the figures, the latencies don't actually change from the previous full 2^n figure.
Looking at the code in random_read_test, I see that you limit the access pattern to a given memory range by simply masking the randomized index with a defined address mask. I of course changed the parameters as above so I could pass the actual testsize instead of just nbits.
In theory the resulting behaviour should work, but clearly I'm missing something, because it doesn't. As far as I can tell this shouldn't be an issue with the LCG (I hope). Do you have any input on my modifications, or any suggestions for other ways to change random_read_test to accept test sizes other than 2^n?
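For context, here is a minimal sketch (with hypothetical names and LCG constants, not the benchmark's actual code) of the masking scheme being discussed. It illustrates why a testsize that is not a power of two has no effect: the AND mask is built from nbits, so indices are confined to [0, 2^nbits) and the extra half-step of buffer is simply never touched.

```c
#include <stdint.h>

/* Hypothetical LCG step; constants are the common Numerical Recipes
 * pair, not necessarily what the benchmark uses. */
static uint32_t lcg_next(uint32_t x)
{
    return x * 1664525u + 1013904223u;
}

/* Masked index as in the power-of-two scheme: the result is always
 * below 2^nbits, so e.g. testsize = 1.5 * 2^nbits still only ever
 * accesses the first 2^nbits entries. */
static uint32_t masked_index(uint32_t x, int nbits)
{
    return x & ((1u << nbits) - 1);
}
```

For example, with nbits = 10 an index of 1536 (1.5 * 1024) is masked down to 512, which is why latency figures for the half-step sizes match the previous full 2^n size.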
Well, I could have a look if you provided a compilable test branch with this code. Using arbitrary sizes may require rescaling the offset via multiplication rather than masking out the higher bits.
There is just one potential problem with making the implementation more complex. We want to ensure that all the temporary variables from the inner loop are always allocated in CPU registers. If there are any spills to stack, then we need to implement this code in assembly (just like it is done for 32-bit ARM).
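A minimal sketch of the rescaling idea mentioned above (hypothetical function name, not the benchmark's code): instead of masking, a widening 32x32-to-64-bit multiply followed by a shift maps a 32-bit random value onto an arbitrary range [0, size). This compiles to a single multiply and shift on most 64-bit targets, which matters for keeping the inner loop free of stack spills.

```c
#include <stdint.h>

/* Map a full-range 32-bit random value onto [0, size) for an
 * arbitrary (non-power-of-two) size: offset = (rand32 * size) >> 32.
 * Unlike masking, this works for any size, at the cost of a widening
 * multiply per access. */
static uint32_t scaled_index(uint32_t rand32, uint32_t size)
{
    return (uint32_t)(((uint64_t)rand32 * size) >> 32);
}
```

One caveat of this mapping: the scaled values follow the LCG's high bits rather than a simple masked sequence, so it changes the access pattern as well as the range, and whether the extra multiply stays in registers would need checking in the generated assembly.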