
Prioritize kernels for layers using all processors #352

Merged
merged 2 commits into analogdevicesinc:develop from feature/kernels on Aug 29, 2024

Conversation

rotx-eva
Contributor

@rotx-eva rotx-eva commented Aug 1, 2024

…ity of "holes" in the kernel map

@rotx-eva rotx-eva marked this pull request as draft August 1, 2024 18:38
@oguzhanbsolak
Contributor

I ran the regression test. I observed no side effects; the only difference was the prioritized placement of layers that use all processors, as expected.

It is worth mentioning that latency was negatively affected for the ai85-cifar-100-residual and ai85-resnet experiments. The modification increases layer 12's PreRead cycles. Layer 12 depends on the outputs of layers 9 and 11, and both of those layers are placed at larger offsets with the new modification, which may explain the latency increase.

I attached the ai85-cifar-100-residual log files below:
latency_new.txt
log_cifar_new.txt
log_cifar.txt
latency.txt

Other than these experiments, I observed no increase or decrease in latency, so I think this is a minor effect and the PR is good to go.

@rotx-eva rotx-eva marked this pull request as ready for review August 28, 2024 15:24
@MaximGorkem MaximGorkem merged commit e174459 into analogdevicesinc:develop Aug 29, 2024
1 check passed
@MaximGorkem MaximGorkem deleted the feature/kernels branch August 29, 2024 19:59