btian1
250us --> 10ms? That is a 40x increase; could something be missed during the sleep? How about 1ms?
The above is the comment I added when this change was first submitted. My question now is: where does 250us * 30 = 7.5ms come from?
Is it an empirical value chosen to keep the total wait time under 10ms?
It was tuned experimentally with tests and logging. Most of the time the wait is less than 1ms. I don't want to use a 1ms sleep, since that could miss one scheduling period in the FW.
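For reference, here is a minimal sketch of the kind of bounded polling loop being discussed, with a 250us sleep and 30 retries (a 7.5ms budget). The identifiers (`ipc_ll_done`, `ipc_wait_for_ll`, the `IPC_WAIT_*` macros) are illustrative, not the actual SOF code:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

#define IPC_WAIT_POLL_US 250 /* sleep per retry */
#define IPC_WAIT_RETRIES 30  /* 30 * 250us = 7.5ms total budget */

/* Set by the LL thread once it has executed the queued IPC work. */
static atomic_bool ipc_ll_done;

static bool ipc_wait_for_ll(void)
{
	for (int i = 0; i < IPC_WAIT_RETRIES; i++) {
		if (atomic_load(&ipc_ll_done))
			return true;      /* LL tick already handled the request */
		usleep(IPC_WAIT_POLL_US); /* usually wakes within one LL period */
	}
	return false;                     /* > 7.5ms: report the timeout */
}
```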
What context is this called from? My expectation is the IPC thread, and if so I'm not sure how we can block LL from running.
@lgirdwood This is the IPC thread. It doesn't block the LL thread, but it does block processing of IPC messages until the LL timer next fires, runs the IPC processing (which needs to happen in LL context), and acks it as done; only then can the IPC thread continue. As it stands, this adds a mandatory 10ms delay to each such handover from the IPC thread to the LL thread.
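To make the handover concrete, here is a sketch of the LL-thread side, using the same illustrative names as the loop above (none of these are real SOF identifiers): the LL tick handler runs whatever IPC work was queued for LL context and acks it, which releases the polling IPC thread.

```c
#include <stdatomic.h>
#include <stdbool.h>

extern atomic_bool ipc_ll_done;        /* the flag the IPC thread polls */
extern bool ipc_ll_work_pending(void); /* hypothetical: has IPC queued work for LL? */
extern void ipc_ll_run_pending(void);  /* hypothetical: run that work in LL context */

static void ll_timer_tick(void)
{
	/* ... regular LL scheduling work ... */

	if (ipc_ll_work_pending()) {
		ipc_ll_run_pending();             /* e.g. pipeline trigger in LL context */
		atomic_store(&ipc_ll_done, true); /* ack: unblocks the waiting IPC thread */
	}
}
```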
OK, so we have a timing dependency between the LL thread and the IPC thread, and this is the convergence point. That needs to be spelled out in the inline comments, alongside setting a better retry count: 30 tries means about 7.5 LL ticks, i.e. we need to say that anything beyond 1.5 LL ticks would already be a significant failure.
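One possible way to spell this out in code is to express the budget in LL ticks rather than as a bare retry count; the 1ms LL period and the 1.5-tick threshold below are assumptions for illustration, not values taken from the SOF tree:

```c
#define LL_TICK_US       1000 /* nominal LL scheduling period (assumed 1ms) */
#define IPC_WAIT_POLL_US 250  /* poll interval while waiting for the LL ack */

/*
 * The IPC thread only needs to survive until the next LL tick, so a wait
 * longer than ~1.5 LL ticks already indicates a scheduling problem worth
 * flagging, even if the hard cap below still allows 7.5 ticks.
 */
#define IPC_WAIT_WARN_US (3 * LL_TICK_US / 2)
#define IPC_WAIT_MAX_US  (30 * IPC_WAIT_POLL_US) /* 7.5 * LL_TICK_US */
```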
@kv2019i @RanderWang would 150us * 50 be better, to make the IPC response quicker? (The total budget stays at 7.5ms, but the polling granularity is finer.)
@lgirdwood This was the mechanism added by @lyakh to synchronize pipeline trigger processing with the LL thread (so that pipeline state changes are not done asynchronously with pipeline execution). @lgirdwood @btian1 We can improve the logic (or even devise a different way to wait for the next LL tick; I think the original concern of avoiding unnecessary thread context switches is as valid as ever), but can we do that as a follow-up and merge this plain revert first? It helps with multiple pressing issues (A/V sync, and it seems to help with multicore scenarios), and @RanderWang and I have done significant testing of this version. If we want a modified new mechanism, that would need to be tested just as thoroughly, pushing the merge out.
I'm fine with a revert, but we need a follow-up PR that spells out the relationships with some inline commentary.