
dp: delayed start should apply to the first DP cycle only #8842

Merged
merged 2 commits into from
Feb 12, 2024

Conversation

marcinszkudlinski
Contributor

When DP starts, copying from DP to LL should be delayed until the first
DP deadline.

In the current implementation the cycle counter is set at every
DP scheduler trigger. If DP is, for any reason, scheduled again
before its deadline passes, the counter is set again and the copy
delay becomes too long. In an extreme situation (e.g. when OBS is set
to a long value while IBS is short, so DP processes short data chunks
with a long deadline), processing may become permanently stuck.

Collaborator

@lyakh lyakh left a comment


let's fix the printing format. Also, maybe consider renaming zephyr_dp_schedule.c -> zephyr_dp.c for consistency?

src/audio/module_adapter/module_adapter.c
@@ -253,7 +252,6 @@ void scheduler_dp_ll_tick(void *receiver_data, enum notify_id event_type, void *
/* set a deadline for given num of ticks, starting now */
k_thread_deadline_set(pdata->thread_id,
pdata->deadline_clock_ticks);
pdata->ll_cycles_to_deadline = pdata->deadline_ll_cycles;
Collaborator

I can see that this re-arming is useless, but it seems it should be harmless, since the dp_startup_delay flag is only set to true once during task initialisation? What am I missing?

Contributor Author

The flag may never be set to false if re-arming occurs too early. That hurts.

In the theoretically impossible event of an LL-DP
copy failure, an error message should be issued.

A message is better than an assert: as stated, this should
never happen, but if it does, a glitch plus a message is
better than a crash.

Signed-off-by: Marcin Szkudlinski <[email protected]>
@marcinszkudlinski
Contributor Author

maybe consider renaming zephyr_dp_schedule.c -> zephyr_dp.c for consistency?

This file implements the SOF-style abstraction called a "scheduler":

static struct scheduler_ops schedule_dp_ops = {
	.schedule_task		= scheduler_dp_task_shedule,
	.schedule_task_cancel	= scheduler_dp_task_cancel,
	.schedule_task_free	= scheduler_dp_task_free,
};

so the filename is rather accurate in my opinion

Collaborator

@lyakh lyakh left a comment

I'd still think about renaming the file, but that's just me, and it isn't renamed in this PR anyway

@kv2019i kv2019i merged commit 7c0c8a4 into thesofproject:main Feb 12, 2024
40 of 44 checks passed
@marcinszkudlinski marcinszkudlinski deleted the dp-delayed-start-fix branch February 13, 2024 14:24