
Is there a way to get the precise time that each period is played? #118

Open
cxrodgers opened this issue Apr 8, 2023 · 4 comments

cxrodgers commented Apr 8, 2023

Hello, I have been using this module for an auditory neuroscience experiment in my research lab, and I'm trying to increase the temporal precision of my results. Basically, I'm running jackclient-python on a Raspberry Pi with a HiFiBerry audio card (all part of the Autopilot project). We play sounds separated by silence and record responses to those sounds from auditory regions of the brain, similar to a hearing test at the audiologist.

To make this experiment work, I would like to know with sub-millisecond precision the exact time that each sound comes out of the speaker. Of course I can use an oscilloscope to measure this, but that is bulky, and it seems like there should be some way to get the information I need directly from jack.

I am starting jackd like this:

jackd -P75 -p16 -t2000 -dalsa -dhw:sndrpihifiberry -P -r192000 -n3 -s &

I think this means 3 periods of playback latency. While this call seems to set the length of the period to 16 frames, I think I am actually using periods of 1024 frames (the blocksize parameter of my jack.Client is 1024); this must be set by the sound card.

Here is some pseudocode for what is going on in my process callback right now:

import datetime
import jack

client = jack.Client('sound_player')  # hypothetical client name

def process(frames):
    last_frame_time = client.last_frame_time                    # exact cycle start
    frames_since_cycle_start = client.frames_since_cycle_start  # approximate
    now = datetime.datetime.now()                               # system clock time

    # write sound to the output ports (not shown...)

I log these three timing variables (last_frame_time, frames_since_cycle_start, and now) for every process call. Using these data, I think I can calculate offline the approximate relationship between frame times and the system clock. That way, I can calculate what time it was on the system clock at the beginning of each period (i.e., when the process callback was called). Finally, I think I can assume that sound comes out of the speaker 3 periods later.
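For example, the offline conversion might look roughly like this (untested sketch; fit_clock_mapping and speaker_time are placeholder names I made up, the fit assumes the two clocks drift only slowly, and system_times are the logged datetimes converted to POSIX timestamps):

import numpy as np

def fit_clock_mapping(frame_times, system_times):
    # Least-squares fit of system-clock time as a linear function of
    # frame time, using the (last_frame_time, now) pairs from each callback
    slope, intercept = np.polyfit(frame_times, system_times, 1)
    return slope, intercept

def speaker_time(frame_time, slope, intercept, nperiods=3, blocksize=1024):
    # Assume the sound reaches the speaker nperiods after its callback
    playback_frame = frame_time + nperiods * blocksize
    return slope * playback_frame + intercept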

I am looking for guidance: am I thinking about this correctly? If so, then my precision will be limited by the accuracy of frames_since_cycle_start, which I know is only approximate, and by the latency between getting that estimate and reading the system clock. Is there a better way to get the precise time that the sound in each period comes out of the speaker? Maybe there is a way to directly sample the audio clock on the Raspberry Pi, if I can figure out which pin it is on. Thanks for any tips!

edit: more info. This is what I see when I start jackd, which includes version information and parameter settings:

jackdmp 1.9.20
Copyright 2001-2005 Paul Davis and others.
Copyright 2004-2016 Grame.
Copyright 2016-2021 Filipe Coelho.
jackdmp comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK server starting in realtime mode with priority 75
self-connect-mode is "Don't restrict self connect requests"
creating alsa driver ... hw:sndrpihifiberry|-|1024|3|192000|0|0|nomon|swmeter|soft-mode|32bit
configuring for 192000Hz, period = 1024 frames (5.3 ms), buffer = 3 periods
ALSA: final selected sample format for playback: 32bit integer little-endian
ALSA: use 3 periods for playback
mgeier (Member) commented Apr 9, 2023

Getting reliable information about time is often quite hard.
If you have access to an oscilloscope, you should definitely use it to check if your assumptions about the API and your code are correct.

AFAIU, client.last_frame_time is the way to get an exact time in the audio callback. I have the feeling that you should not use datetime.now() in there. In that case you don't need client.frames_since_cycle_start either, which seems sketchy anyway.

In the non-audio thread, you should use client.frame_time and compare that to client.last_frame_time from the audio thread.
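Something like this, perhaps (untested sketch; the client name is arbitrary):

import jack

client = jack.Client('timing_demo')
start_frame = 0  # updated by the audio thread

@client.set_process_callback
def process(frames):
    global start_frame
    start_frame = client.last_frame_time  # exact cycle start

def seconds_since_cycle_start():
    # In a non-audio thread: relate "now" to the audio clock
    return (client.frame_time - start_frame) / client.samplerate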

I'm not sure what exactly you need, can you describe that in more detail?

While this call seems to set the length of the period to 16 frames, I think I am actually using periods of 1024 frames (because the parameter blocksize of my jack.Client is 1024)

This sounds wrong.

The "period" should be equal to the "blocksize". It's the same thing.

It seems like your -p16 sets --port-max, which is probably not what you intended?

I guess you wanted the ALSA backend option -p, a.k.a. --period?

Using the long options would surely reduce potential confusion, but either way you have to use the ALSA options after -dalsa! This may not be immediately obvious; I certainly learned it the hard way.

I guess you need something like this (I removed a few options where I didn't know what they are supposed to do):

jackd --driver alsa --period 16 --device hw:sndrpihifiberry --rate 192000 --nperiods 3

BTW, did you try to use --nperiods 2? If your hardware supports it, this should give you lower latency.
If not, you'll probably get xruns.
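(If I'm computing it right, the nominal playback latency with your current settings would be about 3 × 1024 / 192000 ≈ 16 ms; with --nperiods 2 it would be about 10.7 ms.)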

And using -s seems kinda unreliable (I never used it, though). Can't you use --realtime on the Pi?

cxrodgers (Author) commented

Thanks @mgeier for this information!

Re the command-line options: I received these values from a colleague, and I now understand that I was misinterpreting them; specifically, I didn't understand that '-p' has a different meaning before and after '-dalsa'. In any case, I am using a blocksize/period of 1024 frames and --nperiods of 3. I think these values are fine for me. I don't actually need particularly low latency; what I need is precise information about when sounds were played (see below).

I'm not sure what exactly you need, can you describe that in more detail?

Sure! In this experiment, I am playing intermittent sounds (~10 ms white noise bursts, repeated a few times a second) and measuring brain responses with a technique called EEG. My goal is to synchronize the two streams within about 0.5 ms. Specifically, for each sound that was played, I need to know which sample in the brain data was taken at the time that the sound started playing.

For playing sounds: I do this with a Raspberry Pi running jack with a HiFiBerry Amp2 audio output.
For recording EEG: I do this with a chip called an ADS1299. I use a Teensy to get the data off the ADS1299 and send it to a desktop PC. The sampling rate is 16 kHz.
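(At that 16 kHz rate, my 0.5 ms target corresponds to 8 EEG samples.)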

So somehow, I need to know which sample in the EEG corresponds to the time that jack started playing sound.

Approach 1 (software): store the time on the Raspberry Pi system clock at which jack played each sound. To do this I need to convert between last_frame_time and clock time.

Approach 2 (hardware): if I could find a way to pulse a pin on the Raspberry Pi on every period/blocksize, then I could sample this signal on one of the digital inputs on my Teensy.

Thus I am wondering what would be the best way to get the clock time that each period is played, or alternatively to set a callback with jack that I could use to pulse a pin every time a period is played.
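For Approach 2, here is roughly what I have in mind (untested sketch; it uses the standard RPi.GPIO library, the pin number is arbitrary, and I realize that touching GPIO from the real-time callback may not be safe):

import jack
import RPi.GPIO as GPIO

SYNC_PIN = 17  # arbitrary BCM pin number
GPIO.setmode(GPIO.BCM)
GPIO.setup(SYNC_PIN, GPIO.OUT)

client = jack.Client('sync_pulse')
state = False

@client.set_process_callback
def process(frames):
    global state
    state = not state
    GPIO.output(SYNC_PIN, state)  # toggles once per period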

Getting reliable information about time is often quite hard. If you have access to an oscilloscope, you should definitely use it to check if your assumptions about the API and your code are correct.

Agreed! Thanks for any suggestions about what to try first, and I will use the oscilloscope to verify it.

mgeier (Member) commented Apr 12, 2023

To do this I need to convert between last_frame_time and clock time.

I'm not sure, but it might be hard to get to a reliable 0.5 ms accuracy this way; I would expect quite a lot of jitter.

Approach 2 (hardware): If I could find a way to pulse a pin on the raspberry pi on every period/blocksize, then I could sample this signal on one of the digital inputs on my Teensy.

Do you have a spare audio channel?

You could try to generate a second audio signal, wire it to one of the inputs of the Teensy, and detect it there. I don't know what voltages and types of signal would work for that, though.
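For example, something like this could run in the process callback (untested sketch; the function name is made up, and sync_out would be a spare output port reserved for synchronization):

def write_sync_pulse(sync_out, onset, length=32, amplitude=1.0):
    # sync_out: a spare jack.OwnPort used only for synchronization
    # onset: sample index within this period where the stimulus starts
    buf = sync_out.get_array()  # numpy view of the port's output buffer
    buf[:] = 0
    buf[onset:onset + length] = amplitude  # short rectangular pulse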

cxrodgers (Author) commented

Thanks @mgeier for the suggestion! The HiFiBerry has two audio channels, and I'm using both of them already. However, I wonder if I could connect a GPIO pin as another output from jack; I'm not sure if that's possible, but I'll look into it.

If I get quantitative results about the jitter that I get with my first approach, I will come back and post them here for future reference in case it is useful for others.
