implementation of OPSI renderer #82
The canonical method would be:
Thanks for the assistance. Here is the link to my fork with branches dbaprenderer and opsirenderer: https://github.com/rowalz/ssr. opsirenderer is the one I was talking about.
A few conceptual comments from my side (sorry, I haven't looked into the code...): It will probably be more efficient to do the band splitting (highs and lows) before the rendering, as there will be fewer channels to filter. But that's not critical, it should work either way. My spontaneous approach would be splitting the signal and then having a VBAP and a WFS renderer in parallel, the outputs of which I would connect to the loudspeakers. The tricky part is that you need to ensure that both the WFS part and the VBAP-like part cause the exact same delay of the signal.

The VBAP renderer causes no algorithmic delay. It only applies gains to the individual loudspeaker signals (this costs one audio frame in terms of time delay).

The WFS part is trickier. Firstly, it applies what we termed predelay (http://ssr.readthedocs.io/en/latest/renderers.html?highlight=predelay#wave-field-synthesis-renderer). This is a delay that is applied to everything by default. It is necessary because the rendering of focused sources (virtual sound sources in front of the loudspeakers) requires "anticipation" of the signal (i.e. a negative delay). The predelay provides the headroom for that. It can be adjusted as explained in the docs.

Secondly, WFS inherently requires delaying. If I remember correctly, the current implementation of the WFS renderer even takes the propagation time of sound in the virtual space into account. I.e., if a virtual source is 10 m behind the loudspeakers, SSR will apply a delay equivalent to the time sound takes to travel 10 m. We have been wanting to provide the option to remove that. In other words, one could subtract the shortest delay that occurs for a specific virtual sound source from all driving signals of that source, so that there will always be a loudspeaker with no delay, just like in VBAP. I once started to implement this, but I hadn't understood the multi-threading architecture well enough to come up with a proper solution, and I gave up.

Finally, the WFS prefilter adds a bit of delay. By default, SSR uses a linear-phase FIR filter, the delay caused by which is half its length. The default filters are 128 taps long, I think. So the delay caused by them will be 64 samples (actually, it will be 64.5 if I interpret it correctly).
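To make that delay bookkeeping concrete, here is a minimal sketch (not code from SSR; all names are made up for illustration) of how the fixed WFS-side delays could be added up so that the VBAP-like path can be delayed by the same amount. It uses the usual (N - 1) / 2 rule of thumb for the group delay of a linear-phase FIR filter and deliberately leaves out the per-source propagation delay mentioned above.

```cpp
// Sketch only: summing the fixed delays of the WFS path so that the
// VBAP-like path can be delayed by the same amount and both band-split
// paths stay time-aligned.
#include <cstddef>

struct WfsDelayBudget
{
  double sample_rate;          // e.g. 44100.0
  double predelay_seconds;     // the WFS "predelay" (headroom for focused sources)
  std::size_t prefilter_taps;  // length of the linear-phase FIR prefilter
};

// Group delay of a linear-phase FIR filter with N taps: (N - 1) / 2 samples.
inline double prefilter_group_delay_samples(std::size_t taps)
{
  return (static_cast<double>(taps) - 1.0) / 2.0;
}

// Delay (in samples) that the VBAP-like path would need on top of its own
// one-frame latency.  Per-source propagation delays of the WFS driving
// functions are not included here and would have to be handled separately.
inline double vbap_alignment_delay_samples(const WfsDelayBudget& b)
{
  return b.predelay_seconds * b.sample_rate
         + prefilter_group_delay_samples(b.prefilter_taps);
}
```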
First of all, sorry for the horrible API. I had a quick look at your code, but I think it makes more sense if I talk about the existing WFS and VBAP renderers before commenting on your code. The only thing I want to say up front is: please don't use non-English variable names and don't write non-English comments! There are of course many ways to tackle implementing an OPSI renderer.

General architecture

Let me try to shed some light on the architecture and its ugly API. By default, each MimoProcessor has two "stages" (and corresponding "lists"); all SSR renderers have an additional "stage". Most SSR renderers have only the three aforementioned "stages"; if you want to see a more complicated one, have a look at the NFC-HOA renderer.
You'll notice that the WFS renderer has another peculiar class. Now let's look at the two relevant renderers and check what we need to know for the OPSI renderer, shall we?

The WFS renderer

I'm just going through the code. The main class also takes care of part of the processing.
The
As mentioned above, the
This will be called by each
The two in the middle are trivial, just a simple multiplication.
Let's skip over this for a second ...
The
OK, back to where we were.

The VBAP renderer

... coming soon ... I'm taking a break here, but I will continue within the next few days.
So let's continue, shall we?

The VBAP renderer

I'm following the code in the same way as before. Unlike the main WFS class, this class is not derived from the strange class mentioned above. The "process" function of the main class is a bit more complicated than in WFS.
In the process function of the
The
Here again, the process function looks boring, but it kicks off all the audio processing.

A possible OPSI renderer

As mentioned above, there are many ways to implement an OPSI renderer; I'm just mentioning a few ideas here. I would mainly follow the WFS renderer (because it seems to be the more complicated one) and would therefore use
Just write into the delay line, move the pre-filter to after the "split".
I guess this could be just a combination of the
Instead of directly "forwarding" the output of the delay line, I think this should do the low-pass filtering for the WFS part and it should probably also do the pre-filtering. In addition to the stuff from the WFS renderer, this should probably provide a second delay time that's slightly shifted to compensate for the pre-filter's group delay (as Jens mentioned above).
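As an illustration of that "second, slightly shifted delay time" idea, here is a hedged sketch. DelayLine, read_at and the other names are hypothetical stand-ins (not the actual APF/SSR API); the block only shows the offset arithmetic, assuming the pre-filter is applied to the WFS branch after the delay-line read.

```cpp
// Hypothetical sketch: reading one input's delay line at two positions, one
// for the (pre-filtered, low-passed) WFS part and one shifted by the
// pre-filter's group delay for the (unfiltered) VBAP-like part, so that both
// parts stay time-aligned.
#include <cstddef>
#include <vector>

struct DelayLine  // stand-in for the real delay line class
{
  std::vector<float> buffer;
  std::size_t write_pos = 0;

  // Assumes delay_in_samples <= buffer.size().
  float read_at(std::size_t delay_in_samples) const
  {
    std::size_t idx = (write_pos + buffer.size() - delay_in_samples)
                      % buffer.size();
    return buffer[idx];
  }
};

struct OpsiTapPair
{
  std::size_t wfs_delay;              // delay used by the WFS driving function
  std::size_t prefilter_group_delay;  // e.g. (taps - 1) / 2, rounded

  float wfs_sample(const DelayLine& d) const
  {
    return d.read_at(wfs_delay);
  }

  float vbap_sample(const DelayLine& d) const
  {
    // The VBAP-like part is not pre-filtered, so it is read with an extra
    // offset that mimics the pre-filter's group delay.
    return d.read_at(wfs_delay + prefilter_group_delay);
  }
};
```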
Let's be optimistic and say that the splitting into high-pass and low-pass signals worked. Now we'll have to find a way to combine them again ... Initially, I thought about using a single "combiner" that takes care of both at the same time. It might still be an option, but I have the feeling that it might be easier to use two "combiners". But there is a problem ... And there is another problem ... But I guess if those two things are solved, it should work!

Does any of what I said make sense? If you want to follow my suggestions from above, I think the best way forward would be to make the mentioned changes to the WFS part and completely ignore the VBAP part for the moment.
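For what it's worth, the combination step itself is just a per-loudspeaker sum of the two band-limited signals. A minimal sketch, with made-up names and buffer layout (in SSR this would live inside the output/"combiner" machinery):

```cpp
// Sketch of the per-loudspeaker combination step of an OPSI-style renderer:
// the low-passed WFS signal and the high-passed VBAP signal are summed for
// each output channel.  Assumes all three containers have matching sizes.
#include <cstddef>
#include <vector>

void combine_opsi_outputs(
    const std::vector<std::vector<float>>& wfs_low,    // [channel][sample]
    const std::vector<std::vector<float>>& vbap_high,  // [channel][sample]
    std::vector<std::vector<float>>& out)               // [channel][sample]
{
  for (std::size_t ch = 0; ch < out.size(); ++ch)
  {
    for (std::size_t n = 0; n < out[ch].size(); ++n)
    {
      out[ch][n] = wfs_low[ch][n] + vbap_high[ch][n];
    }
  }
}
```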
I thought about it a bit more and I think it is not feasible to implement the IIR filters intermingled with the crossfading. It's best to apply the IIR filters after the crossfade is done; since the filter coefficients don't change, the filtering can be moved to after the crossfade. I suggest the following change to what I've written above. This way, no changes have to be made to the "combiner", but we need an additional audio buffer.

BTW, iterating over the

With that, I guess the two problems I mentioned above are solved.
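To illustrate "filtering after the crossfade is done", here is a sketch under the assumption that each output channel keeps its own filter state between audio blocks; BiquadState and lowpass_outputs_after_crossfade are hypothetical names, not the apf::Biquad API or actual SSR code.

```cpp
// Hypothetical per-output low-pass stage applied after the crossfade:
// each output channel owns one filter whose state persists across blocks,
// so block boundaries do not cause discontinuities.
#include <cstddef>
#include <vector>

struct BiquadState
{
  // Fixed coefficients (e.g. a Butterworth low-pass at the OPSI crossover),
  // normalised so that a0 == 1.
  double b0, b1, b2, a1, a2;
  // Direct form II transposed state, kept between audio blocks.
  double z1 = 0.0, z2 = 0.0;

  float tick(float x)
  {
    double y = b0 * x + z1;
    z1 = b1 * x - a1 * y + z2;
    z2 = b2 * x - a2 * y;
    return static_cast<float>(y);
  }
};

// Called once per block, after the crossfaded output signals are available.
// Assumes filters.size() >= outputs.size().
void lowpass_outputs_after_crossfade(
    std::vector<std::vector<float>>& outputs,  // [channel][sample]
    std::vector<BiquadState>& filters)         // one state per channel
{
  for (std::size_t ch = 0; ch < outputs.size(); ++ch)
  {
    for (auto& sample : outputs[ch])
    {
      sample = filters[ch].tick(sample);
    }
  }
}
```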
Thank you for your extensive considerations. Unfortunately, I didn't get through them in the necessary depth, because I have a lot of other stuff to do at the moment. Also, I decided to leave the OPSI renderer out of my bachelor thesis (which was the original reason for the implementation) for various reasons.
For a comparison and evaluation of different rendering algorithms I wanted to add an OPSI renderer (Wittek 2002) to SSR.
The idea is to base it on the WFS renderer (for a start I copied the complete wfsrenderer.h) and to filter the resulting loudspeaker signals with a lowpass biquad (using the apf::Biquad class). The two speakers which are located on the connecting line between the virtual source and the reference point do not get filtered, but panned/weighted with an algorithm similar to VBAP (I just took some useful parts from vbaprenderer.h).

Because the filtering is applied after the rendering process, I tried to implement it in the OpsiRenderer::RenderFunction class, in sample_type operator()(sample_type in). I am not completely sure if this makes sense. However, it works and I get the lowpass-filtered signal for all of the speakers except the ones I described before. But there are some very ugly and annoying artefacts like dropouts (up to a few seconds) and crackling. To me it seems there is an offset of n times the buffer size (1024 samples) between the appearances of the artefacts, but I do not really understand where the buffer is applied.
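For reference, here is a self-contained sketch of the kind of stateful per-sample functor described above; the class name and the simple one-pole low-pass are stand-ins (not the apf::Biquad API or the actual OpsiRenderer code). The detail it highlights is that each loudspeaker output needs its own filter instance whose state persists across audio blocks.

```cpp
// Stand-in for a per-output render function that low-pass filters each
// sample it forwards.  One instance per loudspeaker output; the filter
// state (_y1) must live as long as the output, not be recreated per block.
#include <cmath>

class LowpassRenderFunction
{
  public:
    using sample_type = float;

    explicit LowpassRenderFunction(float coefficient)
      : _a(coefficient)  // e.g. std::exp(-2 * M_PI * fc / fs) for cutoff fc
      , _y1(0.0f)
    {}

    // Same call shape as the per-sample operator() mentioned above.
    sample_type operator()(sample_type in)
    {
      _y1 = (1.0f - _a) * in + _a * _y1;  // simple one-pole low-pass
      return _y1;
    }

  private:
    float _a;
    float _y1;  // state carried across blocks
};
```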
I could supply the code. Is there a straightforward method to do this or should I just upload it here?