Rendered sound is thin #109
Hi Jasper, great to hear that you are using SSR! Regarding your question 1: it's hard to say from a distance what the problem is, especially because it shouldn't occur at all. Can you send me details on your setup? I can then reverse engineer it. My email address is jens_ahrens_chalmers_se; replace the first underscore with a dot, the second with an @, and the third with a dot. Ad 2) Yes, that's the way to go! You can maybe make it a little more spacious by adding a bit of reverb from other directions, but there is not much more you can do without heavy (and experimental) signal processing.
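For the two-point-source approach, an SSR scene description along these lines can pin the two channels at fixed positions left and right of the listener. The element and attribute names below are from memory of SSR's ASDF scene format, so double-check them against the SSR manual; the file name and coordinates are made up for illustration:

```xml
<?xml version="1.0"?>
<asdf version="0.1">
  <header>
    <name>Stereo as two point sources</name>
  </header>
  <scene_setup>
    <!-- Left channel of the stereo file, placed front-left of the listener -->
    <source name="left" model="point">
      <file channel="1">stereo.wav</file>
      <position x="-1.5" y="2.5"/>
    </source>
    <!-- Right channel, mirrored to the front-right -->
    <source name="right" model="point">
      <file channel="2">stereo.wav</file>
      <position x="1.5" y="2.5"/>
    </source>
  </scene_setup>
</asdf>
```

Using `model="plane"` instead of `"point"` is sometimes preferred for virtual stereo loudspeakers in WFS, since plane waves avoid distance-dependent level changes as the listener moves around.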
The rendered sound can become thin if the WFS prefilter is set too aggressively. Have you created a new prefilter specifically for your loudspeaker setup? What were the settings in the Matlab script?
The default prefilter is for systems with a loudspeaker spacing of approx. 17 cm, which is a typical spacing for many of the systems that we are aware of. SSR comes with a Matlab script that allows you to create your own filter. There are four parameters to set.
The prefilter attenuates the frequency range between the "lower frequency limit" and the "aliasing frequency" in a specific way. If the sound is too thin, simply create a filter with a lower "aliasing frequency" so that less attenuation occurs. The filter length determines the accuracy; fiddle around with it until you like the result. We'll update the documentation with practical advice on this.
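As a rough illustration of how those parameters interact — this is a Python sketch under my own assumptions, not the SSR Matlab script — a typical WFS pre-equalization rises about 3 dB per octave (proportional to √f) between the lower frequency limit and the spatial aliasing frequency, and is flat outside that range. The aliasing frequency for a loudspeaker spacing Δx is commonly estimated as c / (2·Δx), which for 17 cm lands near 1 kHz:

```python
import numpy as np
from scipy.signal import firwin2

fs = 44100.0        # sampling rate in Hz (assumed)
spacing = 0.17      # loudspeaker spacing in m (the SSR default assumption)
c = 343.0           # speed of sound in m/s
f_low = 100.0       # "lower frequency limit" in Hz (example value)
f_alias = c / (2 * spacing)   # spatial aliasing frequency, ~1009 Hz
ntaps = 129         # "filter length": more taps fit the target curve better

# Desired magnitude: sqrt(f) between f_low and f_alias, clamped (flat) outside.
freqs = np.linspace(0.0, fs / 2, 1024)
gain = np.sqrt(np.clip(freqs, f_low, f_alias))
gain /= gain[-1]    # normalize so the flat region above f_alias sits at 0 dB

# firwin2 expects the frequency grid normalized to Nyquist (0..1).
taps = firwin2(ntaps, freqs / (fs / 2), gain)
```

Lowering `f_alias` shortens the sloped region, so the low end is attenuated less relative to the highs — which is exactly the knob that counters a "thin" sound.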
I have been playing with SSR and am surprised at how relatively easy it has been to make it work, especially with an Android app!
My use case is small-scale dance music parties where we want to place the stereo output from a CDJ/mixer in an ambisonic space as well as move more dramatic sounds around the scene.
What I have noticed with the binaural and WFS renderers (at least) is that the original stereo source has much greater depth, and that any rendered output is very thin, lacking in bass, and almost tinny in comparison. Please excuse me, I am not an audio engineer; there are likely proper terms for this.
So a couple of questions from this:
Can I get the richness and depth of the original stereo source in an ambisonic rendering of it? If so, how?
What is the best or most appropriate way to place a stereo signal into the space? I have just been placing the left and right channels as two point sources, arranged somewhat randomly left and right of the origin/user. Is there another process that could be used to deconstruct the stereo signal into an ambisonic field (or whatever it is called) rather than two point sources?
Cheers, guys, loving the simplicity and GUIness of it all 8)