networking ability for large-scale setups #72
That sounds great! Just to clarify: as far as I understand your description, there are three more or less independent parts:
Are there some other parts which I missed?

None of those is specific to WFS, but I guess that's your targeted reproduction method. The first point is specific to loudspeaker-based renderers, the others should really work with any renderer type. Just to give you an example for a different use case, you might have multiple independent binaural renderers running on one or several computers that all share the same audio scene.

ad 1) I wouldn't call the new type of loudspeakers "virtual speakers". There are already several "virtual" things in virtual acoustics, and "virtual speakers" already has a different meaning; this would be very confusing. What about "disabled loudspeakers", "dummy loudspeakers", "inactive loudspeakers", "pseudo loudspeakers", "disconnected loudspeakers", ...? You should not limit this to the WFS renderer; I think the LoudspeakerRenderer should deal with that. The functionality may not be implemented for all loudspeaker-based renderers, but at least none of them should break because of your changes.

ad 2) I guess you want to use the GUI, file-playing abilities and network interface more or less as they are now, right? And on top of that, you'll need some way to multicast/broadcast network messages to all sub-renderers.

ad 3) I don't really have a clue how this should look.

Anyway, feel free to make new issues for different parts of your endeavor. |
regarding the name: i think we (at the iem) call them phantom speakers. we actually use them for something different, but it amounts to the same thing: non-existent speakers that are used to help calculate the speaker-feeds. |
@mgeier, @umlaeute: thanks for the suggestions!
Exactly.
I have to have a look at the reproduction scene validation process. Basically, what I want is to provide every slave renderer with the possibility to distinguish between its own and foreign speakers. It would probably suffice to define speakers that don't belong to the current instance as "inactive" in the settings.
That was just a term I used to describe my intention. I agree that "phantom speakers" or "pseudo loudspeakers" is more appropriate here.
Thanks for pointing that out! Good starting point!
Yes, that would be great. Headless would also be okay, if run with a DAW.
Yes, although the setup currently aimed at is as follows:
Not sure what you mean by that.
Yes. Not sure how to implement that yet.
Yes.
I don't know about multicast yet. I would probably go for a predefined list of slave renderers by IP in the settings first.
I don't know yet (thanks for dropping boost btw!). Extending it most likely.
OSC would be great to have to interface with controlling software (e.g. for building separate GUIs to define movements, etc. such as implemented in WFSCollider).
Similar. I like the term phantom speakers! |
I wouldn't call it "phantom speakers" either. You mentioned "foreign" above, I think that could be an option.
It would probably be nice to have a single "reproduction setup" that describes the whole thing (either the one we are using now or a new one). The different slave instances might then have some additional settings that specify which loudspeakers they should create signals for.
I don't think this would be useful. Each slave instance has to be initialized at some point, and at that point it should already be known which loudspeakers it should use, right? OTOH, it might be useful to switch reproduction methods (e.g. from WFS to VBAP), but with the current architecture that's not possible, since each renderer has its own executable.
If you want to use the existing SSR also as "master" instance (with its GUI, network interface, file playing abilities, ...), you will have to select some renderer, right? Regarding broadcast/multicast: I don't know much about networking, I hope you come up with something meaningful. |
Why not simply add the option to "disable" some speakers? That's super clear and might also be useful in other contexts. And it avoids potential terminology clashes. |
Updated.
That was my initial idea. What you're describing is what the master instance would get as "reproduction setup". It would be great to see there which loudspeakers belong to what slave instance (not sure how naming should be resolved there... e.g. by IP or hostname).
Thanks for the clarification.
That's basically what it will boil down to. |
Does the "master" instance really have to know what loudspeaker belongs to which "slave" instance? I think we can start without that, just to keep it simple. If at some point the loudspeaker coordinates should be transmitted over the network, wouldn't it make more sense to send them from the "slave" instances to the "master" instead of the other way? If that doesn't hurt network performance, we could at some point also send the current loudspeaker levels from the "slaves" to the "master", but this really isn't a high-priority feature ... |
First off, I'll try to call them server and client from now on. I think it fits the purpose better.
No, but it would help a great deal to debug a large system, if the server is able to display which speaker belongs to what client. The graphical display is most likely out of the scope of what I'll be able to do, but I'd like to set the foundation work for something that can be extended properly in the future, if possible. |
I'm still not sure if it makes more sense for the "clients" to connect to the "server" or the other way round. Since you chose those names, I assume you are talking about the "server" waiting for connections and the "clients" actively connecting to the "server", right? What should happen if any of the involved computers has to be rebooted? How is the SSR supposed to be started on each of the computers? Can you describe with a bit more detail how your "nested" reproduction setup works? If you want you can create a new page on the wiki: https://github.com/SoundScapeRenderer/ssr/wiki. |
Well, I chose them not because of their proximity to centralized computing, but because, compared to master/slave, they are a less conflicting terminology and stand for more or less the same thing (in computing).
It should boot into a predefined environment, where ssr is automatically started, using the configuration shared by all clients and the server.
That obviously depends on the operating system. On Linux-based, systemd-managed systems it's possible to write service files for it (which should be done at some point anyway, to be able to run ssr headless with elevated scheduling, etc.).
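A minimal sketch of such a service file; the binary name, option, paths and limits are assumptions and would have to be adapted:

```ini
# /etc/systemd/system/ssr.service -- hypothetical example unit
[Unit]
Description=SoundScapeRenderer (headless client)
After=network.target sound.target

[Service]
# assumed binary name, option and configuration path
ExecStart=/usr/local/bin/ssr-wfs --no-gui /etc/ssr/ssr.conf
Restart=on-failure
# allow realtime scheduling and locked memory for low-latency audio
LimitRTPRIO=95
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
```

After installing it, `systemctl enable --now ssr.service` would start the renderer at boot.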
Have a look at this branch, where I started to manipulate the schema (all included reproduction setups validate using xmllint) and added an example for distributed_reproduction_setup. |
I've looked at your example setup and I think it's not the right way to do this. I still think you should do the 3 things I mentioned above separately:
In your example you are throwing point 1 and 2 together, while you should only solve point 1 with it. The problem is that point 2 should work for all renderers, not only for loudspeaker-based ones! The definition of "foreign" loudspeakers should be network-agnostic. For example, if you have a setup with 7 loudspeakers, you could have such a mapping:
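A sketch of such a mapping, in a hypothetical syntax:

```text
loudspeaker 2  ->  output channel 12
loudspeaker 3  ->  output channel 17
loudspeaker 4  ->  output channel  9
loudspeaker 6  ->  output channel 10
```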
... which would map the loudspeakers (as defined in the loudspeaker setup) with numbers 2, 3, 4 and 6 to the local output channels 12, 17, 9 and 10, respectively. All other loudspeakers (i.e. numbers 1, 5 and 7) would be "foreign". The exact syntax could of course be different. This doesn't have to involve host names, port numbers, etc. Does that make sense? |
just a few random thoughts. feel free to ignore them (but please do not choose "foreign speaker" for speakers that are not rendered to; i find this term highly ambiguous - and at best it means the opposite of what i currently understand you want it to mean)

What should happen if any of the involved computers has to be rebooted?

probably an interesting question from a practical pov, but i don't think that this is SSR's business. i guess the real question is along the lines of: "if part of the renderer suddenly disappears, should the entire system (automatically?) adjust to the new situation? (e.g. drop from 5th order ambisonics to 2nd order)". my answer is: i don't think so, esp. no automatic action should take place.

if the speakers are indeed managed by the "clients" (and they push that information to the "server"), then this should remain an active thing. if a client has changed their speaker configuration, they should actively push their new setup to the server. if they want to completely retract themselves from the rendering system, they would just send an empty speaker list (or similar: the idea is to tell the server which resources they can offer).

if the speakers are managed by the server, the server should assume that all speakers are available. if a client just vanishes, SSR should assume that it will be back online soon.

How is the SSR supposed to be started on each of the computers?

i think this is definitely out of the scope of SSR. |
Well, I have to start somewhere. Why not with the layout? This definitely is easier for testing and implementing later on.
I don't think so. 1) is being solved by assigning loudspeakers specifically to a client (which additionally is easy to read/write for a user), making the definition of foreign loudspeakers unnecessary (and they should be dealt with internally, not within the setup configuration). I don't really do much about 2) yet, as I only map a setup that indeed tries to deal with the "knowing of the network" already. There's no controlling in there (yet). Do you have a better solution for taking care of network mapping (hostnames, ports) while not breaking the given xml schema?
Why exactly? Is there another case where a foreign loudspeaker would make sense? If a loudspeaker is not rendered to, it will almost certainly not be on the same system, as you could otherwise use it with the same renderer.
Isn't the per host reproduction_setup taking care of that for that specific host?
To be honest: ssr is a pretty complex piece of software and I have to start somewhere to make sense of it. For me this includes working my way through it by following the crumbs to where things are actually loaded, evaluated, modified, etc.
How are connections between server and clients to be made later on then?
Doesn't the reproduction_setup for each host take care of that? For all of them, the standard rules of a reproduction_setup apply. Which loudspeaker number in the overall setup it is becomes apparent from the sum of all of them within the distributed_reproduction_setup. Why should the mapping be made explicit, on a per-host basis or even outside of a reproduction_setup, if that hasn't been the case before? Would you say that a unifying setup makes more sense, in which hostnames/ports are defined as properties of the loudspeakers, linear_arrays and circular_arrays or skips?
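For concreteness, a sketch of what such a unifying setup could look like, with hostnames/ports as per-host properties; all element and attribute names beyond the existing reproduction_setup/loudspeaker elements are hypothetical:

```xml
<distributed_reproduction_setup>
  <!-- hypothetical per-host grouping; names and ports are made up -->
  <host name="client1.local" port="50001">
    <reproduction_setup>
      <loudspeaker><position x="-1.5" y="2"/><orientation azimuth="-90"/></loudspeaker>
      <loudspeaker><position x="-0.5" y="2"/><orientation azimuth="-90"/></loudspeaker>
    </reproduction_setup>
  </host>
  <host name="client2.local" port="50001">
    <reproduction_setup>
      <loudspeaker><position x="0.5" y="2"/><orientation azimuth="-90"/></loudspeaker>
      <loudspeaker><position x="1.5" y="2"/><orientation azimuth="-90"/></loudspeaker>
    </reproduction_setup>
  </host>
</distributed_reproduction_setup>
```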
I'd rather forego the whole terminology and set this up in code only and not as part of the reproduction_setup/ distributed_reproduction_setup definition. Does alien loudspeaker sound better? ;-)
Agreed!
That's what I'm thinking.
Also agreed, but I think it would be good to give examples for startup scripts and definitely include systemd service files at some point. |
I agree. And on top of it it's quite badly designed, if I may say so.
Yes, and I understand that that's annoying, because many things are not at all clear. Let me try to give you an idea of how the configuration is supposed to work: most of the settings can be configured with a "configuration file": http://ssr.readthedocs.io/en/latest/operation.html#configuration-files. One of those settings lets you specify a file with a so-called "reproduction setup": http://ssr.readthedocs.io/en/latest/renderers.html#reproduction-setups. Examples for such a thing are in data/reproduction_setups/. The implementation of that is in src/loudspeakerrenderer.h, but this is probably not the right place: it should probably not be bound to the loudspeaker renderer. But that's how it is right now. Those are two separate things and I think they should stay two separate things.
Yes, as long as it is about the "reproduction setup". Providing a scene synchronously to multiple renderer instances should IMHO not be part of the "reproduction setup". I see this as just one of multiple hypothetical use cases:
I think all of them should be doable with the same tools. Therefore, it shouldn't be limited to loudspeaker setups.
You are right, the output mapping could actually be part of the "reproduction setup". But each renderer instance would still have to know which part of the whole "reproduction setup" it is supposed to take care of.
Well I think each rendering instance should have an individual "configuration file" and all instances (even the "server", if needed) should use the same "reproduction setup".
I don't know. This information should probably be in the "configuration file" of the "server"? What information does it really need? I think the "reproduction setup" is a purely optional information for the "server".
You are right, I didn't see that.
Yes. The only information a "client" doesn't know from the reproduction setup is: "Which of those loudspeaker groups belongs to me?".
I agree.
You are right, it could stay part of the "reproduction setup". But it doesn't have to.
I don't exactly know what you mean by that, but I think that hostnames/ports should generally not be part of the "reproduction setup" but rather of the "configuration file".
You are probably right, I can't think of a scenario either. I should have said it the other way round: The controlling of multiple SSR instances should be renderer-agnostic. And that's why the network settings shouldn't be part of the "reproduction setup".
No, I just think the network settings shouldn't be part of the "reproduction setup". @umlaeute I agree with all your points. I only asked those questions because I wanted to find out how establishing the connections should work:
I have the feeling that option 1 makes most sense, but option 3 also sounds tempting. |
@mgeier thanks for the input!
Okay, I think this branch then is the best I can come up with so far.
You are right. I think additionally to "knowing that they are in a networked mode", both client and server will have to know about each other from the configuration file.
Tricky! And not always applicable.
I would go for hostnames and ports, but yes, probably nothing else.
Are you sure? I really don't want to get in conflict with a potential current user base, that has to change its setup ;-)
How about: clients and server are started whenever. clients wait for further instructions from the server (after all they are "marionettes"). The server starts polling all clients it knows from the setup, once it is started, but doesn't require all of them to be "up" to trigger them to render.
This however would require a server to also be a client or something like that and reproduction_setups being extended by non-loudspeaker renderers, but it could potentially be done. |
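The start-up/polling idea above could be sketched roughly like this (hypothetical addresses and a made-up one-line "status?" protocol, not an actual SSR interface):

```python
import socket

# Rough sketch (NOT actual SSR code) of the idea above: the server polls a
# predefined list of clients from the configuration and proceeds with
# whichever of them respond in time, instead of requiring all to be up.
CLIENTS = [("192.168.0.11", 50001), ("192.168.0.12", 50001)]  # assumed list

def poll(clients, timeout=0.5):
    """Return the subset of clients that answered a status request."""
    up = []
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for addr in clients:
        try:
            sock.sendto(b"status?", addr)   # made-up request message
            data, _ = sock.recvfrom(1024)
            if data == b"ready":            # made-up reply message
                up.append(addr)
        except OSError:
            pass  # no answer: client not up yet, render without it
    sock.close()
    return up
```

The server would repeat this periodically, so "latecomers" are picked up on a later round of polling.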
So they have to be started first and are listening for a network connection?
OK, but how are the "latecomers" handled? I guess that could work. So that would be scenario 1 of those I mentioned above?
Exactly!
So why don't you remove them from there? In the simplest case, each client only has to know locally which loudspeakers "belong" to it.
Probably. In the end, both "server" and "client" will be SSR instances, right?
Could be, but I think it isn't necessary.
It could probably be done that way, but it would again be duplication of information and I think it would be unnecessarily complicated. Why not just have a plain list of hostnames and ports? You could even decide to show a loudspeaker setup in the GUI of the "server" which is totally different from the one used in the "clients".
I don't really understand what you are saying. But it sounds like the "server" has to juggle different reproduction setups, which seems unnecessary to me.
Actually, I don't think that a "client" has to know anything about the "server". OTOH, the "server" doesn't really have to know a lot about the "clients" either.
Exactly!
Oh, I think that's a very limiting assumption! This way, it would be impossible to run multiple "clients" on the same computer.
What for?
Well it makes sense to some extent, but it seems unnecessarily complicated while at the same time unnecessarily limiting the possible use cases.
Yes, if it's worth it.
That's life. Things change. We just have to document it properly and tell our users what they have to change. But as I said, the breaking change should be worth it. |
Frequent polling. Yes.
Because so far they were the best solution for handling skips on the clients and again only require one file to configure the clients instead of
Well, it would be nice if clients did not respond to random gibberish sent to them from other hosts, but I guess the server hostname and port can also be sent by the server, if a response is needed. |
I think it is a good idea to have only one reproduction setup file that's shared (e.g. by copying or rsyncing it) between all SSR instances.
I'm not saying that I prefer that, it was just a suggestion, without having put much thought into it. What I do prefer is that the controlling of multiple SSR instances via network has nothing to do with the rendering of a partial loudspeaker setup. We are always jumping back and forth between those two topics ...
That's true, but if you are in a closed network, which you probably are with a distributed WFS setup, there shouldn't really be other hosts, right? And we could still add a "server" whitelist to the "client" configurations, if desired. But probably that should be done with other tools like firewalls and stuff? I think not restricting this makes for more flexible use cases, right? What if you have several alternative "servers"? Does each client have to know about all of them? Also, the current IP interface of the SSR is inherently insecure. I prefer to keep it that way, because then people know that it's their problem to deal with security. We shouldn't add just some security features: either full commitment or nothing, and the latter seems to be less effort.
OK, that raises another interesting question: are you talking about UDP? If we are using TCP, the connection would be bi-directional and this would be a non-issue. |
On 2017-03-29 11:17, Matthias Geier wrote:
And we could still add a "server" whitelist to the "client" configurations, if desired. But probably that should be done with other tools like firewalls and stuff?
i totally agree that SSR shouldn't be bothered with securing the internet.
OK, that raises another interesting question: are you talking about UDP?
If we are using TCP, the connection would be bi-directional and this would be a non-issue.
it is a common misconception that UDP doesn't allow bi-directional
communication. it does fine.
e.g. probably the most used protocol in the internet (DNS) is a UDP
protocol - and it surely is bi-directional, as a client asks the server
for a name->IP mapping and the server obviously responds to it.
UDP doesn't have a notion of a "session". so your peers won't know if
the other side goes for a break. (which can be a blessing in a realtime
audio setup)
UDP is also packet-based, which means that you can send at most 2^16
bytes in a single atomic message. if you want to transmit more data in
one go (e.g. a largish configfile as text) things get a bit trickier
(well, you have to manually take care of that)
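the point about bi-directional UDP can be shown in a few lines (a toy ping/pong with made-up messages, nothing SSR-specific):

```python
import socket
import threading

# Tiny demonstration that a plain UDP socket is perfectly capable of
# request/response communication: the "server" replies to whatever
# address the received datagram came from.

def serve_once(sock):
    data, addr = sock.recvfrom(1024)     # blocking receive of one datagram
    sock.sendto(b"pong: " + data, addr)  # reply to the sender's address

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))
reply, _ = client.recvfrom(1024)
print(reply.decode())                    # -> pong: ping
```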
|
I guess we have been getting things mixed up in this thread (after all it is actually several topics).
That's what I'm trying to do. I think the jumping back and forth between topics got us a little confused :-/
Completely with you on that.
Yes, OSC most likely (which doesn't necessarily pinpoint it to UDP, as it's also TCP-capable). |
Yes, definitely! You should probably open new issues for the individual topics. Then we can also discuss the TCP vs UDP question ... |
For my thesis I'll work on the networking abilities of ssr-wfs.
To give a little info on the context: this means defining a way to connect several ssr instances over TCP/IP to make them work in a large-scale setup (a hardware setup with many channels that cannot be driven by a single computer, e.g. n machines with HDSPe MADI FX cards working in a cluster to define one large reproduction setup).
The large-scale setup aimed at is defined by x inputs being bridged to machines 1 to n (every machine has the same audio inputs), while each machine takes care of y separate channels. Depending on a source's location in the audio scene, each machine needs to know whether to render the given source (partially).
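The per-machine decision just described could be sketched like this (hypothetical code, not from SSR, using one common WFS secondary-source selection criterion: a loudspeaker is "active" for a point source if the vector from the source to the speaker points in the same direction as the speaker's outward-facing normal):

```python
# Hypothetical sketch: a machine has to render a source if at least one
# of the loudspeakers it is responsible for is active for that source.

def is_active(speaker_pos, speaker_normal, source_pos):
    # positive dot product of (speaker - source) with the outward normal
    dx = speaker_pos[0] - source_pos[0]
    dy = speaker_pos[1] - source_pos[1]
    return dx * speaker_normal[0] + dy * speaker_normal[1] > 0.0

def machine_renders(my_speakers, source_pos):
    # my_speakers: (position, outward normal) pairs for the y channels
    # this machine is responsible for
    return any(is_active(pos, n, source_pos) for pos, n in my_speakers)

# one speaker at (0, -2) facing the listening area in +y direction:
speakers = [((0.0, -2.0), (0.0, 1.0))]
print(machine_renders(speakers, (0.0, -5.0)))  # source behind array -> True
print(machine_renders(speakers, (0.0, 0.0)))   # source in front     -> False
```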
My current thoughts regarding this are as follows: each of the machines 1 to n needs to know which y output channels it is responsible for.
The following task list will be updated as I go along. Feel free to comment and leave suggestions!
Some of the features described below might extend into apf.