
networking ability for large-scale setups #72

Open
2 of 4 tasks
dvzrv opened this issue Mar 12, 2017 · 21 comments

@dvzrv (Contributor) commented Mar 12, 2017

For my thesis I'll work on the networking abilities of ssr-wfs.
To give a little context: this means defining a way to connect several ssr instances over TCP/IP so that they can drive a large-scale setup (a hardware setup with so many channels that it cannot be handled by a single computer, e.g. n machines with HDSPe MADI FX cards working as a cluster that forms one large reproduction setup).
The targeted large-scale setup has x inputs bridged to machines 1 to n (every machine receives the same audio inputs), while each machine takes care of y separate output channels.
Depending on a source's location in the audio scene, each machine needs to know whether to render the given source (partially).
My current thoughts regarding this are as follows:

  • adding an ssr-wfs server instance that serves as an interface to the client machines 1 to n
  • providing all ssr-wfs client instances with the complete reproduction setup, which clearly states which y output channels they are responsible for

The following task list will be updated as I go along. Feel free to comment and leave suggestions!
Some of the features described below might extend into apf.

  • extend the configuration format to define client-assigned loudspeakers (configuration.cpp)
  • implement alien loudspeakers (loudspeakers that a given instance of ssr will not render on)
  • implement logic to disable rendering on alien loudspeakers (loudspeakerrenderer.h) while taking all available speakers into account
  • extend the networking capabilities to allow ssr -> ssr message sending and information retrieval over OSC
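To make the first two tasks a bit more concrete, here is a rough sketch of what a client-aware setup file could look like. This is purely illustrative: apart from the existing reproduction_setup vocabulary, all element and attribute names here are hypothetical and not part of the current schema.

```xml
<?xml version="1.0"?>
<!-- hypothetical sketch; only the inner reproduction_setup elements
     resemble the current SSR schema -->
<distributed_reproduction_setup>
  <client hostname="render1">
    <reproduction_setup>
      <circular_array number="28">
        <first>
          <position x="0" y="1.5"/>
          <orientation azimuth="-90"/>
        </first>
      </circular_array>
    </reproduction_setup>
  </client>
  <client hostname="render2">
    <!-- loudspeakers driven by the second machine; "alien" from the
         perspective of render1 -->
  </client>
</distributed_reproduction_setup>
```

Each client would then treat every loudspeaker outside its own client element as alien.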
@mgeier (Member) commented Mar 13, 2017

That sounds great!

Just to clarify: With 1 - n you mean "one to n" and not "one minus n", right?

As far as I understand your description, there are three more or less independent parts:

  1. a new kind of loudspeaker
  2. a thing that is able to control multiple renderers
  3. a network protocol that includes timing information

Are there some other parts which I missed?

None of those is specific to WFS, but I guess that's your targeted reproduction method.
Whatever you come up with should also be easily implementable in VBAP, probably for others, too.

The first point is specific to loudspeaker-based renderers, the others should really work with any renderer type. Just to give you an example for a different use case, you might have multiple independent binaural renderers running on one or several computers that all share the same audio scene.

ad 1)

I wouldn't call the new type of loudspeakers "virtual speakers". There are already several "virtual" things in virtual acoustics and "virtual speakers" already has a different meaning, this would be very confusing. What about "disabled loudspeakers", "dummy loudspeakers", "inactive loudspeakers", "pseudo loudspeakers", "disconnected loudspeakers", ...?

You should not limit this to the WFS renderer, I think the LoudspeakerRenderer should deal with that. The functionality may not be implemented for all loudspeaker-based renderers, but at least none of them should break because of your changes.

ad 2)

I guess you want to use the GUI, file-playing abilities and network interface more or less as they are now, right?
You would like something that looks like an SSR instance, but without the rendering part.
You would still like to be able to open sound files, right?
You probably want a "dummy renderer" (or a "pass-through renderer") that has an output channel for each of its sources (be it live sources or sound files)?
You probably need access to the audio backend also to get some timing information?
You still want to be able to control this "central" unit via network as it is used now, right?

And on top of that, you'll need some way to multicast/broadcast network messages to all sub-renderers.

ad 3)

I don't really have a clue what this should look like.
Should this extend the current network interface or should this be an independent interface?
There was some recent discussion about the network interface, you should have a look at https://github.com/SoundScapeRenderer/ssr/wiki/SSR-IP-OSC-interface.
You sure have already thought about this, could you share your thoughts?
Probably in a new issue?

Anyway, feel free to make new issues for different parts of your endeavor.

@umlaeute (Contributor) commented Mar 14, 2017

regarding the name: i think we (at the iem) call them phantom speakers.

we actually use them for something different, but it amounts to the same thing: non-existent speakers that are used to help calculate the speaker-feeds.
btw, this is on HOA systems, so i don't think that any solution should be WFS-only. instead make it applicable to any backend (it's probably of little use for binaural rendering; but then... you never know)

@dvzrv (Contributor, Author) commented Mar 14, 2017

@mgeier, @umlaeute: thanks for the suggestions!

With 1 - n you mean "one to n" and not "one minus n", right?

Exactly.

The first point is specific to loudspeaker-based renderers, the others should really work with any renderer type.

I have to take a look at the reproduction scene validation process. Basically, what I want is to provide every slave renderer with the ability to distinguish between its own and foreign speakers. It would probably suffice to mark speakers that don't belong to the current instance as "inactive" in the settings.
The settings file will have to be different for every slave instance anyway.
Another interesting idea would be to push all settings through the master instance (which I think is not possible in ssr's current state).

I wouldn't call the new type of loudspeakers "virtual speakers".

That was just a term used to describe my intention. I agree that "phantom speakers" or "pseudo loudspeakers" is more appropriate here.

You should not limit this to the WFS renderer, I think the LoudspeakerRenderer should deal with that. The functionality may not be implemented for all loudspeaker-based renderers, but at least none of them should break because of your changes.

Thanks for pointing that out! Good starting point!
Will try my best.

I guess you want to use the GUI, file-playing abilities and network interface more or less as they are now, right?

Yes, that would be great. Headless would also be okay, if run with a DAW.

You would like something that looks like an SSR instance, but without the rendering part.
You would still like to be able to open sound files, right?

Yes, although the currently targeted setup is as follows:
A DAW plays back up to x channels (i.e. live sources).
All outputs of the master instance are bridged to the inputs of all slave instances (1-to-1 mapping).

You probably want a "dummy renderer" (or a "pass-through renderer") that has an output channel for each of its sources (be it live sources or sound files)?

Not sure what you mean by that.

You probably need access to the audio backend also to get some timing information?

Yes. Not sure how to implement that yet.

You still want to be able to control this "central" unit via network as it is used now, right?

Yes.

And on top of that, you'll need some way to multicast/broadcast network messages to all sub-renderers.

I don't know about multicast yet. I would probably first go for a predefined list of slave renderers by IP in the settings.
In any case, that's another point on the list for the settings validation process.
From a security perspective I would not use multicast: the setup is a one-to-many and many-to-one relation that can be made safer and more robust using firewall settings on all machines in an IP-based setup (e.g. block all traffic to the IP interface that does not come from master or slave, depending on direction).

Should this extend the current network interface or should this be an independent interface?

I don't know yet (thanks for dropping boost btw!). Extending it most likely.
What do you think would make more sense?

There was some recent discussion about the network interface, you should have a look at https://github.com/SoundScapeRenderer/ssr/wiki/SSR-IP-OSC-interface.
You sure have already thought about this, could you share your thoughts?
Probably in a new issue?

OSC would be great for interfacing with controlling software (e.g. for building separate GUIs to define movements, etc., as implemented in WFSCollider).
However, it has the same bottleneck issues as the XML interface.
A subscription method might work around this issue, as I think rate limiting would depend on the recipient's resources (and would thus be non-trivial to set up).
I suspect the networking has to be tested extensively to make sure it will not go awry once you send multiple source movement commands.
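To sketch what I mean by a subscription method: the sender could collapse bursts of movement messages and only forward the latest state per source at a fixed interval. All names here are made up; nothing like this exists in ssr yet.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch: keep only the latest position per source and
// flush the whole batch once per send interval, so message bursts
// collapse into one update per source.
struct Position { float x; float y; };

class UpdateBatcher
{
public:
  // Called for every incoming movement message; overwrites any pending
  // update for the same source.
  void update(const std::string& source_id, Position pos)
  {
    _pending[source_id] = pos;
  }

  // Called once per send interval; returns and clears the batch.
  std::vector<std::pair<std::string, Position>> flush()
  {
    std::vector<std::pair<std::string, Position>> batch(
        _pending.begin(), _pending.end());
    _pending.clear();
    return batch;
  }

private:
  std::map<std::string, Position> _pending;
};
```

A fixed send interval (e.g. driven by the audio callback) would then call flush() and serialize the batch into network messages, which keeps the rate independent of how fast the GUI emits movements.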

we actually use them for something different, but it amounts to the same thing: non-existent speakers that are used to help calculate the speaker-feeds.

Similar. I like the term phantom speakers!

@mgeier (Member) commented Mar 15, 2017

I wouldn't call it "phantom speakers" either.
First of all, I prefer to call them "loudspeakers" instead of "speakers", and second, "phantom source" is already a fixed term and calling something else "phantom" might be misleading, especially if it doesn't work in a similar way. I think "phantom" is as bad as "virtual".

You mentioned "foreign" above, I think that could be an option.

The settings file will have to be different for every slave instance anyways.

It would probably be nice to have a single "reproduction setup" that describes the whole thing (either the one we are using now or a new one). The different slave instances might then have some additional settings that specify which loudspeakers they should create signals for.

Another interesting idea would be to push all settings through the master instance (which I think in ssr's current state is not possible).

I don't think this would be useful. Each slave instance has to be initialized at some point, and at that point it should be already known which loudspeakers it should use, right?

OTOH, it might be useful to switch reproduction methods (e.g. from WFS to VBAP), but with the current architecture that's not possible, since each renderer has its own executable.
At some point this might be an interesting feature but I think it is out of scope for your project, right?

All outputs of the master instance are bridged to the inputs of all slave instances (1 to 1 mapping).

You probably want a "dummy renderer" (or a "pass-through renderer") that has an output channel for each of its sources (be it live sources or sound files)?

Not sure what you mean by that.

If you want to use the existing SSR also as "master" instance (with its GUI, network interface, file playing abilities, ...), you will have to select some renderer, right?
As you are saying, the outputs of the "master" instance are not loudspeaker signals but the source signals for the "slave" instances.
So I guess you will have to implement a new type of renderer that simply passes through all signals to its outputs, right?
And this might also be the place where you get access to the audio timing information?
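Just to illustrate what I mean by "pass-through": one output channel per source, each input block copied verbatim. This is not the actual renderer API (which is built on APF); it only sketches the signal flow of a "master" instance whose outputs feed the "slave" inputs 1:1.

```cpp
#include <cstddef>

// Hypothetical pass-through processing: no panning, no filtering,
// just input channel ch copied to output channel ch.
void pass_through(const float* const* inputs, float* const* outputs,
                  std::size_t channels, std::size_t frames)
{
  for (std::size_t ch = 0; ch < channels; ++ch)
  {
    for (std::size_t i = 0; i < frames; ++i)
    {
      outputs[ch][i] = inputs[ch][i];
    }
  }
}
```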

Regarding broadcast/multicast: I don't know much about networking, I hope you come up with something meaningful.
The current network interface is quite limited, but I don't know if it makes more sense to extend it or to create a new one. I hope we'll see along the way what's best.

@chohner (Contributor) commented Mar 15, 2017

Why not simply add the option to "disable" some speakers? That's super clear and might also be useful in other contexts. And it avoids potential terminology clashes.

@dvzrv (Contributor, Author) commented Mar 16, 2017

You mentioned "foreign" above, I think that could be an option.

Updated.

It would probably be nice to have a single "reproduction setup" that describes the whole thing (either the one we are using now or a new one). The different slave instances might then have some additional settings that specify which loudspeakers they should create signals for.

That was my initial idea. What you're describing is what the master instance would get as "reproduction setup". It would be great to see there which loudspeakers belong to which slave instance (not sure how naming should be resolved there... e.g. by IP or hostname).
Each slave instance should have a clear picture of the whole "reproduction setup", too, to properly account for partial rendering on some loudspeakers (e.g. the area between two or more slave renderers).

So I guess you will have to implement a new type of renderer that simply passes through all signals to its outputs, right?
And this might also be the place where you get access to the audio timing information?

Thanks for the clarification.
Yes, I suppose that's the case.

Why not simple add the option to "disable" some speakers? That's super clear and might also be useful in other contexts. And it avoids potential terminology clashes.

That's basically what it will boil down to.
Defining the loudspeaker as "someone else's" (i.e. a foreign loudspeaker) while it is still part of the whole setup is clearer though (especially in regard to the above-mentioned rendering between two slave renderers).

@mgeier (Member) commented Mar 20, 2017

Does the "master" instance really have to know what loudspeaker belongs to which "slave" instance?

I think we can start without that, just to keep it simple.

If at some point the loudspeaker coordinates should be transmitted over the network, wouldn't it make more sense to send them from the "slave" instances to the "master" instead of the other way?
After all, the "master" won't be able to move the loudspeakers!

If that doesn't hurt network performance, we could at some point also send the current loudspeaker levels from the "slaves" to the "master", but this really isn't a high-priority feature ...

@dvzrv (Contributor, Author) commented Mar 20, 2017

Does the "master" instance really have to know what loudspeaker belongs to which "slave" instance?

First off, I'll try to call them server and client from now on; I think that fits the purpose better.
Secondly, I'm currently working on a "nested" reproduction setup that can describe the whole distributed setup. This has the benefit of a single setup file (instead of n+1) that can be reused on every client as well as the server. Also, it doesn't require introducing a new type to the definition, as each loudspeaker would explicitly belong to a certain client.
Speaking of the clients: I think it would be great (also in regard to whatever network interface will be used for communication) to make the clients available by hostname (which each client can check with gethostname) in the configuration file (this can be an attribute of the client definition, each of which holds its own reproduction setup).
How does that sound?
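A quick sketch of the gethostname idea: each instance compares the hostnames in the shared setup file against its own. Purely illustrative; no such lookup exists in ssr yet.

```cpp
#include <unistd.h>  // gethostname() (POSIX)
#include <string>

// Returns the local hostname; 255 bytes is the usual practical limit.
std::string local_hostname()
{
  char buf[256] = {0};
  ::gethostname(buf, sizeof(buf) - 1);
  return std::string(buf);
}

// True if a client entry from the shared setup refers to this machine.
bool is_local_client(const std::string& configured_hostname)
{
  return configured_hostname == local_hostname();
}
```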

After all, the "master" won't be able to move the loudspeakers!

No, but it would help a great deal when debugging a large system if the server were able to display which speaker belongs to which client. The graphical display is most likely out of the scope of what I'll be able to do, but I'd like to lay the foundation for something that can be extended properly in the future, if possible.

@mgeier (Member) commented Mar 22, 2017

I'm still not sure if it makes more sense for the "clients" to connect to the "server" or the other way round. Since you chose those names, I assume you are talking about the "server" waiting for connections and the "clients" actively connecting to the "server", right?

What should happen if any of the involved computers has to be rebooted?

How is the SSR supposed to be started on each of the computers?

Can you describe with a bit more detail how your "nested" reproduction setup works?

If you want you can create a new page on the wiki: https://github.com/SoundScapeRenderer/ssr/wiki.

@dvzrv (Contributor, Author) commented Mar 22, 2017

Since you chose those names, I assume you are talking about the "server" waiting for connections and the "clients" actively connecting to the "server", right?

Well, I chose them not because of their proximity to centralized computing, but because, compared to master/slave, they are a less conflicting terminology and more or less symbolize the same thing (in computing).
The server should send out information about the sources (e.g. position, volume, jack transport start/stop, timing) to all clients by hostname.
All clients in return should notify the server of changes in their state (e.g. ssr starting/running, ssr stopping, jack transport start/stop).
If you compare it to e-mail, this is probably not the most typical client-server structure, because here the clients are rather marionettes (then again, a setup like this is not really typical either ;-) ).

What should happen if any of the involved computers has to be rebooted?

It should boot into a predefined environment where ssr is automatically started, using the configuration shared by all clients and the server.
It would be great if the computer could just flawlessly get back to rendering, but I doubt that this is possible.

How is the SSR supposed to be started on each of the computers?

That obviously depends on the operating system. On Linux-based, systemd-managed systems it's possible to write service files for it (which should be done at some point anyway, to be able to run ssr headless with elevated scheduling, etc.).
macOS has automation scripts for sessions (stuff gets started when users are - automatically - logged in).
Windows has startup jobs, but honestly, I haven't used that kind of system in a long time. YMMV.
That covers the "autostarting" after reboot.
I guess starting ssr on demand could be triggered by a script (e.g. over ssh); otherwise an idle state would have to be implemented that takes care of "waiting for input". I'm not sure that's a good idea though, as it would open a whole other can of worms.
The script version works well on Linux/macOS, but again, I have no clue what to use on Windows for that, if it's even an option.
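For the systemd case, a minimal unit file could look roughly like this. It is only a sketch: the binary path, user, flags and config location are assumptions, and the exact options should be checked against ssr-wfs --help.

```ini
# /etc/systemd/system/ssr-wfs.service -- hypothetical sketch
[Unit]
Description=SoundScapeRenderer WFS client instance
# rendering needs the network and audio backend up first
After=network.target sound.target

[Service]
Type=simple
User=ssr
# path, flags and config location are assumptions
ExecStart=/usr/bin/ssr-wfs --no-gui /etc/ssr/client.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabled with systemctl enable ssr-wfs.service, this would take care of the "autostarting after reboot" part.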

Can you describe with a bit more detail how your "nested" reproduction setup works?

Have a look at this branch, where I started to manipulate the schema (all included reproduction setups validate using xmllint) and added an example for a distributed_reproduction_setup.

@mgeier (Member) commented Mar 23, 2017

I've looked at your example setup and I think it's not the right way to do this.

I still think you should do the 3 things I mentioned above separately:

  1. a new kind of loudspeaker
  2. a thing that is able to control multiple renderers
  3. a network protocol that includes timing information

In your example you are throwing points 1 and 2 together, while you should only solve point 1 with it.

The problem is that point 2 should work for all renderers, not only for loudspeaker-based ones!

The definition of "foreign" loudspeakers should be network-agnostic.
The only thing you really need is a mapping from certain loudspeakers of a given loudspeaker setup to certain output channels on the local hardware.
IMHO it would be best to keep (for now) the description of the setup as-is and create a separate piece of information (might be in the configuration file or each instance?) with an output mapping.

For example, if you have a setup with 7 loudspeakers, you could have such a mapping:

2 -> 12
3 -> 17
4 -> 9
6 -> 10

... which would map the loudspeakers (as defined in the loudspeaker setup) with numbers 2, 3, 4 and 6 to the local output channels 12, 17, 9 and 10, respectively.

All other loudspeakers (i.e. numbers 1, 5 and 7) would be "foreign".

The exact syntax could of course be different, e.g. something like 0 12 17 9 0 10 (with a final implied 0).
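A sketch of how an instance could interpret such a compact mapping (purely illustrative, nothing like this exists in the code yet):

```cpp
#include <cstddef>
#include <vector>

// Compact mapping syntax from above: one entry per loudspeaker of the
// shared setup, 0 meaning "foreign" (not rendered by this instance),
// any other number being the local output channel.  Entries beyond the
// end of the list are implied 0.
struct OutputMapping
{
  std::vector<int> channels;  // e.g. {0, 12, 17, 9, 0, 10} for 7 speakers

  // local output channel for loudspeaker n (1-based), or 0 if foreign
  int local_channel(std::size_t n) const
  {
    return (n >= 1 && n <= channels.size()) ? channels[n - 1] : 0;
  }

  bool is_foreign(std::size_t n) const { return local_channel(n) == 0; }
};
```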

This doesn't have to involve host names, port numbers, etc.
Instead, it should contain the relevant information (where to connect them to) which isn't available in your example.

Does that make sense?

@umlaeute (Contributor) commented

just a few random thoughts.

feel free to ignore them (but please do not choose "foreign speaker" for speakers that are not rendered to; i find this term highly ambiguous - and at best it means the opposite of what i currently understand you want it to mean)

What should happen if any of the involved computers has to be rebooted?

probably an interesting question from a practical pov, but i don't think that this is SSR's business.
the question seems to be similar to "what happens if my beloved speaker #4 dies?".

i guess the real question is along the lines of: "if part of the renderer suddenly disappears, should the entire system (automatically ?) adjust to the new situation? (e.g. drop from 5th order ambisonics to 2nd order)".

my answer is: i don't think so, esp. no automatic action should take place.

if the speakers are indeed managed by the "clients" (and they push that information to the "server"), then this should remain an active thing. if a client has changed its speaker configuration, it should actively push the new setup to the server. if it wants to completely retract itself from the rendering system, it would just send an empty speaker list (or similar: the idea is to tell the server which resources it can offer).

if the speakers are managed by the server, it should assume that all speakers are available.
(though i think that having the speakers managed by the clients is probably a better approach)

if a client just vanishes, SSR should assume that it will be back online soon.

How is the SSR supposed to be started on each of the computers?

i think this is definitely out of the scope of SSR.

@dvzrv (Contributor, Author) commented Mar 23, 2017

I still think you should do the 3 things I mentioned above separately:

Well, I have to start somewhere. Why not with the layout? This definitely makes testing and implementing easier later on.

In your example you are throwing point 1 and 2 together, while you should only solve point 1 with it.

I don't think so. 1) is solved by assigning loudspeakers specifically to a client (which additionally is easy to read/write for a user), making the definition of foreign loudspeakers unnecessary (they should be dealt with internally, not within the setup configuration).

I don't really do much about 2) yet; I only map a setup that admittedly already tries to deal with the "knowing of the network". There's no controlling in there (yet).
The hostname's port probably doesn't have to be mandatory though, with a hardcoded default.

Do you have a better solution for taking care of network mapping (hostnames, ports), while not breaking the given xml schema?

The definition of "foreign" loudspeakers should be network-agnostic.

Why exactly? Is there another case where a foreign loudspeaker would make sense? If the loudspeaker is not rendered on, it will with high certainty not be on the same system, as you could otherwise use it with the same renderer.
I just can't figure out a scenario where it might be needed locally.

The only thing you really need is a mapping from certain loudspeakers of a given loudspeaker setup to certain output channels on the local hardware.

Isn't the per-host reproduction_setup taking care of that for that specific host?
I agree that changes have to be made to the way the setup files are loaded and evaluated, so that the foreign loudspeakers (from the other hosts' reproduction_setups) are also added for each host if a distributed_reproduction_setup is loaded, but I don't see that being a problem at all.

IMHO it would be best to keep (for now) the description of the setup as-is and create a separate piece of information (might be in the configuration file or each instance?) with an output mapping.

To be honest: ssr is a pretty complex piece of software and I have to start somewhere to make sense of it. For me this includes working my way through it by following the crumbs to where things are actually loaded, evaluated, modified, etc.
So, for me, extending the xml schema makes sense, as it is a first step towards mapping further steps in the code (everything starts with a definition).
Moving the output mapping to a separate file seems like unnecessary clutter to me, when the reproduction_setup is there for defining said mapping on a host (currently only localhost). Why move it somewhere else?
Obviously the way reproduction_setup files are evaluated would need to be extended for distributed_reproduction_setup configurations, but I don't think that should be an issue.
On top of that: for the user your approach would mean editing files in many places, which in turn leads to more sources of error. Having a single reproduction_setup/distributed_reproduction_setup file is easier to understand, less error-prone and more easily transferable.

This doesn't have to involve host names, port numbers, etc.

How are connections between server and clients to be made later on, then?
Should they rather go into a separate definition for - say - a network_setup? That could also be established.

Instead, it should contain the relevant information (where to connect them to) which isn't available in your example.

Doesn't the reproduction_setup for each host take care of that? For all of them, the standard rules of a reproduction_setup apply. Which loudspeaker number a speaker has in the overall setup becomes apparent from the sum of all of them within the distributed_reproduction_setup.
This, as mentioned before, would need to be implemented properly in code while loading a setup file, but it is not necessary to schema-define a foreign_loudspeaker this way, as it is implicit in the schema.
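To illustrate why I think the foreign loudspeakers are implicit: in a nested setup every loudspeaker carries the hostname of its client, so a loudspeaker is foreign exactly when its hostname differs from the local one. Types and field names here are made up for illustration only.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical flattened view of a distributed_reproduction_setup:
// every loudspeaker knows which client host it belongs to.
struct Loudspeaker { std::string host; float x; float y; };

// Indices of all loudspeakers that a given host must NOT render on.
std::vector<std::size_t> foreign_indices(
    const std::vector<Loudspeaker>& all, const std::string& local_host)
{
  std::vector<std::size_t> result;
  for (std::size_t i = 0; i < all.size(); ++i)
  {
    if (all[i].host != local_host) result.push_back(i);
  }
  return result;
}
```

No explicit foreign_loudspeaker element is needed; the partition falls out of the client assignment.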

Why should the mapping be made explicit, on a per-host basis or even outside of a reproduction_setup, if that hasn't been the case before?

Would you say that a unifying setup makes more sense, in which hostnames/ports are defined as properties of the loudspeakers, linear_arrays, circular_arrays or skips?
This would be less readable and more complex.

but please do not chose "foreign speaker" for speakers that are not rendered to; i find this term highly ambiguous - and at best it means the opposite of what i currently understand you want it to mean

I'd rather forgo the whole terminology and set this up in code only, not as part of the reproduction_setup/distributed_reproduction_setup definition. Does "alien loudspeaker" sound better? ;-)

no automatic action should take place.

Agreed!

(though i think that having the speakers managed by the clients is probably a better approach)

That's what I'm thinking.
I don't believe that a highly dynamic setup makes much sense for the ssr though, in which clients are allowed to change their hardware capabilities at runtime.

i think this is definitely out of the scope of SSR.

Also agreed, but I think it would be good to give examples for startup scripts and definitely include systemd service files at some point.

@mgeier (Member) commented Mar 25, 2017

To be honest: ssr is a pretty complex piece of software and I have to start somewhere to make sense of it.

I agree. And on top of that it's quite badly designed, if I may say so.

For me this includes working my way through it by following the crums to where things are actually loaded, evaluated, modified, etc.

Yes, and I understand that that's annoying, because many things are not at all clear.
There is a lot of legacy code that should have been updated years ago, and so on and so on.

Let me try to give you an idea of how the configuration is supposed to work:

Most of the settings can be configured with a "configuration file": http://ssr.readthedocs.io/en/latest/operation.html#configuration-files.
There is also an example configuration file, which is worth reading through: data/ssr.conf.example.
Most of those settings can also be set with command line options (which can be shown with ssr-wfs --help). The implementation is located in src/configuration.cpp.

One of those settings allows to specify a file with a so-called "reproduction setup": http://ssr.readthedocs.io/en/latest/renderers.html#reproduction-setups. Examples for such a thing are in data/reproduction_setups/. The implementation of that is in src/loudspeakerrenderer.h, but this is probably not the right place. It should probably not be bound to the loudspeaker renderer. But that's how it is right now.

Those are two separate things and I think they should stay two separate things.
However, there are currently some things in the "configuration file" that should probably be moved to the "reproduction setup", namely the WFS prefilter and the HRIR files.
Currently, a "reproduction setup" seems to be solely targeted towards loudspeaker setups, but I think it might be worthwhile to create "reproduction setups" for binaural rendering, too, at some point. But that's probably a different discussion ...

So, for me, extending the xml schema makes sense, as it is a first step to map further steps in the code (everything starts with a definition).

Yes, as long as it is about the "reproduction setup".

Providing a scene synchronously to multiple renderer instances should IMHO not be part of the "reproduction setup".

I see this as just one of multiple hypothetical use cases:

  • rendering of a scene on a cluster of renderers

  • presentation of a scene by means of a loudspeaker setup, but at the same time a binaural live broadcast of the same scene

    • this could even be extended to the presentation of a scene on a cluster of renderers and at the same time a 5.1 VBAP as well as a binaural broadcast.
  • presentation of the same (interactive) scene to multiple headphones with individual head (and probably even position) tracking

I think all of them should be doable with the same tools. Therefore, it shouldn't be limited to loudspeaker setups.

Moving the output mapping to a separate file seems unnecessary clutter to me, when the reproduction_setup is there for defining said mapping on a host (currently only localhost). Why move it somewhere else?

You are right, the output mapping could actually be part of the "reproduction setup".

But each renderer instance would still have to know which part of the whole "reproduction setup" it is supposed to take care of.
I think having an individual "reproduction setup" per SSR instance isn't very practical; therefore this information could be part of the "configuration file", which will potentially be different for each renderer instance.

On top: For the user your approach would mean editing files in many places, which in turn leads to more sources for errors. Having a single reproduction_setup/ distributed_reproduction_setup file is easier to understand, less error prone and more easily transferable.

Well I think each rendering instance should have an individual "configuration file" and all instances (even the "server", if needed) should use the same "reproduction setup".

This doesn't have to involve host names, port numbers, etc.

How are connections between server and clients to be made later on then?
Should they rather go into a separate definition for - say - network_setup? That can also be established.

I don't know. This information should probably be in the "configuration file" of the "server"?
At some point this might even be done with Zeroconf or something?

What information does it really need?
A list of IP addresses and ports?

I think the "reproduction setup" is a purely optional information for the "server".
It would sure be nice to show the loudspeakers in the GUI and probably even show their current levels, but none of this should be strictly necessary for the whole system to run.

Instead, it should contain the relevant information (where to connect them to) which isn't available in your example.

Doesn't the reproduction_setup for each host take care of that?

You are right, I didn't see that.
That would work fine.

For all of them, the standard rules of a reproduction_setup apply. Which number a given loudspeaker has in the overall setup becomes apparent from the sum of all of them within the distributed_reproduction_setup.

Yes.

The only information a "client" doesn't know from the reproduction setup is: "Which of those loudspeaker groups belongs to me?".

This, as mentioned before, would need to be implemented properly in code while loading a setup file, but it is not necessary to schema-define a foreign_loudspeaker this way, as it is implicit in the schema.

I agree.

Why should the mapping be made explicit, on a per-host basis or even outside of a reproduction_setup, if that hasn't been the case before?

You are right, it could stay part of the "reproduction setup". But it doesn't have to.

Would you say that a unified setup makes more sense, in which hostnames/ports are defined as properties of the loudspeakers, linear_arrays, circular_arrays or skips?
This would be less readable and more complex.

I don't exactly know what you mean by that, but I think that hostnames/ports should generally not be part of the "reproduction setup" but rather of the "configuration file".

The definition of "foreign" loudspeakers should be network-agnostic.

Why exactly? Is there another case where a foreign loudspeaker would make sense? If a loudspeaker is not rendered to, it will almost certainly not be on the same system; otherwise you could simply use it with the same renderer.
I just can't figure out a scenario where it might be needed locally.

You are probably right, I can't think of a scenario either.
The only arguments I have are cleanness and separation of concerns, which can easily be overruled in favor of practicality.

I should have said it the other way round:

The controlling of multiple SSR instances should be renderer-agnostic.

And that's why the network settings shouldn't be part of the "reproduction setup".

Do you have a better solution for taking care of network mapping (hostnames, ports), while not breaking the given xml schema?

No, I just think the network settings shouldn't be part of the "reproduction setup".
And breaking the current XML Schema isn't a problem at all, feel free to do that!
You can even come up with a completely new, non-XML-based way of specifying the "reproduction setup", if you want.

@umlaeute I agree with all your points. I only asked those questions because I wanted to find out how establishing the connections should work:

  1. The "clients" are started first (or basically always running) and wait for a network connection. The "server" is started last and tries to connect to all "clients".

  2. The "server" is started first. Whenever a "client" is started, it tries to connect to the "server".

  3. Something else.

I have the feeling that option 1 makes most sense, but option 3 also sounds tempting.
I don't think that option 2 is good, but the names "server" and "client" would at least make sense there.

@dvzrv commented Mar 25, 2017

@mgeier thanks for the input!

No, I just think the network settings shouldn't be part of the "reproduction setup".

Okay, I think this branch then is the best I can come up with so far.
It makes hostname an optional attribute of loudspeaker, linear_array, circular_array and skip.
This way all clients will know by hostname which loudspeakers belong to them, and skips are always local to the client, too (globally they wouldn't make sense). Doing this on an IP basis is more complicated, I think, as a client can have multiple network interfaces, IPs, etc.
I'll figure something out for the configuration of clients and server, so they'll know by a setting (e.g. "network-mode") that they have to expect a hostname attribute in the reproduction_setup once they load it.
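A rough sketch of what a reproduction_setup with the proposed hostname attribute could look like (the hostname attribute is the proposed extension, not part of the current schema; hostnames and positions are placeholders):

```xml
<reproduction_setup>
  <!-- rendered by the SSR instance running on host "node1" -->
  <loudspeaker hostname="node1">
    <position x="-1.5" y="2.0"/>
    <orientation azimuth="-90"/>
  </loudspeaker>
  <!-- rendered by the SSR instance running on host "node2" -->
  <loudspeaker hostname="node2">
    <position x="1.5" y="2.0"/>
    <orientation azimuth="-90"/>
  </loudspeaker>
  <!-- output channel skipped on node2 -->
  <skip hostname="node2"/>
</reproduction_setup>
```

Each instance would render only the entries carrying its own hostname and treat the rest as "foreign" loudspeakers.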

I don't know. This information should probably be in the "configuration file" of the "server"?

You are right. I think that, in addition to "knowing that they are in a networked mode", both client and server will have to know about each other from the configuration file.
The clients are however mentioned by hostname in the reproduction_setup, so the server could potentially derive them from it, too, without the need of providing them in the configuration file at all (I know this breaks with the separation again, but would be very convenient, given the assumption of a standard port to communicate with clients).
For clients however that would not work and they need a server_name in their configuration.
Does that make sense?

At some point this might even be done with Zerconf or something?

Tricky! And not always applicable.
I would definitely go for a static setup solution first.

What information does it really need?
A list of IP addresses and ports?

I would go for hostnames and ports, but yes, probably nothing else.

And breaking the current XML Schema isn't a problem at all, feel free to do that!

Are you sure? I really don't want to come into conflict with a potential current user base that has to change its setup ;-)

  1. The "clients" are started first (or basically always running) and wait for a network connection. The "server" is started last and tries to connect to all "clients".

  2. The "server" is started first. Whenever a "client" is started, it tries to connect to the "server".

  3. Something else.

How about: clients and server are started whenever. Clients wait for further instructions from the server (after all, they are "marionettes"). The server starts polling all clients it knows from the setup, once it is started, but doesn't require all of them to be "up" to trigger them to render.
Alternatively, all of this also works with a list of clients provided in the server configuration file (but it would be redundant information, as clients are mentioned in the reproduction_setup anyway, to state ownership of loudspeakers).
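The polling loop described above could look roughly like this (a hedged illustration; `poll_clients` and its behavior are made up for this sketch and are not actual SSR code):

```python
import socket

def poll_clients(clients, timeout=0.5):
    """Return the subset of (host, port) pairs that accept a TCP connection.

    The "server" would call this repeatedly in a certain interval, so that
    late-starting "clients" are picked up without requiring all of them to
    be up before rendering is triggered.
    """
    reachable = []
    for host, port in clients:
        try:
            # a successful connect means this client is up and listening
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append((host, port))
        except OSError:
            pass  # not up yet; it will be retried on the next polling round
    return reachable
```

A client that is down simply stays out of the returned list until a later round succeeds.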

I see this as just one of multiple hypothetical use cases:

  • rendering of a scene on a cluster of renderers

  • presentation of a scene by means of a loudspeaker setup, but at the same time a binaural live broadcast of the same scene

  • this could even be extended to the presentation of a scene on a cluster of renderers and at the same time a 5.1 VBAP as well as a binaural broadcast.

  • presentation of the same (interactive) scene to multiple headphones with individual head (and probably even position) tracking

I think all of them should be doable with the same tools. Therefore, it shouldn't be limited to loudspeaker setups.

This however would require a server to also be a client or something like that and reproduction_setups being extended by non-loudspeaker renderers, but it could potentially be done.
If I imagine it as a set of different clusters with the proposed "hostname as loudspeaker attribute", all clients should be good to go from the start.
The server in return would hold an array of reproduction_setups to drive (which still wouldn't break with my idea to derive the clients from the reproduction_setup... it just requires the server to be able to do that from n reproduction_setups and keeping them separate).
Hmm, the term "marionette" feels more and more appropriate for the client :>

@mgeier commented Mar 27, 2017

How about: clients and server are started whenever. Clients wait for further instructions from the server (after all, they are "marionettes").

So they have to be started first and are listening for a network connection?

The server starts polling all clients it knows from the setup, once it is started, but doesn't require all of them to be "up" to trigger them to render.

OK, but how are the "latecomers" handled?
Does the "server" just repeatedly try to connect in a certain interval?

I guess that could work.

So that would be scenario 1 of those I mentioned above?

Alternatively, all of this also works with a list of clients provided in the server configuration file (but it would be redundant information,

Exactly!

as clients are mentioned in the reproduction_setup anyway, to state ownership of loudspeakers).

So why don't you remove them from there?

In the simplest case, each client only has to know locally which loudspeakers "belong" to it.
The "server" doesn't have to know that, nor do the other "clients".

I think all of them should be doable with the same tools. Therefore, it shouldn't be limited to loudspeaker setups.

This however would require a server to also be a client or something like that

Probably. In the end, both "server" and "client" will be SSR instances, right?
So the only difference will be in the used renderer (which is currently a compile-time decision) and in the runtime settings.

and reproduction_setups being extended by non-loudspeaker renderers, but it could potentially be done.

Could be, but I think it isn't necessary.

The server in return would hold an array of reproduction_setups to drive

It could probably be done that way, but it would again be duplication of information and I think it would be unnecessarily complicated.

Why not just have a plain list of hostnames and ports?
The "server" shouldn't have to care which kind of renderers are used. Or how many loudspeakers they have, if any.

You could even decide to show a loudspeaker setup in the GUI of the "server" which is totally different from the one used in the "clients".
You might have multiple heterogeneous "clients", so there is probably not one universally meaningful setup to be displayed. The user could choose to display whatever setup makes the most sense in a given situation.

(which still wouldn't break with my idea to derive the clients from the reproduction_setup... it just requires the server to be able to do that from n reproduction_setups and keeping them separate).

I don't really understand what you are saying. But it sounds like the "server" has to juggle different reproduction setups, which seems unnecessary to me.

I think that, in addition to "knowing that they are in a networked mode", both client and server will have to know about each other from the configuration file.

Actually, I don't think that a "client" has to know anything about the "server".
It should know a-priori (from its own configuration file) which loudspeakers belong to it.
And it should know (from the same source) on which port to listen for an incoming connection.
That's all!

OTOH, the "server" doesn't really have to know a lot about the "clients" either.
Just their host name and port (which it can probably get from its own configuration file).
It may have some additional but strictly optional information, like e.g. the reproduction setup, but that's just for display purposes.
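To illustrate this division of knowledge, hypothetical configuration files could look like the following (all option names here are invented for illustration; they are not existing SSR configuration keys):

```
# client configuration, e.g. on render node "node1"
NETWORK_MODE = client
LISTEN_PORT = 50001              # port to listen on for the incoming connection
REPRODUCTION_SETUP = setup.asd   # shared setup file, identical on all instances

# server configuration
NETWORK_MODE = server
# plain list of host:port pairs; note that two clients may share one host
CLIENTS = node1:50001 node2:50001 node2:50002
```

Not assuming one fixed standard port keeps it possible to run several clients on the same machine.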

The clients are however mentioned by hostname in the reproduction_setup, so the server could potentially derive them from it, too, without the need of providing them in the configuration file at all (I know this breaks with the separation again,

Exactly!

but would be very convenient, given the assumption of a standard port to communicate with clients).

Oh, I think that's a very limiting assumption!

This way, it would be impossible to run multiple "clients" on the same computer.

For clients however that would not work and they need a server_name in their configuration.

What for?

Does that make sense?

Well it makes sense to some extent, but it seems unnecessarily complicated while at the same time unnecessarily limiting the possible use cases.

And breaking the current XML Schema isn't a problem at all, feel free to do that!

Are you sure?

Yes, if it's worth it.
You shouldn't break it just for the sake of it, though.

I really don't want to get in conflict with a potential current user base, that has to change its setup ;-)

That's life. Things change. We just have to document it properly and tell our users what they have to change.
I don't think that every user has hundreds of reproduction setups. Typically, they have one.

But as I said, the breaking change should be worth it.

@dvzrv commented Mar 27, 2017

Does the "server" just repeatedly try to connect in a certain interval?

Frequent polling. Yes.

So why don't you remove them from there?

Because so far they were the best solution for handling skips on the clients, and they again only require one file to configure the clients instead of n.
I guess a sequence of numbers in the configuration would also work, if you prefer that.

Actually, I don't think that a "client" has to know anything about the "server".

Well, it would be nice to have clients not respond to random gibberish sent to them from other hosts, but I guess the server hostname and port could also be sent by the server, if a response is needed.

@mgeier commented Mar 29, 2017

only require one file to configure the clients instead of n.

I think it is a good idea to have only one reproduction setup file that's shared (e.g. by copying or rsyncing it) between all SSR instances.
I don't think it is bad to have an individual configuration file for each instance. Trying to avoid this seems to make the whole solution more complicated.

I guess a sequence of numbers in the configuration would also work, if you prefer that.

I'm not saying that I prefer that; it was just a suggestion without much thought put into it.

What I do prefer is that the controlling of multiple SSR instances via network has nothing to do with the rendering of a partial loudspeaker setup.

We are always jumping back and forth between those two topics ...

Well, it would be nice to have clients not respond to random gibberish sent to them from other hosts,

That's true, but if you are in a closed network, which you probably are with a distributed WFS setup, there shouldn't really be other hosts, right?

And we could still add a "server" whitelist to the "client" configurations, if desired. But probably that should be done with other tools like firewalls and stuff?

But I think not restricting this stuff makes for more flexible use cases, right?

What if you have several alternative "servers"? Does each client have to know about all of them?

Also, the current IP interface of the SSR is inherently insecure. I prefer to keep it that way, because then people know that it's their problem to deal with security. We shouldn't add just some security features. Either full commitment or nothing. The latter seems to be less effort.

but I guess the server hostname and port could also be sent by the server, if a response is needed.

OK, that raises another interesting question: are you talking about UDP?

If we are using TCP, the connection would be bi-directional and this would be a non-issue.

@umlaeute commented Mar 29, 2017 via email

@dvzrv changed the title from "ssr-wfs: networking ability for large-scale setups" to "networking ability for large-scale setups" on Apr 1, 2017
@dvzrv commented Apr 1, 2017

I don't think it is bad to have an individual configuration file for each instance. Trying to avoid this seems to make the whole solution more complicated.

I guess we have been getting things mixed up in this thread (after all, it actually covers several topics).
I'm not against a configuration file for each client; sorry if it sounded that way. I was only interested in retrieving the information easily, not in skipping the configuration file altogether.

What I do prefer is that the controlling of multiple SSR instances via network has nothing to do with the rendering of a partial loudspeaker setup.

That's what I'm trying to do. I think the jumping back and forth between topics got us a little confused :-/

What if you have several alternative "servers"? Does each client have to know about all of them?

Also, the current IP interface of the SSR is inherently insecure. I prefer to keep it that way, because then people know that it's their problem to deal with security. We shouldn't add just some security features. Either full commitment or nothing. The latter seems to be less effort.

I totally agree that SSR shouldn't be bothered with securing the internet.

Completely with you on that.
Several servers for one client is quite an edge case, though. Then again, everyone's responsible for their own setup. ;-)
What I'm currently working on would not restrict that, however... and who knows, there might be use cases for it after all.

OK, that raises another interesting question: are you talking about UDP?

Yes, most likely OSC (which doesn't necessarily tie it to UDP, as OSC is also TCP-capable).
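Since OSC is transport-agnostic, the UDP-vs-TCP choice only affects delivery, not the message layout. As a hedged illustration of that layout (in practice a library such as liblo or python-osc would handle this; the hand-rolled encoder below only supports float and string arguments):

```python
import struct

def osc_pad(data: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message: address, type tag string, arguments."""
    typetags = ","
    payload = b""
    for arg in args:
        if isinstance(arg, float):
            typetags += "f"
            payload += struct.pack(">f", arg)  # big-endian float32
        elif isinstance(arg, str):
            typetags += "s"
            payload += osc_pad(arg.encode())
        else:
            raise TypeError(f"unsupported argument type: {type(arg)}")
    return osc_pad(address.encode()) + osc_pad(typetags.encode()) + payload
```

The resulting byte string can then be handed to either a UDP datagram or a TCP stream (the latter needs additional framing, e.g. SLIP or length prefixes).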

@mgeier commented Apr 4, 2017

I guess we have been getting things mixed up in this thread

Yes, definitely! You should probably open new issues for the individual topics.

Then we can also discuss the TCP vs UDP question ...
