
Add Nginx Reverse proxy #21

Open

wants to merge 2 commits into master
Conversation

robertcsakany
Collaborator

Add support for an nginx reverse proxy. It tracks every container where the VIRTUAL_HOST env variable is defined and
automatically generates an nginx proxy config for it.

As described in https://github.com/nginx-proxy/nginx-proxy, we use separate containers.
The HTTPS implementation is as documented here: https://medium.com/@francoisromain/host-multiple-websites-with-https-inside-docker-containers-on-a-single-server-18467484ab95

By default xip.io is used to provide subdomains for an IP address. So, for example, the nodered service can be accessed at:

nodered.X.X.X.X.xip.io

where X.X.X.X is the IP address of the IOTstack host.

Any other domain can be used; in that case, replace the VIRTUAL_HOST env variable of the given instance with the corresponding value.

HTTPS can be used, but the xip.io method is not suitable for it. As the linked article describes, any container can be exposed with the HTTPS proto defined, but a real domain has to be set up.
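As a minimal sketch of the pattern (the image tags, service, and port values here are illustrative assumptions, not taken from this PR), a service opts into the proxy simply by defining VIRTUAL_HOST:

```yaml
version: "3"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only access to the docker socket so the generator can watch
      # container start/stop events and regenerate the nginx config
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nodered:
    image: nodered/node-red
    environment:
      # picked up by the proxy; requests for this name are routed here
      - VIRTUAL_HOST=nodered.192.168.1.10.xip.io
      # port the service listens on inside the container
      - VIRTUAL_PORT=1880
```

With this in place, browsing to nodered.192.168.1.10.xip.io on port 80 would be proxied to the nodered container's port 1880.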

@gpongelli

Hi, I have two questions about the Nginx container:

  1. Is it possible to integrate authelia to have 2FA when accessing the other containers?
  2. Do those VIRTUAL_HOST env variables interfere with docker containers' network settings like the following?
.... some container here ...

  a-container:
    container_name: ...
    image: ...
    ....
    networks:
      local-netw:
        ipv4_address: 172.20.2.10  # <-- sets a "static" IP for this container

.... other container ...

networks:
  local-netw:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      config:
        - subnet: 172.20.2.0/16

Is VIRTUAL_HOST used to set a container's IP? Because in my tests, when running "docker-compose up", all the containers' IP addresses change and related containers stop working.

Thanks

@robertcsakany
Collaborator Author

robertcsakany commented Apr 20, 2020

  1. Is it possible to integrate authelia to have 2FA when accessing the other containers?

I don't think there is anything to set in the reverse proxy for SSO. For SSO it only matters that the URL is accessible for login and that a JWT token can be obtained. It has to be set at the application level. There are solutions where the web server performs authentication via the given web token, but I think that's out of scope for this solution.
I think I understand what you want to achieve: you would like to make this accessible to the outside world and put some security in front of all the containers. Am I right?

  2. Do those VIRTUAL_HOST env variables interfere with docker containers' network settings like the following?
.... some container here ...

  a-container:
    container_name: ...
    image: ...
    ....
    networks:
      local-netw:
        ipv4_address: 172.20.2.10  # <-- sets a "static" IP for this container

.... other container ...

networks:
  local-netw:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      config:
        - subnet: 172.20.2.0/16

Is VIRTUAL_HOST used to set a container's IP? Because in my tests, when running "docker-compose up", all the containers' IP addresses change and related containers stop working.

Not at all - only the port has to be accessible on docker's given network, so it depends on the docker-compose settings. The VIRTUAL_HOST variable is a marker for the nginx-gen container to pick up and create a configuration where nginx maps the exposed ports to the given virtual domain.
When an IP address changes, nginx-gen regenerates the config, because it listens to the docker daemon and monitors changes. As long as the docker network DNS is working - meaning it is able to resolve an internal hostname to the given docker IP address - in theory everything is fine. One important thing to mention: if possible, containers should refer to each other via the docker container name. For example: if the mosquitto container name is 'mqtt', use 'mqtt' as the host name in the other containers. The reason is that with some docker settings the domain resolution is not available - meaning the given virtual host name cannot be resolved inside the container. But with default settings it's not an issue. As a matter of fact, I have never used a custom IP address management driver. Can you give me some use cases?
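For example (a sketch; the client service and its environment variable are hypothetical), with the broker's container_name set to mqtt, other containers on the same compose network reach it by that name rather than by IP:

```yaml
services:
  mosquitto:
    container_name: mqtt
    image: eclipse-mosquitto

  some-client:
    image: example/some-client   # hypothetical image
    environment:
      # use the container name as the hostname; compose's embedded DNS
      # resolves it to whatever IP docker assigned on this run
      - BROKER_HOST=mqtt
```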

@Slyke
Collaborator

Slyke commented Apr 20, 2020

This is good; I was actually looking into this last year. I didn't get anywhere near as far as you have, though. My only question is: why use xip.io? We want to try not to rely on internet services (after the initial setup, anyway).

When I did it, I used PiHole as the DNS server and placed my Pi's IP into PiHole's hosts file so that mypi/nodered, or without nginx mypi:1880 would work.

@robertcsakany
Collaborator Author

robertcsakany commented Apr 20, 2020

This is good; I was actually looking into this last year. I didn't get anywhere near as far as you have, though. My only question is: why use xip.io? We want to try not to rely on internet services (after the initial setup, anyway).

When I did it, I used PiHole as the DNS server and placed my Pi's IP into PiHole's hosts file so that mypi/nodered, or without nginx mypi:1880 would work.

For me it was the simplest, because I'm using this only on a local network, and only to avoid having to memorize container ports.
For other solutions you have to manage your own domain - which is possible, of course - maybe that should be part of the setup process. To achieve that, we would have to add some templating option for the env files. Do you have any ideas?

I don't use PiHole because I have 5 Mikrotik routers / APs, so DNS management has to be there. I have a suggestion: the default configuration should depend on the containers' settings. For example: if somebody uses PiHole, then DNS is provided by it and all env files can be generated accordingly. Or we can provide helper scripts which modify the services' (generated) configs with sed - and which can be called from setup.

@gpongelli

  1. Is it possible to integrate authelia to have 2FA when accessing the other containers?

I don't think there is anything to set in the reverse proxy for SSO. For SSO it only matters that the URL is accessible for login and that a JWT token can be obtained. It has to be set at the application level. There are solutions where the web server performs authentication via the given web token, but I think that's out of scope for this solution.
I think I understand what you want to achieve: you would like to make this accessible to the outside world and put some security in front of all the containers. Am I right?

You caught the point! I would like to be able to access all the other containers after a single sign-on / token authentication, to have something more secure than a separate user/password set for each container.

  2. Do those VIRTUAL_HOST env variables interfere with docker containers' network settings like the following?
.... some container here ...

  a-container:
    container_name: ...
    image: ...
    ....
    networks:
      local-netw:
        ipv4_address: 172.20.2.10  # <-- sets a "static" IP for this container

.... other container ...

networks:
  local-netw:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      config:
        - subnet: 172.20.2.0/16

Is VIRTUAL_HOST used to set a container's IP? Because in my tests, when running "docker-compose up", all the containers' IP addresses change and related containers stop working.

Not at all - only the port has to be accessible on docker's given network, so it depends on the docker-compose settings. The VIRTUAL_HOST variable is a marker for the nginx-gen container to pick up and create a configuration where nginx maps the exposed ports to the given virtual domain.

Ok, things are becoming clear... it's good to have this automatic generation.

When an IP address changes, nginx-gen regenerates the config, because it listens to the docker daemon and monitors changes. As long as the docker network DNS is working - meaning it is able to resolve an internal hostname to the given docker IP address - in theory everything is fine. One important thing to mention: if possible, containers should refer to each other via the docker container name. For example: if the mosquitto container name is 'mqtt', use 'mqtt' as the host name in the other containers. The reason is that with some docker settings the domain resolution is not available - meaning the given virtual host name cannot be resolved inside the container. But with default settings it's not an issue. As a matter of fact, I have never used a custom IP address management driver. Can you give me some use cases?

From my point of view, I prefer having a static IP address set in the compose file.
This avoids trouble with container name resolution in all the other containers.
I moved to static IP addresses because, in my docker-compose, I use an openvpn+transmission container that uses PiHole (in another container): openvpn must be able to go outside my LAN using PiHole as DNS, but each time I run "docker-compose up", docker changes the associated IP of all the containers.
This led to an inconsistent PiHole DNS IP in the openvpn container's configuration every time, because it is obtained as an environment variable through docker-compose.yml.
With the piece of code I pasted, openvpn always gets the correct PiHole IP (they both have static IPs, pre-associated in docker-compose.yml) and both of them always work correctly.

I know my way is not so "plug container and play", because some additional configuration has to be done in the generated docker-compose.yml file before running it, but it's the best I could do (starting from 0% knowledge of docker ;) ).

Now, using those nginx containers, will I have issues with this kind of network setup?
I would have no issue putting those nginx containers into my compose file :)

Thanks,

@robertcsakany
Collaborator Author

You caught the point! I would like to be able to access all the other containers after a single sign-on / token authentication, to have something more secure than a separate user/password set for each container.

Okay. Because we have a template in nginx-gen which can be overridden, it may be possible to make the SSO configuration part of the virtual host generation process. But I think that calls for a separate feature request. In the last month I've done OpenID and SAML SSO integration with Keycloak (same goal as authelia), so I think I can check it, but not this week.

From my point of view, I prefer having a static IP address set in the compose file.
This avoids trouble with container name resolution in all the other containers.
I moved to static IP addresses because, in my docker-compose, I use an openvpn+transmission container that uses PiHole (in another container): openvpn must be able to go outside my LAN using PiHole as DNS, but each time I run "docker-compose up", docker changes the associated IP of all the containers.
This led to an inconsistent PiHole DNS IP in the openvpn container's configuration every time, because it is obtained as an environment variable through docker-compose.yml.
With the piece of code I pasted, openvpn always gets the correct PiHole IP (they both have static IPs, pre-associated in docker-compose.yml) and both of them always work correctly.

I know my way is not so "plug container and play", because some additional configuration has to be done in the generated docker-compose.yml file before running it, but it's the best I could do (starting from 0% knowledge of docker ;) ).

Now, using those nginx containers, will I have issues with this kind of network setup?
I would have no issue putting those nginx containers into my compose file :)

As I understand it (maybe I'm wrong), you are using the 172.x.x.x network addresses on your network to access the docker services. I don't think that's best practice. If you want the DuckDNS and OpenVPN ports to be constantly accessible, I recommend using the host machine's IP. If you use 'host' networking mode for OpenVPN and PiHole in docker-compose and expose the required ports, the docker network can stay dynamic, and communication between docker instances can use the internal domain name - which is the container name. We use this approach in production environments without any problem.
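A sketch of that host-networking variant (the image name is just a commonly used one, assumed here for illustration):

```yaml
services:
  pihole:
    image: pihole/pihole
    # binds directly to the host's network interfaces; the container's
    # port 53 is the host's port 53, so no ports: mapping is needed and
    # no dynamic container IP is involved
    network_mode: host
```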

@gpongelli

As I understand it (maybe I'm wrong), you are using the 172.x.x.x network addresses on your network to access the docker services. I don't think that's best practice. If you want the DuckDNS and OpenVPN ports to be constantly accessible, I recommend using the host machine's IP. If you use 'host' networking mode for OpenVPN and PiHole in docker-compose and expose the required ports, the docker network can stay dynamic, and communication between docker instances can use the internal domain name - which is the container name. We use this approach in production environments without any problem.

Honestly, I'm using the bridged network because the first time I opened portainer, all the IOTstack containers had been put on the bridge network.
I use the 'host' network only for homeassistant, which doesn't work in bridge mode.
I have to read the docker networking manual to understand which one best fits my architecture, thanks!

@gpongelli

I've done some reading and now it's clear: by default the bridge network is used, which is why I initially found the containers attached to the bridge network and why I kept it that way.
I also kept the bridge because I installed unbound on the Raspberry Pi and had some issues because many services were using port 53 on my host.

Then I moved to a user-defined bridge network, statically setting all the IPs to resolve the issue described above.

As stated in docker's manual:

User-defined bridges provide automatic DNS resolution between containers.

On a user-defined bridge network, containers can resolve each other by name or alias.

This allows container IPs to be resolved automatically from other containers in the same user-defined bridge network, as shown in their tutorial.

With this mechanism (a user-defined bridge network), you could remove the dependency on the xip.io service, keeping only the container name.
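A minimal sketch of that (the second image name is illustrative): both services join the same user-defined bridge and address each other purely by name, with no ipv4_address pinning:

```yaml
services:
  pihole:
    image: pihole/pihole
    networks:
      - local-netw

  openvpn:
    image: kylemanna/openvpn   # illustrative image
    networks:
      - local-netw
    # inside this container, the name "pihole" resolves to whatever IP
    # docker assigned to the pihole container on this run

networks:
  local-netw:
    driver: bridge
```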

@Slyke
Collaborator

Slyke commented Apr 21, 2020

By the way, you can specify hosts to appear in a docker instance's hosts file by putting this in the compose file:

    extra_hosts:
      router: 192.168.1.1
      mypi: 192.168.1.2
      anotherdevice: 192.168.1.3

When I set PiHole as my DNS, I was able to ping mypi and it worked. This will work with any DNS set in docker. This could be set when the docker-compose.yml file is generated.
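For reference, extra_hosts is also commonly written in list form ("host:ip" strings), and the dns: key points a container at a specific resolver such as PiHole. In this sketch the image name is hypothetical, and the PiHole IP reuses the static address from the earlier compose example:

```yaml
services:
  a-container:
    image: example/app   # hypothetical image
    dns:
      - 172.20.2.10      # use PiHole as this container's resolver
    extra_hosts:
      # entries land in the container's /etc/hosts
      - "router:192.168.1.1"
      - "mypi:192.168.1.2"
```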

@robertcsakany
Collaborator Author

robertcsakany commented Apr 21, 2020

This allows container IPs to be resolved automatically from other containers in the same user-defined bridge network, as shown in their tutorial.

With this mechanism (a user-defined bridge network), you could remove the dependency on the xip.io service, keeping only the container name.

We have to differentiate between name resolution inside the bridge and in the outside world. The xip.io resolution is required so that all subdomains resolve to the same IP, while the requested domain still contains the virtual domain, so nginx can decide which host to use inside the bridge. This is important because I (and a lot of other users) don't use IOTStack as a DNS server. So the whole xip.io (or other domain) resolution is required to differentiate the virtual domains outside. xip.io can be avoided when the local DNS server can resolve the host machine's (the Pi's) domain name. I chose xip.io because it requires no configuration on any DNS server. Maybe we should add an option in setup.sh where the domain can be replaced during generation, similar to how TZ is set.

@gpongelli

Thank you both for this discussion.

I have to study extra_hosts, which seems interesting, because I would like to keep bridge networking on my RPi instead of moving to the 'host' network.

About nginx: if I use it on my RPi in the future, I'll ask you for some help :)

@janitorr

Hi,
I'd really like to have a reverse proxy to ease handling multiple services.
I'm not very experienced with Docker, but I do dabble with some programming in my daily work.
What kind of work/testing/documentation would be required to bring this pretty dormant issue to the release version?

Thanks for your time!

@robertcsakany
Collaborator Author

There are some differences in how containers are initialized. Some of them do not work with Docker's default networking - the port mapping may be on the host directly, which cannot be mixed with other networks - and there are UDP-port-based services. This can lead to situations where IP firewall and routing rules have to be changed on the host machine, which is not a trivial task, and we would like to avoid that type of complexity in IOTStack. When I have time, I will investigate a solution which can be applied to all of our existing containers - I think nginx will be replaced and a load balancer (like traefik or HAProxy) will be used instead, which can handle UDP/TCP ports as well, not only HTTP ports.

@CharlesGodwin

I noticed in file ./templates/nginx-proxy/directoryfix.sh that all chown commands are explicitly using 'pi' for user and group. Would this code be more flexible if it used the current user instead, as done in ./templates/python/directoryfix.sh?

@shayan-ys

Hi, any update on this? It seems you have some git conflicts too.
This is really nice work! 🍺

@tunisiano187

Hi everyone,
wouldn't it be more interesting to have this one?
https://github.com/NginxProxyManager/nginx-proxy-manager
It's web-manageable in a docker... so...

7 participants