Invalid response #211
Comments
I'm on the latest version and I'm having exactly the same issue. Let's Encrypt challenge requests are passed on to the container behind the proxy, which results in a 404 being passed back to LE.
I'm having this too, BUT right after that I get another request that nginx handles correctly, and the validation ends up succeeding despite the "CA marked some of the authorizations as invalid." warning. I don't get what's going on at all. I was already using a version of the container including #192 when I noticed this behavior.
I am running the latest version but the problem still exists. The log on letsencrypt keeps saying "Invalid response" because my API server is handling the request, not the nginx proxy in front. Anything I can try?
I'm facing the same issue. I'm able to go to the link itself but validation fails.
I seem to have solved my issue by removing my certs folder and the container and letting the letsencrypt companion start from scratch. Also make sure you have the vhosts folder mounted on nginx-proxy =)
That only delays the problem until those new certificates need to be renewed again. Any solution yet?
Removing the certificates doesn't help me. It creates empty folders for each domain, then the same validation error. I have a certificate that expires tomorrow; what should I do?
Sounds like a misconfiguration or an outdated container somewhere; one of my own production setups correctly renewed two certificates over the past two weeks, and correctly generated a new one for testing purposes just now.

No LE challenge request is passed to the proxied container anymore. I have no idea why I got that last month on another server; I probably had a configuration issue myself that I don't even remember fixing. Could you tell us more about how you run the nginx-proxy + letsencrypt-companion containers?
I use Docker Cloud, with the following stack:
Unfortunately I am totally unfamiliar both with Docker Cloud and with the single-container approach to nginx-proxy, so I don't think I'll be able to help you troubleshoot much. If it can be of any help, here is my working docker-compose file:

I get the nginx.tmpl file (the exact version I'm using right now is this one), create the nginx-proxy network, then I'm good to go. You can use […]
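For reference, since the compose file itself did not survive in this thread, a minimal setup along the lines of the two projects' READMEs of that era could look like the following sketch. Image names, volume names, the label, and example.com are assumptions, not the commenter's actual configuration:

```bash
# Sketch only: nginx-proxy + letsencrypt companion sharing the certs, vhost.d
# and html volumes, plus one proxied container. Adjust names to your own setup.
docker network create nginx-proxy

docker run -d --name nginx-proxy --network nginx-proxy \
  -p 80:80 -p 443:443 \
  -v certs:/etc/nginx/certs:ro \
  -v vhost:/etc/nginx/vhost.d \
  -v html:/usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  --label com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy \
  jwilder/nginx-proxy

docker run -d --name letsencrypt --network nginx-proxy \
  -v certs:/etc/nginx/certs:rw \
  -v vhost:/etc/nginx/vhost.d \
  -v html:/usr/share/nginx/html \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  jrcs/letsencrypt-nginx-proxy-companion

# A proxied application announces itself through environment variables:
docker run -d --network nginx-proxy \
  -e VIRTUAL_HOST=example.com \
  -e LETSENCRYPT_HOST=example.com \
  -e LETSENCRYPT_EMAIL=admin@example.com \
  nginx:alpine
```

The point several comments above hint at is that /etc/nginx/vhost.d and /usr/share/nginx/html must be the same volumes in both the proxy and the companion; otherwise the challenge files written by the companion are never visible to nginx.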
Just tested it again on a fresh install of Debian 8 and Docker. simp_le self-verification fails, while on my Ubuntu 16.x and 17.x servers it works OK. I'll do more tests later and try to understand why. Verification by LE then proceeds OK, the certificate gets created, and I can browse to my test app.
I just ran into a CA authorization error while performing additional tests. After trashing all four Docker volumes I use in the docker-compose file (conf, vhost, html and certs), CA authorization started working again.
Could it be that the CA authorization doesn't follow redirects? It's trying to access the HTTP URL, but there is a permanent redirect to HTTPS. This could explain why it works the first time but not on renewal, and why it works when we paste the URI into the browser. It's giving timeouts when trying to access it. I'm going to try to change the template to add the acme-challenge location ahead of the redirects.
Still giving a timeout without the redirects.
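One way to see what the CA would hit on port 80 is to follow the redirect chain by hand; example.com and the token below are placeholders:

```bash
# Shows the final status code and URL after following any redirects from the
# plain-HTTP challenge path. A 404 for a made-up token is expected; what matters
# is whether the request times out or ends up on an unexpected host or port.
curl -sIL -o /dev/null \
  -w 'final: %{http_code} %{url_effective}\n' \
  http://example.com/.well-known/acme-challenge/made-up-token
```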
Could it be an issue with IPv6?
I don't know the internals, but I think LE will always prefer validation through IPv4 if available. I have both IPv4 and IPv6 configured on my hosts, and DNS for proxied services resolves to both addresses, but I never saw an IPv6 request from an LE server on any of them.

I would advise against modifying the template, as it would make further issues even harder to troubleshoot for you. Renewals work perfectly fine on my already set up proxy stacks with the vanilla nginx.tmpl, so I still think you have a configuration file somewhere that prevents CA validation, either one of your own or a container-generated one that's stuck in a bad state. Reverting your stack configuration to something closer to the base configuration would give you a clean start. More specifically, change from: […]

to something like […]

Again, I'm not familiar with Docker Cloud; my idea is to use freshly created named volumes to revert all configuration dirs/files to a base state, check if that gets CA validation to work again, and if it does, try adding your own custom configuration files back one by one like this: […]

until you find which one prevents CA validation. Also, are you sure your proxied containers are configured properly? Better check that too.
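A rough shell illustration of that suggestion, since the before/after stack snippets did not survive in this thread (volume and file names below are placeholders):

```bash
# Start from empty named volumes so every generated file is in a known-good state,
# instead of bind-mounting your own (possibly stale) configuration directories.
docker volume create proxy-conf
docker volume create proxy-vhost
docker volume create proxy-html
docker volume create proxy-certs

# Mount these in both containers (e.g. proxy-vhost:/etc/nginx/vhost.d,
# proxy-html:/usr/share/nginx/html, proxy-certs:/etc/nginx/certs), confirm that
# CA validation works again, then copy your custom config files back in one by
# one until you find the one that breaks it, e.g.:
docker cp ./my-custom-vhost.conf nginx-proxy:/etc/nginx/vhost.d/example.com
```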
It was indeed the IPv6! The CA authorization chose IPv6 over IPv4. I remember Docker having some issues with IPv6, which will need some further testing in my setup. After removing the AAAA record for my domain, the connection went successfully through IPv4 and the certificate got renewed.
I think at some point we might have to add a troubleshooting guide to this container. Do you have any insight into why LE chose IPv6 over IPv4 to reach your domain for validation? And into why the nginx container failed to answer the request made over IPv6 properly?
Docker has support only for IPv4 by default; I'll probably just need to enable the dual stack in the daemon at this point. I'll try it later when I can. It makes sense that LE would prefer IPv6 since it's the future; we should be pushing everyone to it as much as possible.
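For the record, enabling dual stack in the daemon would look roughly like this. It is untested in this thread, the next reply suggests it may not even be necessary, and the IPv6 prefix is a documentation placeholder:

```bash
# Sketch: enable IPv6 in the Docker daemon. Use a prefix that is actually
# routed to the host instead of the 2001:db8::/32 documentation range.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
EOF
sudo systemctl restart docker
```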
I did not enable the dual stack in the Docker daemon, and yet my proxied services and the ACME challenges are reachable both through IPv4 and IPv6. This might be related to the fact that the containers are connected to a user-created bridge network (the nginx-proxy network in my docker-compose file), not to Docker's default bridge network.
Did you have to do anything for IPv6 or does it work by default?
I did not configure anything specific on the Docker side; the only IPv6-related config I did on each host was setting up the correct (static) addresses on the real public-facing interfaces. The command I use to create my Docker network is the following: […]

The results of […]: local IPv4 address, link-local IPv6, and that's it.

Edit: by the way, you did put your finger on something else. When a certificate is present, no matter whether it is valid or not, the nginx.tmpl will add a 302 redirect to HTTPS. That means that if, for one reason or another, one of your certificates expires, you won't be able to renew it without deleting the old one first, as the CA validation will be redirected to HTTPS with an expired certificate and will fail.
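Regarding the network-creation command mentioned at the start of the previous comment (the command itself was lost from the thread), creating such a user-defined bridge network generally looks like this; the network name matches the one used elsewhere in the thread, and the subnet and the second network name are purely illustrative:

```bash
# Plain user-defined bridge, as referenced in the docker-compose file above:
docker network create nginx-proxy

# Only if you also want IPv6 *inside* the Docker network (requires
# "ipv6": true in the daemon configuration):
docker network create --ipv6 --subnet fd00:cafe::/64 nginx-proxy-v6
```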
The Let's Encrypt CA, Boulder, does follow redirects on HTTP-01 challenges (up to a limit of 10).
The IPv4-first preference was true historically, but changed recently: the presence of an AAAA record for the domain is now used to infer that IPv6 should be attempted first.

If an HTTP-01 challenge request received on port 80 gets redirected to port 443, Boulder will ignore certificate errors to prevent this sort of configuration from breaking validation. It should be OK if I'm understanding correctly (my docker-fu is extremely weak). Hope these clarifications were helpful!
They were extremely helpful, thank you @cpu !
For anyone getting here in need of troubleshooting, here's how you know if the problem is IPv6. If you run the container with debug on ([…]), notice that […]. If you have an AAAA DNS record, make sure the address is reachable with a tester such as http://ipv6-test.com/validate.php.
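A quick way to check both points from the command line (example.com and the token are placeholders):

```bash
# Which addresses will the CA see for the domain?
dig +short A    example.com
dig +short AAAA example.com

# If an AAAA record exists, the challenge path must answer over IPv6 too,
# since Let's Encrypt now prefers IPv6 for dual-homed hosts:
curl -6 -I http://example.com/.well-known/acme-challenge/made-up-token
curl -4 -I http://example.com/.well-known/acme-challenge/made-up-token
```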
I had a similar issue and fixed it by downgrading to […]
I had a similar issue but it was not IPv6 related. After many hours of trying to resolve it, I came to the following solution: Let's Encrypt kept receiving "503 Service Temporarily Unavailable" as a response to the acme-challenge, and I got the same message in the browser. The trick to resolve this was to remove […]. After resolving this, the responses to the acme-challenges were "403 Forbidden" errors from nginx instead. The files in […]. I think it's really weird that I had to make the directory readable for "other" users, as the directory and all files are owned by […]. But I had the same problem with some static sites that I'm running by just starting an nginx container and mounting […]. The files are readable from inside the container, but nginx won't serve them.
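The exact paths were lost from the comment above; as a sketch, assuming the usual webroot of /usr/share/nginx/html, the permission check and fix described there would look something like:

```bash
# Verify that the nginx worker (an "other" user relative to the files' owner)
# can traverse the directories and read the challenge files:
ls -ld /usr/share/nginx/html/.well-known /usr/share/nginx/html/.well-known/acme-challenge
ls -l  /usr/share/nginx/html/.well-known/acme-challenge

# Grant read access to files and traverse access to directories for "other":
chmod -R o+rX /usr/share/nginx/html
```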
I removed the AAAA record from my domain records and now I'm getting certificates from Let's Encrypt. A month ago I had another server on which I had set an AAAA record and I got my certificates from Let's Encrypt. There is some random magic happening. But at this moment I still get these error messages before I eventually receive the certificates:
The Let's Encrypt validation server was changed to prefer IPv6 for dual-homed hosts just over one month ago: https://community.letsencrypt.org/t/preferring-ipv6-for-challenge-validation-of-dual-homed-hosts/34774 No magic in this case 🐰 🎩 ✨
Closing issue due to inactivity.
Certificates are not renewing. I get the error: CA marked some of the authorizations as invalid.
When I look at the logs, I see that the result is unexpected by Let's Encrypt. When I look at my custom server behind the nginx proxy, I can see incoming requests for .well-known/acme-challenge.
This should not happen, right? Nginx should handle the .well-known/acme-challenge request and not pass it to the server behind nginx. How can I prevent this and let nginx serve the .well-known/acme-challenge so my certificates renew automatically?
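For completeness, one way to check whether the generated configuration actually intercepts the challenge path before it can reach the backend; the container name nginx-proxy is an assumption, adjust it to your setup:

```bash
# Look for the acme-challenge location block in the generated proxy config.
docker exec nginx-proxy grep -r -A 6 'acme-challenge' /etc/nginx/conf.d /etc/nginx/vhost.d

# Expected: a location block (exact contents vary by companion version) along
# the lines of
#   location /.well-known/acme-challenge/ {
#       auth_basic off;
#       allow all;
#       root /usr/share/nginx/html;
#       try_files $uri =404;
#   }
# If nothing matches, the vhost.d and html volumes are most likely not shared
# between the proxy and the companion, and the request falls through to the
# proxied backend exactly as described above.
```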