Let's Encrypt has turned on stricter validation requirements (letsencrypt.org)
469 points by mmoez on Feb 19, 2020 | 137 comments



The blog post is probably more understandable and relevant for readers here:

https://letsencrypt.org/2020/02/19/multi-perspective-validat...

Maybe we can get the link changed?


The current link [1] is a lot more concise and informative (specifically, what users should do) IMO. The blog post expands a little on BGP hijacking but honestly it could be summarized in one short paragraph.

[1] https://community.letsencrypt.org/t/acme-v1-v2-validating-ch...


I don't see how that less technical, fluffier post is better for readers of this website.


I'm fine with keeping the original link. But the original post doesn't actually explain why this is happening, while the suggested replacement explains the problem.


It doesn't require 3rd party javascript to read?


Does anyone know of a “set it and forget it” alternative to Let’s Encrypt?

I’m all for making things more secure, but they’ve broken all of my certs in the last 12 months (in different ways, at different times), and I’m sick of it.


Regardless of what you end up using you need to have your own monitoring if you care about availability.

My own monitoring always alerts if certificate chains don't validate, HTTP->HTTPS redirection is dead, or if the certificate is due to expire in less than 10 days.

The various tools for interacting with Let's Encrypt might fail sometimes, but if you have monitoring you can fix them up as required - and without monitoring you'll be in trouble regardless of who you use.
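For anyone who wants a starting point, here's a minimal sketch of such a check in plain shell (untested as written; the hostname, the 10-day window and the alert wiring are placeholders to adapt):

  #!/bin/sh
  # External cert monitoring: chain validation, expiry window,
  # and HTTP->HTTPS redirection. Assumes openssl and curl.
  HOST="example.com"

  # 1. Chain must validate; -verify_return_error fails the
  #    handshake on any verification problem.
  echo | openssl s_client -connect "$HOST:443" -servername "$HOST" \
      -verify_return_error >/dev/null 2>&1 \
    || echo "ALERT: chain for $HOST does not validate"

  # 2. Cert must be valid for at least 10 more days (864000 seconds).
  echo | openssl s_client -connect "$HOST:443" -servername "$HOST" \
      2>/dev/null | openssl x509 -noout -checkend 864000 >/dev/null \
    || echo "ALERT: cert for $HOST expires in under 10 days"

  # 3. Plain HTTP must redirect to HTTPS.
  case "$(curl -s -o /dev/null -w '%{redirect_url}' "http://$HOST/")" in
    https://*) : ;;
    *) echo "ALERT: no HTTP->HTTPS redirect on $HOST" ;;
  esac

Pipe the echoes into whatever pager or alerting you already use.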


Another thing to consider if availability is crucial for you would be OCSP Stapling and accompanying monitoring for the stapled result.

OCSP out of the box is a call back to the issuing CA to verify that your certificate is still valid. Let's Encrypt, like a lot of popular CAs, implements this by having a CDN answer all live OCSP queries with bulk-produced generic "This is still fine" answers for all the still-good certificates. But that means for any clients which check OCSP (some browsers do), that CDN is now on your critical path.

You can instead have your web server obtain (and periodically refresh) OCSP answers and "staple" them to its certificate when it answers HTTPS connections so that CDN isn't on your critical path any more.

However, some popular servers (most notably Apache HTTPD) do such a bad job of implementing OCSP Stapling that you're more likely to destroy your availability than improve it by enabling stapling (this is one of the things IIS actually gets right for a change), so make sure you understand what you're getting into. You will also need monitoring of the stapled response, because now if it's bad that might be a problem with your systems that you need to fix - whereas if Let's Encrypt OCSP is broken for the world you can be sure somebody else's on-call engineers are wrestling with it.
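If you do turn stapling on, checking the staple itself is cheap. A sketch (the hostname is a placeholder; openssl assumed):

  # -status asks the server to staple; inspect what came back.
  STAPLE=$(echo | openssl s_client -connect example.com:443 \
      -servername example.com -status 2>/dev/null)
  case "$STAPLE" in
    *"Cert Status: good"*) : ;;  # healthy staple
    *"no response sent"*) echo "ALERT: server is not stapling" ;;
    *) echo "ALERT: stapled OCSP response is not 'good'" ;;
  esac

This catches the failure mode described above, where a server staples a bad or error response it got from the CA rather than a good one.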


Completely agree. The emails you get from LE are nice, but often not enough and hard to manage at scale. I created Certera to help manage LE at scale (helps with rate limits, has monitoring and alerting of expiring certificates as well as failure to renew), also helps with key storage and rotation. I'm hoping to keep iterating and adding more features and functionality. Think of it as PKI for LE certs.

https://docs.certera.io


Same here. I had them running for a small informational website, and they took the site down for several days via surprise cert problems. I may switch to a paid provider.


If you give an email address to your LE client you should get a few emails from LE advising you when a cert is due to expire, giving you some time to get it sorted before your old cert expires.

Edit: personally I also have a script that runs every day and checks the validity of all the certs in my live directory and pings me when they are due to expire as a second measure.
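The core of that kind of daily check can be as small as this (sketch; assumes certbot's default live directory layout and a working mail setup, both of which are placeholders):

  #!/bin/sh
  # Warn on any live cert expiring within 10 days (864000 seconds).
  for CERT in /etc/letsencrypt/live/*/cert.pem; do
    openssl x509 -noout -checkend 864000 -in "$CERT" >/dev/null \
      || echo "ALERT: $CERT expires within 10 days" \
         | mail -s "cert expiry warning" you@example.com
  done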


In most of the LE/ACME breakages I've seen, the issue was that something broke in certbot (or another client) when rotating in the new cert, not that it was unable to get a new cert issued. LE won't email you about that.


Which is why I monitor the live certs' expiry dates as well. But that's a bug with the client rather than with Let's Encrypt.


The big cloud services offer solutions for that, although with some caveats. AWS for example creates a cert free of charge with automatic renewal, but it won't give out any private keys. So TLS is terminated at their load balancer, which, to my knowledge, you would have to use. I just hope Amazon is too rich to conduct widespread industrial espionage.

I actually don't know if that works with domains not managed by them, but I believe so.

Sadly, certs from big CAs are pretty expensive these days... Let's Encrypt is a really awesome service to counter that.


>I actually don't know if that works with domains not managed by them, but I believe so.

Yeah, you can use domains not managed by them; you just have to add a TXT record given by them onto the DNS for the domain to validate that you have control of it.


I haven't used them in production (only in my toy projects) but I've heard decent things about this: https://www.buypass.com/ssl/products/go-ssl-campaign

They use the same ACME system as LE, so whether it's a viable alternative depends on what failed for you (if certbot failed, for example, then it might have failed using Buypass in the same way as it would have failed under LE).


I wonder how it broke for you? I've set it up once and it has been running off a cron job for months now (ACMEv2 + DNS auth plugin).


> but they’ve broken all of my certs...

What happened? Was it their fault or yours?


Buy a regular certificate.

For example you could buy a Sectigo (previously Comodo) certificate for 2 years for $17 or so. A wildcard for 2 years is around $100.

That's cheaper than what a broken certificate will cost you.


What do you use to automatically rotate the certificates from Sectigo? This doesn't seem to be a set it and forget it solution.

Also, with Sectigo you're more likely to get an actual broken certificate. They misissued nearly every certificate starting from 2002 until late 2019 [1].

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1593776


We rotate them manually once every two years. It works. No need for certbot. Simpler infrastructure.

Sure, fair point about the misissuance, but the alternatives are worse, with possibly leaked private keys.


How is that simpler? That seems like it doesn't scale and is incredibly error-prone.

If you only have one certificate you may get away with it. But once you have hundreds or thousands, this is absolutely going to break down due to the human factor.

And even if you do have a single (or few) certificate(s), there are other factors that are going to complicate maintaining this system:

  * What if a certificate needs to be revoked by your CA? Generally CAs are obligated to revoke certificates within tight deadlines (ex. 24hr for key compromise). That doesn't give a human a lot of time to replace the certificate.
  * What's going to happen when 2-year certs are no longer available? Ballot SC-22 failed, but it would've reduced certificate lifetimes to 1 year. Some CAs are moving in this direction anyway, and it's worth noting that Sectigo supported this ballot.
  * What happens when the person responsible for renewing them leaves the company and forgets to hand-off the responsibility?
I almost see the infrequency of certificate rotation as a negative since it means the process is infrequently tested and easy to forget about.

Sure, tools like certbot can break, but if you know that it's renewing certificates 30 days out then set up alerting for whenever a certificate expires in less than 30 days. You should have this alerting anyway in case the human responsible for manual rotation forgets.

If you ended up in a state where you were serving an expired certificate then the key issue is your alerting.


And what timing! Just today it was announced that certs trusted by Safari and issued after Sep. 1st must have a lifetime of at most 1 year [1]

[1] https://twitter.com/chosensecurity/status/123025334823601357...


Absolutely not disputing what you say.

However, 99% of people have no more than a handful - especially if you have wildcard certificates. And the incidental complexity of running certbot (even without the validation changes) is not worth it.

You are the perfect usecase for certbot. The rest of us aren't.

I'm not sure what you mean by the process being able to break. The way to use certificates is through haproxy/nginx/apache, which are definitely more tested and stable than certbot. Half the internet still uses them, and they support much more legacy than LE.

Letsencrypt was disruptive because it was free. It was not disruptive because of certbot.


If you rotated them manually every week, then why not. But it's costly, not scalable, prone to errors, etc.

There is probably a list of sites which forgot that there was a process. I saw that happen at Crédit Lyonnais (a French bank). On a Saturday night.


I’ve had a pretty terrible experience with Sectigo/Comodo over the last few months, especially through their acquisition. At the moment, I have certificates that are managed in two different portals, one for their previous brand and one for their new brand. Customer support told me that there is no migration path to manage my certs all in one place.

We are going to repurchase all of our certs from Digicert this year, with which I’ve had a much better experience. They are well integrated with Azure Key Vault, which allows us to automate certificate renewal, and our AKS clusters will automatically get the new certs without us doing anything.


Sure - I will take temporary usability issues (as Comodo transitioned into Sectigo) over fundamental security issues.

https://www.theregister.co.uk/2018/03/01/trustico_digicert_s...

If you are using your cloud provider's certificates (built into the LB, etc.), then this is a moot point. You don't even need Let's Encrypt then.


Those prices look promising. I can't seem to find those prices on their webpage [1]. All I'm seeing is 359/719 USD a year for a wildcard certificate.

Am I looking in the wrong place?

[1] https://www.comodoca.com/ssl-certificate-comparison?key5sk1=...



This is a good start...

But I think it would be far better for them to focus on alerting webmasters if someone does manage to get a new certificate issued for a domain before the old one expires.

Certbot should reference the old certificate when doing a renewal. If someone registers a new certificate while an old one is valid and without referencing the old one, the owner of the old certificate should be notified loudly (sms, different-domain email, etc). Same if they register a certificate through a different provider.

Today all of the above is possible with certificate transparency logs, but nobody looks in them, so they're useless.


>Today all of the above is possible with certificate transparency logs, but nobody looks in them, so they're useless.

I check mine once or twice a month manually, but it is pretty trivial to monitor it automatically as well, e.g. there are APIs for crt.sh or they even offer direct public read access to their database.

I believe certificate users should remain responsible for their own monitoring. Alerting as you say would be very annoying, since you couldn't preemptively replace certs without getting alerted unnecessarily, and it would divert Let's Encrypt developer resources away from more useful projects.
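For anyone who wants to automate it, crt.sh's JSON output makes a cron-able check nearly a one-liner (sketch; %25 is the URL-encoded % wildcard, and jq is assumed to be installed):

  # List the most recently logged certs for *.example.com;
  # diff or eyeball this against what you actually issued.
  curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
    | jq -r '.[] | "\(.entry_timestamp)  \(.issuer_name)  \(.name_value)"' \
    | sort -r | head -20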


I built this project to make such automated monitoring easier:

https://ctadvisor.lolware.net


Love the site, great sense of humor.

Wish you all the best with this project.


Many thanks! And thank you to the people that signed up to try it out.


Seems like a great solution, I will test it for a few domains.

Multi domain registration could be nicer ;)


https://developers.facebook.com/tools/ct/

You can check your domains there or subscribe for email notifications.


> very annoying since you couldn't preemptively replace certs

You would sign your request for a new cert with the private key of your old cert, proving you are the same person, which would suppress the alert.

Obviously the key provisioning tooling would do that automatically, so you needn't do it manually.


Maybe you actually need the security, but a lot of people just got certificates to avoid browser warnings. Saying every non-technical guy with a blog they update once a year should set up their own monitoring is a ridiculous ask.


Many LetsEncrypt users are one-man shops or side project owners, they don't WANT another burden like monitoring certificates that doesn't add to the bottom line (yes, I know a non-working certificate will be very painful for a side project).


How can someone "reference" an old certificate with CT data? CT logs only contain precerts and certs, right?

I suppose one could compare the public key of two leaf certificates, but reusing the private key is also a bad practice.

From the CA end, it could check whether the certificate is requested from the same account key. One can even use a DNS record to publish the account keys (CAA + TLSA).


For readers interested in understanding the technical details about potential BGP attacks on domain validation, which serve as a motivation for Let's Encrypt's multi-perspective validation deployment, see the following paper from USENIX Security: https://www.usenix.org/conference/usenixsecurity18/presentat...


And for background on the multiple perspectives validation approach:

http://www.cs.cmu.edu/~dga/papers/perspectives-usenix2008.pd...


This was excellent. Thanks for sharing.

Think about the contents of that video/PDF for a second. The guys/gals doing that bit of research at the university are clearly smart, but imagine what a state-level actor can do.


I don't see how this resolves BGP hijacking attacks? If I announce a more specific route all four locations should still land on my hijacked network... Or is this trying to race the BGP propagation?


It doesn't "resolve" BGP attacks. The point is an attacker would have to pull off three or four successful attacks at once, which is harder than pulling off just one, especially if they hope to go unnoticed.


Or, of course, for smaller targets the attacker may just be much closer to the target than to any of the multiple viewpoints, in which case this mitigation makes no difference to them whatsoever.

We shall see how well this works in practice. Assuming it's relatively cheap it's harmless to at least try.


If the domain is hosted on a /24, then more specific routes will be filtered out by default in most ASes. Also, with the increasing adoption of RPKI, in conjunction with its maxlength option, more specific attacks will be a challenge for adversaries.


If you announce a matching /24 it will fall back to shortest AS path. If the original network has a long AS path, you can still hijack from a moderately well connected network.


I may be naive, but it seems like it might be more secure if the first step was to deploy a self-signed cert on the server; step 2, give Let's Encrypt the public key of the self-signed cert so Let's Encrypt can validate who you are; then proceed with Let's Encrypt's regular validation process, obviously replacing your self-signed cert with the one issued by Let's Encrypt at the end.


Wouldn't an attacker be able to create a self-signed cert just as easily?


Self-signed. Yes, an attacker would not ordinarily find this harder to pass than the http-01 challenge today. Validation using this approach was method 3.2.2.4.9 ("Test Certificate") and is no longer permitted for new issuance under current Baseline Requirements.

Let's Encrypt offers three ACME methods which implement 3.2.2.4.6 ("Agreed Upon Change to Website"), 3.2.2.4.7 ("DNS Change") and 3.2.2.4.10 ("TLS Using a Random Number").


> 3.2.2.4.9
> Baseline Requirements

Where can I find these details? Sorry if I'm being a bit dense here.


The CA/Browser Forum publishes the Baseline Requirements on their web site:

https://cabforum.org/baseline-requirements-documents/

In recent years the BRs have used the RFC 3647 structure. This RFC gives an outline for how to write policy documents for PKIX (X.509 Public Key Infrastructure for the Internet). Rather than wrestle with each organisation having its own preferred way to organise much the same information, the trend is to require RFC 3647, so you know, for example, that the stuff about names will be in section 3.

The RFC 3647 structure doesn't break down as far as 3.2.2.4 but 3.2.2 is where people explain how they're going to validate organisation names, and so in the Baseline Requirements 3.2.2.4 is where the "Ten Blessed Methods" are described, the authorised means by which public CAs can determine if the name you want a certificate for is really yours.


Thanks for sharing that, friend. Appreciated.


Indeed, the self-signed cert idea doesn't work in this context.


Sounds like the TLS-SNI-01 challenge that had a fatal flaw in some circumstances: https://community.letsencrypt.org/t/2018-01-09-issue-with-tl...


The problem with tls-sni-01 is that it assumed nobody would be crazy enough to let you configure their HTTP-only web server to answer HTTPS requests for their names. So logically if a server answers HTTPS requests for a name, on an IP address that DNS says is the right address for that name, that must be the right server, no?

But it turns out cheap bulk hosting sites, especially using Apache HTTPD often did this because it worked fine by default.

The symptom for ordinary users would be you try to visit https://cat-videos.example/ and it gives a certificate error saying the site has a certificate only for aaa-microwave-repairs.example do you want to continue? If you say "Yes" you get an error page. Eventually you remember it was http://cat-videos.example/ no need for the 's' and that works. Weird but ultimately harmless.

What has happened is the Microwave repairs people paid for working HTTPS, with a valid certificate for their name, the Cat Video people didn't bother. But both set the bulk hosting site as the correct IP address for their servers.

Now, when you connect to that IP address and ask for cat-videos.example using SNI, the remote server should go "Er, no?" and you just get an error. But Apache's default behaviour instead figures you want the default web site and default certificate, which will typically be alphabetically first on that server.

This destroys the security assumptions for tls-sni-01, because "it's safe unless people use cheap bulk hosting" is essentially identical to "it's not safe" and getting Apache to fix things was too late. Bad guys could sign up for a bulk host used by their target for non-TLS sites, and add a bogus site named like aaaaaaa.bad-guys.example and use this to attack the target with tls-sni-01 challenges.

So, the replacement ACME challenge doesn't rely on SNI it uses ALPN instead. Also some people actually tested to check that Apache isn't also dumb enough to go "Um, I don't recognise this ALPN, I guess that means I should press on anyway and cause breakage" which fortunately it is not.
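You can still observe the underlying default-vhost behaviour from a shell today (sketch; shared.example stands in for some bulk host). Ask for a name the server has no vhost for and see whose certificate comes back:

  # A strict server refuses the handshake for an unknown SNI name;
  # a permissive one hands out its default (often alphabetically
  # first) certificate - the property tls-sni-01 relied on not existing.
  echo | openssl s_client -connect shared.example:443 \
      -servername no-such-site.invalid 2>/dev/null \
    | openssl x509 -noout -subject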


This kind of behaviour also enables DNS rebinding attacks. An nginx with a default_server returning "403 Get Lost" helps. (As does only allowing TLS).


Excellent explainer, thanks.


Wait... How does letsencrypt know who is giving them the public key cert?


Presumably they can then confirm that the public key matches the certificate that's currently on the webpage.

Unless someone successfully performs a MITM attack on Let's Encrypt, but then all bets are off.


Given that this article is about mitigating the risk of someone doing precisely that, I don't think "all bets are off" is a good position to take on that scenario.

To summarize TFA: Let's Encrypt is now verifying domain ownership from multiple data centers. The idea being that if someone tries to MITM the verification process (through BGP hijacking or whatever), it's much harder to do that across the entire internet and go unnoticed than it is to do it on just one network path.


I've also received a notification email about my outdated acme client, thanks!


We'll see how it works in practice. After reading the intro to what they're doing I think I'll update a few certs before they expire to make sure it works, but from what I've read I don't anticipate any problems.

LE has been great for me.


FYI, last week, before I unblocked my firewall, when I tried to get certs from LE, it knocked some 5 or 6 times from different IPs. I checked some ASNs; one was from Amazon, the other one I don't remember.


TL;dr - can this be fixed by updating to the latest certbot package?


You don't need any update.


It's already painful to get Let's Encrypt set up in a web farm scenario. This won't make it easier.


It's not going to make it easier, but if you can already answer challenges - then it won't make it any harder.

You just don't stop answering challenges until the cert gets issued.

I don't know why anyone would have stopped answering challenges after the first request anyway. Surely the default assumption should be that everyone has network & processing failures, and hey maybe ACME would need to check again if something broke before the cert was signed.


You should check out https://community.letsencrypt.org and ask for help there!


How does it make it any more difficult?


I hear this often and also had issues with this. Shameless plug for my project, Certera. Solves that problem and much more.

https://docs.certera.io


If you or your company has enough money for a web farm, you should just buy your own cert.


What? Web farms are automatically expensive? A web farm can literally be two Raspberry Pis behind a simple load balancer. What if this isn't "my company"? What if I run a web farm at home for fun?

Besides, your argument could be used to justify any price hike! "If you can afford X, then you can afford Y!"


Why doesn't your LB terminate TLS?


Does this require the user to make any changes?

I run a simple static website served with Nginx. I would like to know if this change has any impact on me.


No changes are required, unless you have some kind of IP restriction on who your server may be contacted by - which your simple static site probably doesn't.


The answer is in the article.


The first paragraph already highlights the issue I have with it. I don't want to loosen my firewall; some countries and their IP ranges just need not access my server.


I have my certs issued to a VM that is not accessible via HTTP(S) anyway, just DNS. It runs a small DNS server which answers _acme-challenge.* requests "forwarded" via CNAME. As that is the only part of it visible to the rest of the internet, it has a very small security surface to worry about. It then pushes out the keys & certs to the places that need them (directly in the case of local machines, less directly for some others).

I set that up originally for a wildcard, as HTTP validation is not supported for them and I didn't fancy mucking about with automating updates to the main bind instances, but it is convenient for the others too and will be unaffected by these changes.

Not a perfect solution for everyone of course - copies of all the keys are in one place for one thing, and having it all in one place could be bad or good for maintenance (single point of failure, but single service to monitor & maintain) - but worth considering if you expect problems with HTTP validation.

> some countries and their IP ranges just need not access my server

I'm guessing that they won't be using locations that are commonly blocked in this way anyway. And if they do happen to use one, you may be fine as only two of the three external checks need to be responded to.
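For the curious, verifying a setup like mine end-to-end is just two dig queries (all names below are placeholders for my real zones):

  # The public zone hands back a CNAME into the cert VM's zone...
  dig +short CNAME _acme-challenge.www.example.com
  # -> _acme-challenge.www.acme.example.net.

  # ...and the VM's small DNS server answers the TXT directly.
  dig +short TXT _acme-challenge.www.acme.example.net @ns-acme.example.net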


You don't even need the DNS to be local and expose anything at all to the internet. You can just as well push the TXT records to your usual DNS through your registrar's API. This generally just works.


Agree, though I would rather state that it "just" works. You need to watch for DNS propagation, particularly if your DNS provider has some sort of CDN-like features (which is the case, for instance, with OVH), which makes the timing of the propagation non-deterministic and non-observable (you will likely be served by a different DNS server than the Let's Encrypt bot, so how can you check it has propagated?).

Let's Encrypt will only check the DNS entry once; if it doesn't find it, it fails the authentication process and doesn't retry (contrary to the specs).
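One way to take the guesswork out is to skip caches and poll every advertised authoritative server before letting the client proceed (sketch; example.com is a placeholder). It's not a hard guarantee with anycast DNS, but if all the authoritatives answer, you're as "propagated" as you can verify from the outside:

  for NS in $(dig +short NS example.com); do
    echo "$NS: $(dig +short TXT _acme-challenge.example.com @"$NS")"
  done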


I still run my own DNS servers, so for me "your registrar's API" is directly poking bind and I only have myself to blame if anything goes wrong! Keeping the little DNS service for the certs means if I break that the rest stays sane. I also don't have to worry about delays transferring changes to zones if there are any intermittent network issues.


I've read about this a few times now, but have been unable to find a good resource on how to set such a VM up. Do you have a link or resource somewhere so that I can at least get started?

Still inexperienced with Let's Encrypt, but I know enough that I cannot use the standard way.


I think what they're doing here is running a DNS server on a VM that only answers requests for specific ACME challenge name prefixes. Then at the actual domain, say example.com, they may ask for cool.example.com, but it's a CNAME record that directs to something the DNS server is authoritative over, say xyz.io or something along those lines.

Therefore ACME's DNS request for checking via DNS validation is answered directly by the tiny DNS server, thanks to the CNAME record directing traffic to it.

I've never done this myself either, but I think it would be along those lines.


There are a fair few examples out there for hooking acme-dns in as the DNS server with the standard certbot tool. So search for acme-dns for more info.

That isn't actually how I do it (I'm using dehydrated with my own hook script to update a bind9 instance), but it probably would be if I started again from scratch.


> I've read about this a few times now, but have been unable to find a good resource on how to set such a VM up.

You first set up a VM and set up your favourite authoritative DNS software on it: popular choices are ISC's BIND and NLnet's NSD. Either will do. Call it (e.g.) ns-dnsauth.mydomain.com, which is Internet accessible only on udp/53 and tcp/53.

You have to then configure that DNS server to serve the domain (e.g.) dnsauth.mydomain.com.

Next you configure the DNS server software to allow dynamic updates. For ISC BIND, you can set up (crypto) keys and use the nsupdate(1) utility:

* https://www.zytrax.com/books/dns/ch7/xfer.html#allow-update

* https://dan.langille.org/2017/05/31/creating-a-txt-only-nsup...

Point your public/external DNS records to your delegated-auth server by having (say) _acme-challenge.www.mydomain.com be a CNAME to (say) _acme-challenge.www.dnsauth.... LE will follow the CNAME and try to do the verification against the record in dnsauth sub-domain that lives on the ns-dnsauth VM.

Then you have your LE/ACME client(s) run a hook script to publish (and cleanup) the dns-01 TXT challenge records:

* https://dan.langille.org/2017/07/04/acme-sh-getting-free-ssl...

* https://github.com/dehydrated-io/dehydrated/wiki/example-dns...

The LE client goes to the LE API, gets a verification token/nonce, executes the hook script to push the TXT record to ns-dnsauth, the LE folks verify the record, the LE client (ideally) cleans up the TXT record, receives the cert from the LE API, puts it in the correct path, and restarts your (web) service(s).
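The hook itself can be tiny. A sketch using nsupdate with a TSIG key (the key path, server name and argument order here are assumptions; dehydrated and acme.sh pass the domain and token slightly differently, so adapt):

  #!/bin/sh
  # dns-01 deploy hook: push the validation token as a TXT record.
  DOMAIN="$1"; TOKEN="$2"

  {
    echo "server ns-dnsauth.mydomain.com"
    echo "update add _acme-challenge.${DOMAIN}. 60 IN TXT \"${TOKEN}\""
    echo "send"
  } | nsupdate -k /etc/dnsauth.key

The cleanup hook is the same with "update delete".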

Someone actually wrote a limited-functionality DNS server that allows for pushing of records via a REST API for this purpose:

* https://github.com/joohoi/acme-dns

This way the 'heavier' BIND/NSD software doesn't have to be used, as those have more features than are needed.


For publicly-accessible infrastructure:

4-6 instances of pdns authoritative for a domain, and pdns recursor running locally for each box. And Cloudflare free tier while revenue can't justify rolling-out Varnish and other locally-deployed capacity/DDoS mitigations.

It may also be a better idea to push DNS updates via configuration management, or driven from something like Envoy, so there's a history and a single source of truth (SSOT) to point to, rather than multiple people doing manual tinkering, which is a labor-intensive, antiquated approach.


Is "pdns" PowerDNS?

* https://doc.powerdns.com/authoritative/dnsupdate.html

If we're just talking about issuing certs, I don't know why one needs 4-6 instances for serving the dnsauth sub-domain.


And how does filtering on the IP range of a country prevent traffic from that country? Or are you also blocking every known proxy and VPN service? Are you sure that your strict firewall even provides the security you wish it did?

Either way, Let's Encrypt will probably have a somewhat fixed pool of these challenge validation IP addresses/ranges, so adding those to a whitelist should probably work with a strict firewall.


I'm pretty sure that even a 50% mitigation is still valuable, and a remarkable number of attacks are completely unsophisticated. Having gone over server logs specifically looking at this sort of thing, I can tell you that a lot of attacks absolutely do appear to originate in countries that are vanishingly unlikely to have legitimate interest in the site (this was a small local business in the US, with logs showing a lot of `/wp-admin`-type attempts from Asia/Russia). So yes, blindly banning unexpected countries is very likely to reduce the number of attacks that you get, and if nothing else reduce log spam. It's a bit like changing your SSH port: easy to bypass, but it still reduces attacks by quite a bit.


>but it still reduces attacks by quite a bit.

But that's the wrong metric to optimize for. Cutting down the number of "attacks" by 90% doesn't improve your security. If your server has a vulnerability that can be picked up by script-kiddie scanners, changing the default port only delays the inevitable.


I don't agree with this at all - 'delaying the inevitable' is 90% of security; it's the basic assumption when practicing defense in depth.

You scan for and patch known vulnerabilities, and assume that some unknown ones still remain. You deploy WAF to block some of the unknowns or at least make them harder to exploit. You harden the host and segregate the network to make it harder for the attacker to move laterally when they manage to exploit something anyway. You use SIEM or just regular log review to hopefully catch attackers that have been delayed by your defensive measures.

Putting your service on a non-default port is a perfectly valid measure as part of a larger defensive strategy. It has its pros and cons, you've got to be aware of what threats it mitigates and what it doesn't, but it can be useful.


I think the idea isn't to increase security, but to decrease log spam from attacks that are attempted but won't work.

For example, if you don't have WordPress, then all the attempts to access /wp-admin will never work, but will fill your logs with 404 errors.


It does improve security by improving the SNR in logs.


I find that a very good, low-cost, low-friction measure is to do IP-based mitigation only for sensitive ports (ssh, rdp, smb, etc). Ideally you implement an IP whitelist that you store in a safe and reliable place (cloud storage?), and your servers refresh that IP list every 5 minutes and modify the firewall if it changed. Easy to implement.

Only you can talk to sensitive ports. And the server is available to the rest of the world for non sensitive things. And if you connect from a new IP, within 5 min you have access to the server (I have a scheduled task that updates the IP list with my current IP so I usually don’t even wait).
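A sketch of that refresh job using ipset (the set name and list URL are placeholders; it assumes a firewall rule already matching the set, e.g. iptables -m set --match-set admin-ips src on the sensitive ports):

  #!/bin/sh
  # Rebuild a scratch set from the published list, then swap it
  # in atomically so there's no window with an empty whitelist.
  ipset create admin-ips-new hash:ip -exist
  ipset flush admin-ips-new
  curl -fsS https://storage.example.com/allowed-ips.txt \
    | while read -r IP; do ipset add admin-ips-new "$IP"; done
  ipset swap admin-ips-new admin-ips
  ipset destroy admin-ips-new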


> I'm pretty sure that even a 50% mitigation is still valuable

It’s about as useful as wearing a condom 50% of the time.


Which is to say, sometimes valuable, sometimes "oh crap!"?


That kind of sounds like you are implying cutting down 50% of attacks is the same as cutting down 50% of the risks. But not all attacks are equal. Only a very small portion of the risk comes from the 50% you blocked.

If you weren't trying to say that, my mistake.


It blocks unsophisticated resource waste mostly, as other commenters have said, just like changing an SSH port would.

> Either way, letsencrypt will probably have a somewhat fixed pool of these challenge validation IP addresses/ranges, so adding those to a white list should probably work with a strict firewall.

They won't publish the ranges as far as I've read.


Then use a different method of validation, like DNS?


DNS validation, once set up, is so much easier to work with; for all use-cases I can imagine, I will never go back to HTTP-based authentication.

Really, do give it a try.


Impossible (to do safely) with most providers I've seen; even CloudFlare doesn't offer a properly limited API key.


So delegate the validation with a CNAME and do it somewhere that’s not Cloudflare. They provide signing with their own CA for origin certificates, so I don’t really know why you’re using Let’s Encrypt at all.

Your idea of just randomly blocking access from certain IP address ranges doesn’t really provide you with any security at all. If you’re worried about Russian hackers or whatever, most people exploiting anything have access to botnets with whatever bespoke IP address ranges they need to bypass those sorts of rules.

In anti-fraud we see this commonly: people using stolen details will happily get better matches with GeoIP than the legitimate users of the credentials. Blocking specific countries' IP allocations is just providing a false sense of security on your part.


Even with CloudFlare in front of your servers, it is still valuable to use Let's Encrypt certificates. You can turn on Full Strict SSL validation to the backend and reduce another attack vector. It's an unlikely attack vector to be exploited but it's also a trivial amount of work to implement.

Every layer of security makes attacks that much more costly.


Like I said, Cloudflare signs certificates for your origin as well.


And if you always use Cloudflare (or at least are sure you'd have days to weeks, not minutes to hours, between deciding to stop using Cloudflare and actually executing), then it's actually safer to tell them you'll use their Origin certs rather than a public CA, as well as likely being easier.


> So delegate the validation with a CNAME and do it somewhere that’s not cloudflare.

Like? For what price?

> Blocking specific countries IP allocations is just providing a false sense security on your part.

No, it's a preventative measure. Just like changing SSH to a non-standard port reduces pointless attempts.


> Like? For what price?

If you own example.com, you can delegate to dnsauth.example.com for $0 (or simply the price of an Internet-facing machine that has DNS open).

Say you want a cert for www.example.com. LE will check for ownership by looking up _acme-challenge.www.example.com. Instead of having a TXT record with the nonce, _acme-challenge.www is actually a CNAME pointing to _acme-challenge.www.dnsauth, where the TXT nonce lives.

The DNS daemon that is authoritative for dnsauth can be the traditional BIND, or other software:

* https://github.com/joohoi/acme-dns

This is often called 'DNS alias' mode:

* https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mo...

* https://www.eff.org/deeplinks/2018/02/technical-deep-dive-se...


I did not ask "how", I wished to know who supports a DNS service like that and for what price.


> I did not ask "how", I wished to know who supports a DNS service like that and for what price.

And as I stated in the very first sentence, it is self-serve:

> If you own example.com, you can delegate to dnsauth.example.com for $0 (or simply the price of a Internet-facing machine that has DNS open).

We do this at work: our main registrar does not have a restricted API, so we have a sub-domain that lives on a DNS server in our DMZ. Internal ACME clients update the desired TXT records when asking LE for a cert.

The cost is the price for keeping a VM running and updated, which for us is minimal since it is on our private cloud.


Any DNS service which allows you to create a CNAME RR supports it. You delegate the subdomain to any DNS server you wish.

This isn't some special "Let's Encrypt DNS forwarding mode" that DNS providers have to explicitly support. It's simply part of "how DNS works".


> Any DNS service which allows you to create a CNAME RR supports it.

And which of those also have an API that is supported by Certbot?

I would really like names where a setup like this has been tested and works.


> And which of those also have an API that is supported by Certbot?

Certbot allows for hook scripts, and you can use a utility that can talk multiple APIs:

* https://github.com/AnalogJ/lexicon

> I would really like names where a setup like this has been tested and works.

The guy who runs BSDCan and PgCon uses it for his personal stuff as well as FreshPorts.org, etc:

* https://dan.langille.org/2017/05/31/creating-a-txt-only-nsup...

* https://dan.langille.org/2019/02/01/acme-domain-alias-mode/

He used acme.sh, though I'm more partial to dehydrated:

* https://github.com/dehydrated-io/dehydrated/wiki/example-dns...

We use it at work, but I don't want to dox myself. :)


The parent stated that you can run your own DNS server temporarily for the cost of the hardware to run the server and shut the DNS server off after the certificate has been issued. The cost is basically free.


If you have that kind of security profile, where you need to lock down which IP addresses can make a port 80 connection and can't use the API of the DNS provider because of security considerations, then how is giving a third-party DNS provider full control of DNS acceptable within the threat model?

Run your own DNS server and get a registrar lock. If that is not feasible, and consultants are too expensive, I would look into the more expensive DNS providers that provide a custom interface that fits the threat model of your system. If that is also too expensive then I would take a second look at the risk analysis and recalculate the cost of each risk.


Why can't you do this safely with AWS? You can restrict the API key to write only to the zone _acme-challenge.<your domain>.


Smart way to do it, but it will set you back $0.50/month. You can likely do it cheaper with Lambda and API Gateway, but you'll have to invent the secret sauce yourself.

I'm always a little surprised when there are shortcomings in IAM like that with Route 53 and records. It seems like a natural thing to be able to control, but for some reason you don't have resource-level controls on hosted zones. It's all or nothing.


This is now supported as of a few weeks ago! I just set it up for a new domain using a cancel-able API key.


I've just checked it, but with API tokens, I can only allow 'edit' rights on DNS records for a specific domain.

There is no way to create a token allowing access only to _acme-challenge record.


You can use the CNAME trick to canonicalize all ACME challenge requests into a subdomain you reserved for this purpose and then give the tokens access to that subdomain.

Let's Encrypt is obeying normal DNS mechanics, so when they ask for a TXT record for _acme-challenge.cat-photos.example.com and get a CNAME as a response, they'll ask for the TXT record for the name in the CNAME answer instead. If that's cat-photos.cert-issuer.example.com then a token valid only for the sub-domain cert-issuer.example.com can write that TXT record.

You sort out the CNAME once, probably when creating cat-photos.example.com or setting it up to get a certificate, and then afterwards the API token is enough for automation.


You can use pre and post renewal hooks in order to temporarily disable those firewall rules.
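With certbot that's just the built-in hook flags (the firewall commands here are placeholders for whatever you actually run):

  # Open port 80 for the duration of the renewal, then close it.
  certbot renew \
    --pre-hook  "iptables -I INPUT -p tcp --dport 80 -j ACCEPT" \
    --post-hook "iptables -D INPUT -p tcp --dport 80 -j ACCEPT"

The hooks only run when a cert is actually due for renewal, so the window stays small.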


You could just loosen the firewall for a few minutes before and after you requested DNS validation.


Yeah, looks like you have created a problem there for yourself. Back to other SSL providers for you, then. Unless you want to jump through some hoops.


I’m with you; I have China completely blocked from my servers. There is no reason for anyone in that country to access my APIs.


Recently, the U.S. government accused four Chinese nationals in the Equifax hack. The U.S. says they made use of something like 34 servers in 20 countries to carry out the attack.

Do you really think you're stopping anyone in .cn who wants to actually connect to your server from doing so?

Proxies and botnets are a thing.


I don't care - my country block is a two second thing that cuts off the vast majority of regular people hitting my server.


"Limiting attack surface" is also a thing. Claiming there's no benefit is empirically false.


Too bad I can't read them because the site doesn't load at all with 3rd party scripts disabled.


It took me three tries with scripts enabled; perhaps they're overloaded.

(Taking 5MB and 2s to send 500 words of text can do that, but since the primary audience of Let's Encrypt is web folks, there's some schadenfreude in my weltschmerz.)


Works fine for me, uMatrix with javascript disabled by default.

Edit: Ah, upon testing it breaks if you have 1st party JS allowed but not 3rd party. This is pretty reasonable in my opinion.


For me it seems to also require that you turn on "Spoof <noscript> tags" in uMatrix.



Thank you


It works for me with NoScript blocking the two JS requests it tried to make.


Well, it works if all scripts are disabled (at least for me).


Is that because assets are hosted in a CDN?



