F-Droid site certificate expired
When you have only two or three big SSL providers, it's way easier to shut someone off by denying them a certificate and watch their site vanish in mere weeks.
- We went from the vast majority of traffic being unencrypted, allowing any passive attacker (from nation state to script kiddie sitting in the coffee shop) to snoop and any active attacker to trivially tamper with it, to all but a vanishing minority of connections being strongly encrypted. The scare tactics used to sell VPNs in YouTube ads used to all be true, and no longer are, due to this.
- We went from TLS certificates being unaffordable to hobbyists to TLS certificates being not only free, but trivial to automatically obtain.
- We went from a CA ecosystem where only commercial alternatives exist to one where the main CA is a nonprofit run by a foundation consisting mostly of strong proponents of Internet freedom.
- Even if you count ZeroSSL and Let's Encrypt as US-controlled, there is at least one free non-US alternative using the same protocol, i.e. suitable as a drop-in replacement (https://www.actalis.com/subscription).
- Plenty of other paid but affordable alternatives exist from countless countries, and the ecosystem seems to be getting better, not worse.
- While many other paths have been used to attempt to censor web sites, I haven't seen the certificate system used for this frequently (I'm sure there are individual court orders somewhere).
- If the US wanted to put its full weight behind getting a site off the Internet, it would have other levers that would be equally or more effective.
- Most Internet freedom advocates recognize that the migration to HTTPS was a really, really good thing.
- We went from the vast majority of traffic being unencrypted, allowing any passive attacker (from nation state to script kiddie sitting in the coffee shop) to snoop and any active attacker to trivially tamper with it, to all but a vanishing minority of connections being strongly encrypted.
I still don't understand why this is so terrible.
Public wifi networks were certainly a real problem, but that's not where the majority of internet usage happens, and they could have been fixed on a different layer.
If you're on a traditional home internet connection, who exactly can tamper with your traffic? Your ISP can, and that's not great, but it doesn't strike me as blaring siren levels of terrible, either. Even with HTTPS, the companies behind my OS and web browser can still see everything I do, so in exchange for all this work we've removed maybe 1 out of 3 parties from the equation. And, personally, I trust the OS and browser vendors less than I trust my ISP!
Some progress is better than none, and it's still nice that my ISP can't tamper with my connection any more. Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing, and as I've said previously, I trust these parties comparatively less than my ISP.
- We went from TLS certificates being unaffordable to hobbyists to TLS certificates being not only free, but trivial to automatically obtain.
Sure, but it's also trivial to just throw up a website on Github Pages, or forgo the website completely and use Instagram. TLS is "trivial" if you rely on the infrastructure of a specific external party.
Please help me understand what I'm missing because I find this really frustrating!
Some progress is better than none, and it's still nice that my ISP can't snoop on me any more. Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing, and as I've said previously, I trust these parties comparatively less than my ISP.
It might be more correct to say that Certificate Pinning made it so you can't inspect your own traffic. For sites with TLS but without certificate pinning, you can just as easily create your own root certificate and force the browser and OS to trust it by installing it at the OS level. This is (part of, at least) how tools like Fiddler and Charles Proxy allow you to inspect HTTPS traffic; the other part is a MITM proxy that replaces the server's actual cert with one the proxy generates[0].
[0] https://www.charlesproxy.com/documentation/proxying/ssl-prox...
Edit: To be clear, I'm not even suggesting the software would be doing this maliciously! Apps do all sorts of weird things when you try to proxy them, I know this because I do run most of my traffic through a proxy (for non-privacy reasons). Just for example, QUIC gets disabled.
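To make the custom-root-CA trick above concrete, here's a rough sketch of the two certificate operations a proxy like Charles performs, using Python's `cryptography` package. The CA name, lifetimes, and output filename are purely illustrative, not what any particular tool actually uses:

```python
# Sketch: generate an "inspection" root CA, then forge a leaf cert for the
# hostname a client asked for. Installing the root into the OS trust store is
# what makes browsers accept the forged leaf. Illustrative only.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def make_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def make_root_ca():
    key = make_key()
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My Inspection Root CA")])
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                    # self-signed root
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )
    return key, cert


def forge_leaf(ca_key, ca_cert, hostname):
    key = make_key()
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)]))
        .issuer_name(ca_cert.subject)         # signed by our CA, not the real one
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=90))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(hostname)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )
    return key, cert


if __name__ == "__main__":
    ca_key, ca_cert = make_root_ca()
    # Installing this PEM into the OS trust store is the manual step described
    # above; after that, forged leaves are accepted automatically.
    with open("inspection-root.pem", "wb") as f:
        f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
    forge_leaf(ca_key, ca_cert, "example.com")
```

The proxy then terminates TLS with the forged leaf on the client side and opens its own, normally validated connection to the real server; that relaying piece is omitted here.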
If you're on a traditional home internet connection, who exactly can tamper with your traffic? Your ISP can, and that's not great, but it doesn't strike me as blaring siren levels of terrible, either.
This characterization is on the same level of sophistication as "the Internet is just a series of pipes". Every transit station has the opportunity to read or even tamper with the bytes on an unencrypted HTTP connection. That's not just your ISP; it also includes the ISP's backbone provider, the backbone peering provider, your country's Internet Exchange, the Internet Exchange in the country of the website, the website's peering partner, and the website's hosting partner.
Some of those parties may be the same, and some parties I have not mentioned for brevity. To take just one example: there is only one direct link between Europe and South America. Most traffic between those continents goes via Amsterdam (NL) and New Jersey (US) to Barranquilla (CO), or via Sines (PT) to Fortaleza (BR). Or if the packets are feeling adventurous today, they might go through Italy, Singapore, California and Chile, with optional transit layovers in Saudi Arabia, Pakistan, Thailand or China.
Main point being: as a user, you have no control over the routing of your Internet traffic. Traffic also doesn't follow geographic rules; it follows peering cost. You can't even be sure that traffic between you and a website in your country stays inside that country.
In practice this means you have to consider the possibility that anyone on the entire internet can inspect your traffic. Traffic from your home in Seattle to Google's west coast data center? For all you know it could be going via Moscow.
Your ISP can
And already has! ISPs used to inject ads into unencrypted connections: https://www.infoworld.com/article/2241797/code-injection-new...
I still don't understand why this is so terrible.
While I don't really have a scary threat model, I don't love the idea that my ISP could have been watching my traffic. Maybe there's a world where my government has ordered ISPs to log specifics about traffic in order to trap dissidents doing things they don't like. Sure, I live in the US, which isn't (yet!) an authoritarian nightmare. But maybe I live in Texas, and I'm searching for information about getting an abortion (illegal to have one there in most cases). Maybe I'm a schoolteacher in Florida, and I'm searching for information on critical race theory (a topic banned from instruction in Florida schools). I want that traffic to be private.
Even with HTTPS, the companies behind my OS and web browser can still see everything I do, so in exchange for all this work we've removed maybe 1 out of 3 parties from the equation
I mean, that's on you for using a proprietary OS owned by a for-profit corporation. I get that desktop Linux or a de-Googled Android phone isn't for everyone, but those are options you have, if you're really worried.
And there are quite a few major browsers that are open source, so even if you can't inspect their traffic at runtime, if you're truly serious about this, you can audit their source code and do your own builds. Yes, I would consider that unnecessarily paranoid, but the option is there for you, and you can even run these browsers on proprietary OSes. And honestly, I assume you use Chrome anyway; if so, then you're clearly not serious about this, since you're using a web browser made by an advertising company. (If you're using something else: awesome, and apologies for the bad assumption.)
Unfortunately, TLS also took away my ability to inspect my own traffic! This makes it more difficult for me to monitor what my OS and browser vendor are doing
You can still do this, but it does require more work: setting up your own CA, installing it as trusted on your own devices, and then MITM'ing your traffic at the router in order to present a cert from your CA before forwarding the connection on to the real site.
Yes, this is out of reach for the average home internet user, but if you are the kind of person who is thinking about doing traffic monitoring on your home network, then you have the skills to do this. Meanwhile, the other 99% of us get better privacy online; I think that's a perfectly fine trade off.
and as I've said previously, I trust [my OS and browser vendor] comparatively less than my ISP.
My ISP is Comcast; even if my OS and browser vendor was Microsoft or Apple, I think I'd probably still trust Comcast less. Fortunately my OS and browser vendors are not Microsoft or Apple, so I don't have to worry about that, but still.
Sure, but it's also trivial to just throw up a website on Github Pages, or forgo the website completely and use Instagram. TLS is "trivial" if you rely on the infrastructure of a specific external party.
Running a website, even from your home internet connection, still means relying on the infrastructure of a third party. There's no way to get away from that.
And you still can run one without TLS. Browsers will still display unencrypted pages, though I'll admit that I'd be unsurprised if some future versions of major browsers stopped allowing that, or made it look scary to your average user.
Please help me understand what I'm missing because I find this really frustrating!
I think what you are missing is that people actually do value connection encryption, for real reasons, not paranoid, tin-foil-hat reasons. And while you do present some valid downsides, we believe those downsides are overblown, or at the very least worth it in the trade off. It's fine for you to not agree with that trade off, which is a shame, but... that's life.
Do it and tell me you trust websites which have a green lock next to the URL.
However, in a security context "takes some effort" is far better than "takes no effort".
If CAA records (with DNSSEC) were used to reject certificates from the wrong issuer, we might even be able to get to "though very imperfect, takes a considerable amount of effort".
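For what it's worth, CAA records are already easy to inspect today, even though nothing below verifies DNSSEC (which is the missing piece the comment above points at). A small sketch assuming the dnspython package is installed; the domain is just an example:

```python
# Print a domain's CAA policy: which CAs it says are allowed to issue for it.
# This is a plain resolver query, so by itself it proves nothing without
# DNSSEC validation somewhere in the chain.
import dns.resolver


def caa_policy(domain):
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []  # no CAA records published: any public CA may issue
    return [(rr.flags, rr.tag.decode(), rr.value.decode()) for rr in answers]


if __name__ == "__main__":
    for flags, tag, value in caa_policy("f-droid.org"):
        print(f"flags={flags} {tag} {value}")
```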
DANE is supposed to be the solution to this problem but it's absolutely awful to use and will lead to even more fragile infrastructure than we currently have with TLS certs (and also ultimately depends on DNSSEC). HPKP was the non-DNS solution but it was removed because it suffered from an even worse form of fragility that could lock out domains for years.
- We now provide completely free certs for malicious websites
- Degraded encryption value so much it's not even indicated anymore (remember the green bar for EV?)
- Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
- SNI exists and even without it anything not on CDN is blocked very easily
Paying money doesn't actually make people trustworthy.
This is fundamentally a naive understanding of both security and certificates. Paying money absolutely makes people trustworthy because it's prohibitive to do it at scale. You might have one paid malicious certificate but you can have thousands of free ones. The one malicious domain gets banned, the thousands are whack-a-mole forever.
Further, certificates used to indicate identity in more than a "the domain you are connected to" sense. There was a big PR campaign to wreck EV certs, but EV certs generally were extremely secure. And even Google, who complained most loudly about EV, has reintroduced the Verified Mark Certificate (VMC) to replace it and use it for new things like BIMI.
I don't much care about BIMI. People keep trying to resuscitate that particular dead dog (email security), maybe one day they will succeed but I don't expect to be involved.
Browser vendors removed the extra UI around EV certs not because certs in general are easier to get, but because the identity "guarantee" afforded to EV certs was fairly easy to spoof. EV certs still exist, and you can foolishly pay for one if you want. Free ACME-provided certs have nothing to do with this.
Cybercriminals work at scale. The opinion you shared here is why Google, Microsoft, and Amazon are so easy to use for cybercrime. It's incredibly easy to hide bad behavior in cheap, disposable attempts on free accounts.
Cost virtually eliminates abuse. Bad actors are fronting effort and ideally small amounts of money to effectively bet on a high return. Make the cost per attempt high and it isn't worth it. Apart from some high-profile blogs demonstrating the risk, EV certs have to my knowledge never been used maliciously, and hiding them from the browser bar just buries useful, high-quality data about the trustworthiness of a site behind hidden menus.
That could also be fixed by tweaking EV requirements. (More than likely by putting a country flag on the EV banner.)
Wrong. Company names are not guaranteed to be unique per-country.
The main issue you are missing is that putting undeserved trust in things like DV / EV flags greatly increases the value of such attacks. If users are trained to blindly trust that shiny green bar, the odd attacker will be able to walk away with an absolute fortune. Nobody will be suspicious about that page, because Green Bar. Why is the "bank" asking odd questions? Who cares, it had a Green Bar, so it must be legitimate.
Why bother with hundreds of small attacks when one big one will make you rich?
Like, you need to realize most phishing scammers are perfectly happy to make a bankofamericaaa.glitch.me page, because it's free, and without any good indicators like EV for the legitimate bank to use, it really doesn't look that much different to the nontechnical customer than bankofamerica.com.
Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
Do we have any statistics for how many people are actually doing this? Such warnings are so rare in my experience that, by default, I don't trust a site with no SSL or with an expired or invalid cert, and I won't click through if I see that warning.
We now provide completely free certs for malicious websites
Malicious websites never had a problem buying certs before. Sure, the bar is lower now, but I don't think it was a particularly meaningful bar before. Besides, the most common ways to get malicious websites shut down are to get their webhost to cut them off, or get a court order to seize their domain name. Getting their TLS cert revoked isn't common, and doesn't really do the job anyway.
Degraded encryption value so much it's not even indicated anymore (remember the green bar for EV?)
No, we've degraded the identity verification afforded by EV and those former browser features. Remember that the promise of SSL/TLS was two things: 1) your traffic is private, 2) it verifies that the server you thought you were contacting is actually the one you reached.
I think (2) was always going to be difficult: either you make it hard and expensive to acquire TLS certificates, and (2) has value, or you don't, and it doesn't. I think pervasive encryption is way more important than site owner identity validation. And I don't think the value of an EV cert was even all that high back when browsers called them out in their UI. There are lots of examples of people trivially managing to get an EV cert from somewhere, with their locally-registered "Stripe, LLC" or whatever in the "validated" company name field of their cert.
Pavlov-trained everyone to dumb-click through 'this page is not secure' warnings
Not sure what that has to do with this. That was more of a problem back when we didn't have Let's Encrypt, so lots of people were using self-signed certs, or let their certs expire and didn't fix it, or whatever. These days I expect certificate warnings are fairly rare, and so users might actually start paying attention to them again.
SNI exists and even without it anything not on CDN is blocked very easily
ESNI also exists, and while not being available everywhere, it'll get there. But this is a bizarre complaint, as it's entirely trivial to block traffic when there's no TLS at all.
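To make the SNI point concrete: the hostname goes out in cleartext inside the ClientHello, before any encryption is established, so any on-path box can read it. A minimal sketch using only the Python standard library, assuming a single unfragmented ClientHello record (the normal case); f-droid.org is just an example hostname:

```python
# Generate the ClientHello a real TLS client would send, then pull the SNI
# hostname back out of the raw bytes, the same way a censor or middlebox can.
import ssl


def extract_sni(record):
    # TLS record header: type(1) version(2) length(2); 0x16 = handshake
    if len(record) < 5 or record[0] != 0x16:
        return None
    pos = 5
    if record[pos] != 0x01:                   # handshake type 0x01 = ClientHello
        return None
    pos += 4                                  # handshake type + 3-byte length
    pos += 2 + 32                             # client_version + random
    pos += 1 + record[pos]                    # session_id
    pos += 2 + int.from_bytes(record[pos:pos + 2], "big")   # cipher_suites
    pos += 1 + record[pos]                    # compression_methods
    ext_total = int.from_bytes(record[pos:pos + 2], "big")
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type = int.from_bytes(record[pos:pos + 2], "big")
        ext_len = int.from_bytes(record[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:                     # server_name extension
            name_len = int.from_bytes(record[pos + 3:pos + 5], "big")
            return record[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None


if __name__ == "__main__":
    ctx = ssl.create_default_context()
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    client = ctx.wrap_bio(incoming, outgoing, server_hostname="f-droid.org")
    try:
        client.do_handshake()                 # no server yet, so this can't finish
    except ssl.SSLWantReadError:
        pass
    hello = outgoing.read()                   # the bytes that would hit the wire
    print(extract_sni(hello))                 # -> "f-droid.org", in the clear
```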
Meanwhile, in the real world:
More than one thing can be true at the same time.
Yes, vastly increasing the traffic that is encrypted is a great thing for many reasons.
Simultaneously, it is also true that if a certificate issued by a public CA (as opposed to one self-signed by an unknown like me) is effectively required, that sure provides a handy hammer to shut something down. History tells us that whenever such handy hammers get built, they inevitably get abused.
However, short expirations severely limit the damage an attacker can do if they steal your private key.
And they avoid the situations where an organization simply forgets to renew a cert, because automating something so infrequent is genuinely difficult from an organizational standpoint. Employees leave, calendar reminders go missing, and yeah.
Short-lived certificates fix these issues from an end-user standpoint.
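And when automation does slip, even a dumb external check catches the "forgot to renew" case before users see the warning page. A sketch using only the Python standard library; the hostname and the 14-day threshold are just examples:

```python
# Warn when a site's certificate is getting close to expiry.
import socket
import ssl
import time


def days_until_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400


if __name__ == "__main__":
    remaining = days_until_expiry("f-droid.org")
    print(f"certificate expires in {remaining:.1f} days")
    if remaining < 14:
        raise SystemExit("renew now")
```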
https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
CRLite updates every 12 hours.
It'll rapidly shrink over time as certs expire, but you still have to deal with that initial massive set.
But seems like there is feasible solution: https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...
You don't need short expirations for that. CRLs/OCSP already provided a mechanism for certificates to be revoked before they expire.
But that's an explicit action that's much simpler to ask questions about.
Let's Encrypt has emphasized that it doesn't have the resources to investigate content disputes (currently, it's issuing nearly 10 million certificates per day, with no human intervention for any of them) and that having to adjudicate who's entitled to have a certificate by non-automated criteria would throw the model of free-of-charge certificates into doubt.
Meanwhile, encrypting web traffic makes it harder for governments to know who is reading or saying what. (Not always impossible, just harder.) Without it, we could have phenomena like keyword searches over Internet traffic in order to instantly determine who's searching for or otherwise reading or writing specific terms!
I'm very aware that it's still easy to observe who visits a particular site (based on SNI, as someone else mentioned in this thread). But there's a chicken-and-egg problem for protecting that information, and encrypting the actual site traffic is at least the chicken, while the egg may be coming with ECH.
Overall, transit encryption is very good for free expression online, and people who want to undermine or limit online speech are much more likely to be trying to undermine encryption than to promote it.
The biggest thing that Let's Encrypt in particular does to mitigate the risk of being unable to serve particular subscribers is to ensure that ACME is an open protocol that can be implemented by different CAs, and that it's very easy for subscribers to switch CAs at any time for any reason. The certificate system is more centralized than many people involved with it would prefer, but at least it's avoiding vendor lock-in.
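One concrete way to see how low the switching cost is: every ACME CA publishes a directory document at a public URL, and a client is essentially just pointed at one of them. A small sketch using the standard library; the directory URLs below are the publicly documented ones for these CAs, but double-check them before depending on this:

```python
# Fetch the ACME directory from several CAs to show they expose the same
# protocol endpoints, which is what makes switching a one-line config change.
import json
import urllib.request

DIRECTORIES = {
    "Let's Encrypt": "https://acme-v02.api.letsencrypt.org/directory",
    "Buypass": "https://api.buypass.com/acme/directory",
    "ZeroSSL": "https://acme.zerossl.com/v2/DV90",
}

for ca, url in DIRECTORIES.items():
    with urllib.request.urlopen(url, timeout=10) as resp:
        directory = json.load(resp)
    # Every conforming CA offers the same operations, just under different URLs.
    print(f"{ca}: newAccount={directory['newAccount']} newOrder={directory['newOrder']}")
```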
I don't see any disadvantages over automatically issued certificates.
(2) It's much slower than the TLS WebPKI.
(3) There's no transparency log and never will be, both because it hasn't been designed and because Google and Mozilla basically had to mug the WebPKI CAs in a dark alley to make CT happen.
(4) It requires you to set up DNSSEC, which is so error prone that some of the largest engineering teams in the world have managed to take their sites (and also countries) offline.
The thing about DNSSEC is that once you turn it on, it's a king-hell pain in the ass to turn it off without incurring an outage. DNS providers know this, and they know ops teams are terrified of DNSSEC, so they push it as a kind of account lock-in tool. There's a reason that the overwhelming majority of large sites don't use it.
https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...
With certificates, we're doing multi-perspective validation.
DNS root of trust is silly. DNSSEC is not a proper root of trust.
If your domain registrar or DNS provider is compromised in any way, all of the bullcrud the CA/B demands of certificates is entirely meaningless; the bad actor can legitimately request certificates.
But think about what DANE is for a second. If a bad actor is MITMing your connection to some endpoint, they certainly can MITM your DNS queries too.
DANE isn't going to be of any value when an attacker is sitting between the end user and their ISP - which was already the requirement for compromising the TLS connection in the first place - as they could just strip DNSSEC and fake the DANE records.
It's weak security and introduces more problems than it solves. If we're going to get rid of CAs, we should consider a better solution, not a worse one.
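For anyone unfamiliar with the mechanics being argued about: a DANE-EE check boils down to comparing a hash published in DNS (a TLSA record) against the key the server actually presents. A rough sketch, assuming dnspython and the cryptography package are installed and the record uses usage=3, selector=1, mtype=1; note that the plain resolver query here does nothing to validate DNSSEC, which is exactly the stripping problem described above:

```python
# Compare the TLSA record for _443._tcp.<host> against the SHA-256 of the
# SubjectPublicKeyInfo the server actually serves (DANE-EE, "3 1 1").
import hashlib
import socket
import ssl

import dns.resolver
from cryptography import x509
from cryptography.hazmat.primitives import serialization


def served_spki_sha256(host, port=443):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE           # we only want the raw certificate
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    spki = x509.load_der_x509_certificate(der).public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).digest()


def dane_matches(host):
    answers = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")
    digest = served_spki_sha256(host)
    return any(
        rr.usage == 3 and rr.selector == 1 and rr.mtype == 1 and rr.cert == digest
        for rr in answers
    )
```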
What's weird is that the major registrars never even tried to enter the PKI business. It would have made sense. It would even have hastened the adoption of much needed TLS extensions.
- A CA validates requests, signs CSRs, publishes cert revocation, issues certificates and trust anchors.
- A registrar in DANE merely passes a DS record you created to the TLD, along with the promise that this record was created by the domain zone owner. It's basically the validation step. Nothing to do with establishing or securing data, key/record management, etc; they're a glorified FTP tool.
I'm in favor of registrars getting more involved (since they are the authority on who controls a domain), but only with a completely different design. I have suggested many times that CAs establish an API to communicate directly with registrars to perform the validation step, as this would eliminate 95% of attacks on Web PKI without introducing any downsides. So far my pleas have fallen on deaf ears. And since the oligopoly of browser vendors continues its attacks on system reliability (via ridiculous expiration times) without any real pushback, I don't see it changing.
The model you suggest is a variant of what I allude to in my second paragraph above. That is indeed both an obvious, simple, and much more secure model than the web PKI we use today. I tried to push for similar ideas several years before things like DANE but no one seems interested enough. I have no idea why this is, as the model is both trivial and obvious.
Never mind. Found it.
https://fazlerabbi37.github.io/blogs/fdroidcl.html
Couldn't find it in DDG, though.
They acknowledged the rotation failed, but it is still failing[1]. Perhaps something to do with how certs are rotated on their CDN?
[1]https://www.ssllabs.com/ssltest/analyze.html?d=f%2ddroid.org...
API changes
This should be an oxymoron. As a profession, we've forgotten the point of an API, and it's downright shameful when something this important breaks needlessly. Would it have been that hard to keep supporting whatever API calls already existed as, e.g., "v1" and put the new stuff in "v2"?
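For illustration, "v1 keeps working, v2 lives beside it" is not a lot of code. A toy sketch with Flask and made-up endpoints (nothing to do with F-Droid's actual API):

```python
# Old clients keep hitting /v1 and get the frozen response shape; new clients
# opt into /v2. Breaking changes land in a new prefix instead of in place.
from flask import Flask, jsonify

app = Flask(__name__)


@app.get("/api/v1/packages")
def packages_v1():
    # legacy shape, kept frozen for existing clients
    return jsonify([{"name": "org.fdroid.fdroid", "ver": "1.19"}])


@app.get("/api/v2/packages")
def packages_v2():
    # new shape, free to evolve without breaking v1
    return jsonify({"packages": [{"name": "org.fdroid.fdroid", "version": "1.19"}]})
```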