
Discussion (42 Comments)
But on a personal/single-website server, ECH does not really add privacy: adversaries can still observe the IP metadata and compare it against what's hosted there. The real benefits are on huge cloud hosting platforms.
"Nginx 1.30 incorporates all of the changes from the Nginx 1.29.x mainline branch to provide a lot of new functionality like Multipath TCP (MPTCP)."
"Nginx 1.30 also adds HTTP/2 to backend and Encrypted Client Hello (ECH), sticky sessions support for upstreams, and the default proxy HTTP version being set to HTTP/1.1 with Keep-Alive enabled."
> But, in a personal/single website server, ech does not really add privacy, adversaries can still observe the IP metadata and compare what's hosted there
I don't quite follow. I have dozens of throwaway silly hobby domains. I can use any of them as the outer SNI. How is someone observing the traffic going to know the inner-SNI domain, unless someone builds a massive database of all known inner+outer combinations, which can be changed on a whim? ECH requires DoH, so unless the ISP has tricked the user into using their DoH endpoint, they can't see the HTTPS resource record.
[1] - https://news.ycombinator.com/item?id=47770007
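For the curious, the ECH keys mentioned above live in that DNS HTTPS resource record (type 65) as an `ech=` service parameter. Here's a rough C sketch of fetching the record with libresolv; note it uses the plain system resolver purely for illustration, whereas a real ECH client fetches this over DoH so the ISP can't read or strip it, and `example.com` is just a placeholder domain:

```c
/* Sketch: query the HTTPS (type 65) resource record, where a site
 * publishes the "ech=" SvcParam carrying its ECHConfigList.
 * Build with: cc https_rr.c -lresolv */
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/nameser.h>
#include <resolv.h>
#include <stdio.h>

int main(void) {
    unsigned char answer[4096];
    /* "example.com" is a placeholder; substitute an ECH-enabled domain.
     * NOTE: res_query() goes through the plain system resolver; a real
     * ECH client must do this lookup over DoH. */
    int len = res_query("example.com", C_IN, 65 /* HTTPS RR */,
                        answer, sizeof answer);
    if (len < 0) {
        perror("res_query");
        return 1;
    }
    printf("received %d bytes of DNS answer containing the HTTPS RR\n", len);
    return 0;
}
```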
So, e.g., they'd work for exactly the way TLS 1.0 was used in the Netscape 4 web browser that was popular when the middlebox was first marketed; or maybe they cope with exactly the features used by Safari, but since Safari never sets some particular bit flag, they reject all connections with that flag set.
What TLS learned is summarized as "have one joint and keep it well oiled", and the technique invented to provide that oiling for TLS's one working joint (the extension mechanism) is GREASE: Generate Random Extensions And Sustain Extensibility. The idea of GREASE is that if a popular client (say, the Chrome web browser) just insists on uttering random nonsense extensions, then to survive in a world where that happens you must not freak out when there are extensions you do not understand. If your middlebox firmware freaks out when it sees this, your customers say "This middlebox I bought last week is broken, I want my money back", so you have to spend a few cents more to never do that.
But, since random nonsense is now OK, we can ship a new feature and the middleboxes won't freak out, so long as our feature looks similar enough to GREASE.
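To make "looks similar enough to GREASE" concrete: RFC 8701 reserves extension code points of the form 0x?A?A with matching high nibbles (0x0A0A, 0x1A1A, ..., 0xFAFA) that clients sprinkle into their extension lists. A minimal C sketch of picking one; the function name is mine, not any library's API:

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

/* RFC 8701 GREASE code points have the form 0x?A?A with matching high
 * nibbles: 0x0A0A, 0x1A1A, ..., 0xFAFA. A client advertises one of these
 * as a fake extension so peers learn to tolerate unknown values. */
static uint16_t pick_grease_value(void) {
    /* illustration only; a real client would use its TLS stack's CSPRNG */
    uint16_t n = (uint16_t)(rand() % 16);
    return (uint16_t)(n * 0x1010u + 0x0A0Au);  /* n=0 -> 0x0A0A, n=15 -> 0xFAFA */
}

int main(void) {
    printf("GREASE extension code point: 0x%04X\n", pick_grease_value());
    return 0;
}
```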
ECH exploits the same idea: when a participating client connects to a server which, as far as it knows, does not support ECH, it acts exactly as it would for ECH, except that, since it has neither a "real" name to hide nor a key to encrypt that name with, it fills the space where those would go with random gibberish. As a server, you get this ECH extension you don't understand, filled with random gibberish you also don't understand; that seems fine, because you didn't understand any of it (or maybe you've switched it off, either way it's not relevant to you).
But for a middlebox this ensures they can't tell whether you're doing ECH. So either they reject every client which could do ECH, which again is how you get a bunch of angry customers, or they accept such clients and so ECH works.
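Sketched as client-side logic, with entirely hypothetical function names (`build_ech_extension` and the stubbed `encrypt_inner_hello` stand in for the real HPKE machinery; only `RAND_bytes` is a real OpenSSL call):

```c
#include <openssl/rand.h>
#include <stddef.h>

/* Hypothetical helper standing in for the real work: HPKE-seal the inner
 * ClientHello (which carries the true SNI) under the server's public key
 * taken from its HTTPS RR. Stubbed here so the sketch is self-contained. */
static int encrypt_inner_hello(const unsigned char *cfg, size_t cfg_len,
                               unsigned char *out, size_t out_len) {
    (void)cfg; (void)cfg_len; (void)out; (void)out_len;
    return 1;
}

/* The point: real and GREASE ECH produce the same kind of extension on
 * the wire, so an observer can't tell which one it is looking at. */
static int build_ech_extension(const unsigned char *ech_config, size_t cfg_len,
                               unsigned char *out, size_t out_len) {
    if (ech_config != NULL)
        return encrypt_inner_hello(ech_config, cfg_len, out, out_len);
    /* No config known for this server: fill the same bytes with
     * cryptographically random noise of a plausible length. */
    return RAND_bytes(out, (int)out_len);
}

int main(void) {
    unsigned char ext[144];
    /* NULL config -> GREASE path: the extension is pure random filler */
    return build_ech_extension(NULL, 0, ext, sizeof ext) == 1 ? 0 : 1;
}
```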
Russia blocked it for Cloudflare because the outer SNI was obviously just for ECH, but that won't stop anyone from using generic or throwaway domains as the outer SNI. As for "reasonable", I don't quite follow: only censorious countries or ISPs would do such a thing.
I can foresee firewall vendors possibly adding a category for known outer-SNI domains used for ECH, but at some point that list would become quite cumbersome and may run into the same problems as blocking CDN IP addresses.
They were wrong then, of course, and they're still wrong now.
Eventually these blocks won't be viable when big sites only support ECH. It's a stopgap solution that's delaying the inevitable death of SNI filtering.
Big sites care about money more than your privacy, and forcing ECH is bad business.
And sure, kill SNI filtering, most places that block ECH will be happy to require DPI instead, while you're busy shooting yourself in the foot. I don't want to see all of the data you transmit to every web provider over my networks, but if you remove SNI, I really don't have another option.
The software quality side of OpenSSL paradoxically probably regressed since Heartbleed: there's a rough consensus that the design of OpenSSL 3.0 was a major step backwards, not least for performance, and more than one large project (most notably pyca/cryptography) is actively considering moving away from OpenSSL entirely as a result. Again: while security concerns might be an ancillary issue in those potential migrations, the core issue is just that OpenSSL sucks to work with now.
:)
The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...
Here are some juicy quotes:
> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.
> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.
> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.
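To make that last point concrete, here is a minimal sketch (error handling omitted) that sets an RSA key size both ways; `OSSL_PKEY_PARAM_RSA_BITS` is the real OpenSSL macro for the "bits" key:

```c
#include <openssl/evp.h>
#include <openssl/rsa.h>
#include <openssl/params.h>
#include <openssl/core_names.h>

int main(void) {
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_from_name(NULL, "RSA", NULL);
    EVP_PKEY_keygen_init(ctx);

    /* Classic style: a typed setter the compiler can check. */
    EVP_PKEY_CTX_set_rsa_keygen_bits(ctx, 2048);

    /* OSSL_PARAM style: a stringly-keyed array of key/value pairs.
     * Typos in the key and type mismatches only surface at runtime. */
    unsigned int bits = 2048;
    OSSL_PARAM params[] = {
        OSSL_PARAM_construct_uint(OSSL_PKEY_PARAM_RSA_BITS, &bits),
        OSSL_PARAM_construct_end()
    };
    EVP_PKEY_CTX_set_params(ctx, params);

    EVP_PKEY_CTX_free(ctx);
    return 0;
}
```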
Though, while the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing. You can typically maintain compliance while using modules built from updated sources of a previously certified module that are in the pipeline for re-certification, so the ABI dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape. Today, as with Go, you can lean on process, i.e. constantly cycling the implementation through the certification pipeline, more than on technical solutions. The real unforced error was committing to OSSL_PARAMs for the public application APIs, letting backend design choices (flexibility, etc.) bleed through to the frontend. The temptation is understandable, but the ergonomics are horrible. I think the performance problems are less a consequence of OSSL_PARAMs per se than of the architecture of state management between the library and module contexts.
According to this, one should not be using v3 at all...
For those not familiar: until OpenSSL 3.4.1, if you wanted to use OpenSSL and wanted to implement HTTP/3, which uses QUIC as the underlying protocol, you had to use OpenSSL's entire QUIC stack; you couldn't bring your own QUIC implementation and use OpenSSL only for the encryption parts.
QUIC, for those not familiar, is basically "what if we re-implemented TCP's functionality on top of UDP, but could throw out all the old legacy crap". Complicated but interesting, except that if OpenSSL's implementation didn't do what you wanted, or didn't do it well, you either had to put up with it or go use some other SSL library entirely. That meant that if you were using, e.g., curl built against OpenSSL, then curl also inherently had to use OpenSSL's QUIC implementation even if better ones were available.
Daniel Stenberg from curl wrote a great blog post about how bad and dumb that was, if anyone is interested: https://daniel.haxx.se/blog/2026/01/17/more-http-3-focus-one...
On the one hand, looks like decent cleanup. (IIRC, engines in particular will not be missed).
On the other hand, breaking compatibility is always a tradeoff, and I still remember 3.x being... not universally loved.
From what I remember hearing, the move from 2 to 3 was hard.
But, thousand-yard stare, it was the version used for the FIPS patches to 1.0.2.