Discussion (215 Comments)
RRSIG with malformed signature found for a0d5d1p51kijsevll74k523htmq406bk.de/nsec3 (keytag=33834). dig +cd amazon.de @8.8.8.8 works, and dig amazon.de @a.nic.de works. The zone data is intact; DENIC just published an RRSIG over an NSEC3 record that doesn't validate against ZSK 33834. Every validating resolver therefore refuses to answer.
Intermittency fits anycast: some [a-n].nic.de instances still serve the previous (good) signatures, so retries occasionally land on a healthy auth. Per DENIC's FAQ the .de ZSK rotates every 5 weeks via pre-publish, so this smells like a botched rollover.
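For reference, the key tag in that error (33834) is not cryptographic at all; per RFC 4034 Appendix B it is a simple checksum over the DNSKEY RDATA. A minimal sketch (the example RDATA is made up, not the real .de key):

```python
def key_tag(rdata: bytes) -> int:
    # RFC 4034 Appendix B: 16-bit checksum over the DNSKEY RDATA
    # (flags | protocol | algorithm | public key), for algorithms other than 1.
    ac = 0
    for i, b in enumerate(rdata):
        ac += (b << 8) if i % 2 == 0 else b
    ac += (ac >> 16) & 0xFFFF
    return ac & 0xFFFF

# Hypothetical RDATA: flags=257 (KSK), protocol=3, algorithm=8, dummy 4-byte key
example_rdata = bytes([0x01, 0x01, 0x03, 0x08, 0x00, 0x00, 0x00, 0x00])
print(key_tag(example_rdata))  # 1033
```

Resolvers use the tag only to pick which DNSKEY to try a signature against, which is also why two different keys can collide on the same tag.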
Still, at this level, brittle infrastructure is a political risk. The internet's famous "routing around damage" isn't quite working here. Should make for an interesting post mortem.
Building redundant infrastructure that can withstand BGP and DNS configuration mistakes is not that simple, but it can be done.
Think about what would happen the day that Let's Encrypt is broken for whatever reason, technical or political, like having a hostile US leader while being located in the wrong country. Take into account Let's Encrypt's push, together with the major web browsers, to restrict certificate validity to short periods of only a few days...
If Let's Encrypt goes down, half of the Internet will become inaccessible in a week.
And fuck nothing at all happened as a result.
I had the misfortune of having to dig deep into constructing ASN.1 payloads by hand [1] because that's the only thing Java speaks, and oh holy hell is this A MESS because OF COURSE there's two ways to encode a bunch of bytes (BIT STRING vs OCTET STRING) and encoding ed25519 keys uses BOTH [2].
And ed25519 is a mess in itself. The more-or-less standard implementation by orlp [3] is almost completely lacking comments explaining what is going on where, and reading the relevant RFCs alone doesn't help; it's probably only understandable by reading a 500-page math paper.
It's almost as if cryptographers have zero interest in letting interested random people join the field.
End of rant.
[1] https://github.com/msmuenchen/meshcore-packets-java/blob/mai...
[2] https://datatracker.ietf.org/doc/html/rfc8410#appendix-A
[3] https://github.com/orlp/ed25519/tree/master
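The BIT STRING vs OCTET STRING split from [2] can be shown by hand-assembling the two DER structures. A sketch with the header bytes hard-coded per RFC 8410 (possible only because Ed25519 keys have fixed length; this is not a general ASN.1 encoder):

```python
ED25519_OID = bytes.fromhex("06032b6570")  # OID 1.3.101.112 (id-Ed25519)

def spki_public(pub: bytes) -> bytes:
    # SubjectPublicKeyInfo: the public key lives in a BIT STRING
    # (the extra 0x00 says there are no unused bits in the last byte).
    assert len(pub) == 32
    return bytes.fromhex("302a3005") + ED25519_OID + bytes.fromhex("032100") + pub

def pkcs8_private(priv: bytes) -> bytes:
    # OneAsymmetricKey (PKCS#8): the private key lives in an OCTET STRING
    # that is itself wrapped in another OCTET STRING (RFC 8410, section 7).
    assert len(priv) == 32
    return (bytes.fromhex("302e020100" "3005") + ED25519_OID
            + bytes.fromhex("0422" "0420") + priv)

print(len(spki_public(bytes(32))))   # 44 bytes
print(len(pkcs8_private(bytes(32)))) # 48 bytes
```

Same 32 raw key bytes, two different wrappers, which is exactly the trap when you build the payloads by hand.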
https://sockpuppet.org/blog/2015/01/15/against-dnssec/
The ".de" TLD is inherently managed by a single organization, and things wouldn't be much better if its nameservers went down. Some of the records would be cached by downstream resolvers, but not all of them, and not for very long.
> we took the decentralized platform DNS was and added a single-point-of-failure certificate layer on top of it
DNSSEC actually makes DNS more decentralized: without DNSSEC, the only way to guarantee a trustworthy response is to directly ask the authoritative nameservers. But with DNSSEC, you can query third-party caching resolvers and still be able to trust the response because only a legitimate answer will have a valid signature.
Similarly, without DNSSEC, a domain owner needs to absolutely trust its authoritative nameservers, since they can trivially forge trusted results. But with DNSSEC, you don't need to trust your authoritative nameservers nearly as much [0], meaning that you can safely host some of them with third-parties.
[0]: https://news.ycombinator.com/item?id=47409728
But how would one verify the signature if the DNSKEY expired and you cannot fetch a fresh one because the organisation providing those keys is down? As far as I understand, the TTL for those keys is different, and for DENIC it seems to be 1h [0]. So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
[0] dig RRSIG de. @8.8.8.8
de. 3600 IN RRSIG DNSKEY 8 1 3600 20260519214514 20260505201514 26755 de. [...]
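The two timestamps in that RRSIG (20260519214514 and 20260505201514) are the signature's expiration and inception, encoded as YYYYMMDDHHMMSS in UTC per RFC 4034 section 3.2. A quick sketch to read off the validity window:

```python
from datetime import datetime, timezone

def rrsig_time(ts: str) -> datetime:
    # RFC 4034 section 3.2: presentation format is YYYYMMDDHHmmSS, always UTC
    return datetime.strptime(ts, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)

expiration = rrsig_time("20260519214514")
inception = rrsig_time("20260505201514")
print(expiration - inception)  # 14 days, 1:30:00
```

Note this window is independent of the 3600s TTL: the TTL controls caching, while the inception/expiration pair controls how long the signature itself verifies.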
In theory, this shouldn't happen, because if you use the same TTLs for your DNSSEC records and your "regular" records, then if the regular records are present in the cache, the DNSSEC records will be too.
> So if they are down for more than an hour and all RRSIG caches expire, DNS zones which have a higher TTL than 1h but use DNSSEC would also be down?
Yes, but I'd argue that the DNSSEC records should have the same TTLs for exactly this reason. That's how my domain is set up at least:
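For illustration, a hypothetical zone fragment (names, key tag, and dates invented) where the RRSIG carries the same TTL as the RRset it covers:

```
; A record and its covering RRSIG share the 86400s TTL
example.de.  86400  IN  A      192.0.2.1
example.de.  86400  IN  RRSIG  A 13 2 86400 20260601000000 20260501000000 12345 example.de. ( <signature> )
```

With aligned TTLs, both records age out of a resolver's cache together, so a cached answer never outlives its cached proof.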
A list of root nameserver IP addresses is included with every local recursive DNS resolver. The list changes, albeit slowly, over the years. With DNSSEC, this list also includes public keys of those root servers, which also rotate, slowly.
So whatever specific issue the operator has, it does not change the impact.
The world might be a little bit better with more decentralization of the root zone.
https://archive.nytimes.com/www.nytimes.com/library/cyber/we...
Fun fact: CloudFlare has used the same KSK for the zones it serves for more than a decade now.
Edit: Alternative link: https://www.cyberciti.biz/media/new/cms/2017/04/dns.jpg
Or: https://dns.kitchen/jingle
It's been like that for over two years now.
EDIT: it says "Service Disruption" now
Edit: Now even the humor is gone.
EDIT: called it...
Good news though, if you add domain-insecure: "de" to your unbound config everything works fine
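For anyone reaching for that workaround: it goes in the server: clause of unbound.conf, and note that it disables DNSSEC validation for the entire TLD, trading authenticity for availability:

```
server:
    # Treat everything under .de as unsigned; remove again once the RRSIGs are fixed
    domain-insecure: "de"
```

A reload (e.g. via unbound-control reload) or restart is needed for the change to take effect.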
"Cloudflare Radar data shows 8.11% of domains are signed with DNSSEC, but only 0.47% of queries are validated end-to-end." [1]
Zones I may care about:
- Amazon.com: unsigned
- My banks: unsigned
- Hacker News: unsigned
- Email that I do not host: unsigned
- My power company's billing: unsigned
- I found some! id.me and irs.gov are signed.
[1] - https://technologychecker.io/blog/dnssec-adoption
DNSSEC not working
If using an open resolver, i.e., a shared DNS cache, e.g., a third-party DNS service such as Google, Cloudflare, etc., then it might fail, or it might not. It depends on the third-party DNS provider.
https://datatracker.ietf.org/meeting/118/materials/slides-11...
I'm really not close to DENIC and know nothing about their internals, but I was just close enough to experience the frustration of someone working for DENIC second hand, in person, during the disaster. From the very limited information I happened to gather, DENIC had some trouble addressing the issue because, surprise, infrastructure they needed to do so runs on .de domains. [1]
I'm convinced there are all kinds of extended cyclic dependencies between different centralization points in the net.
If some important backbone of the internet is down for an extended time, this will absolutely cause cascading failures. And these central points of failure are only getting worse. I love Let's Encrypt, but if something causes them to hard-fail, things will go really bad once certificates start to expire.
We need concrete plans to cold start extended parts of the internet. If things go really bad once and communication lines start to fail, we're in for a bad time.
Maybe governments have redundant, ultra resistant, low tech communication lines, war rooms and a list of important people in the industry who they can find and put in these war rooms so they can coordinate the rebuild of infrastructure. But I doubt it.
[1] I don't know if there is some kind of disaster plan in the drawer at DENIC that would address this. I don't mean to allege anything against DENIC specifically, but speaking broadly about companies and infrastructure providers, I would not be surprised if there were absolutely no plan for what to do if things really go down, how to cold-start cyclic dependencies, or even where those dependencies are.
DENIC's status page currently says "Frankfurt am Main, 5 May 2026 - DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability. The root cause of the disruption has not yet been fully identified. DENIC's technical teams are working intensively on analysis and on restoring stable operations as quickly as possible."
The only problem with DNSSEC here is that it's complex.
[Edit] After playing around with it, Google seems to have at least some pages cached. After setting DNS to 8.8.8.8, amazon.de and spiegel.de work again; my blog does not.
yes indeed
Looks like it failed after a maintenance: https://www.namecheap.com/status-updates/planned-denic-de-re...
https://status.denic.de/
-> no idea if that also "heals" anyone who had dnssec on before.
-> no idea if maybe they need to roll back something and then rebreak the new dnssec i made a minute later lol...
https://dnssec-analyzer.verisignlabs.com/nic.de
I am very happy that it doesn't happen more often.
EDIT My explanation was wrong, this is not how keytags work. The published keytag data is consistent:
The signature on the SOA record still does not verify.
As a fallback they should use their X account: https://x.com/denic_de
May 5, 2026 23:28 CEST
May 5, 2026 21:28 UTC
INVESTIGATING
Frankfurt am Main, 5 May 2026 - DENIC eG is currently experiencing a disruption in its DNS service for .de domains. As a result, all DNSSEC-signed .de domains are currently affected in their reachability. The root cause of the disruption has not yet been fully identified. DENIC's technical teams are working intensively on analysis and on restoring stable operations as quickly as possible. Based on current information, users and operators of .de domains may experience impairments in domain resolution. Further updates will be provided as soon as reliable findings on the cause and recovery are available. DENIC asks all affected parties for their understanding. For further enquiries, DENIC can be contacted via the usual channels.
There's no way it's DNS
It was DNSSEC
We observed issues on a non-DNSSEC .de domain at 19:45Z and confirmed around 20:12Z that it wasn't just us; more high-profile domain names were affected as well.
There are too many coincidences happening.
Fundamentally, security is a solution to an availability problem: The desire of the users is for a system to remain available despite external attack.
Systems that become unavailable to everyone fail this requirement.
A door with its keyhole welded shut is not "secure", it's broken.
If I'm unable to use Amazon for 24 hours, it doesn't really matter. If a photocopy of my passport is leaked, that's worry and potential trouble for years.
or alternatively,
Security = (exclude unauth'd reads) + (exclude unauth'd writes) + (include auth'd reads and auth'd writes)
Gotta satisfy all parts in order to have security.
E.g.: What's the benefit to you if your data is so confidential that you can't read it either? This is a real problem with some health information systems, where I can't access my own health records! Ditto with many government bureaucracies that keep my records safe and secure from me.
Bad UX and bugs are, in general, not always an availability problem.
If it is hard to get what you want due to bad design but the site is up, the site is still up.
$ nslookup www.bmw.de
;; Got SERVFAIL reply from 8.8.8.8, trying next server
Server: 8.8.4.4
Address: 8.8.4.4#53
** server can't find www.bmw.de: SERVFAIL

Non-authoritative answer:
Name: bmw.de
Address: 160.46.226.165
https://edition.cnn.com/2026/05/01/politics/us-troop-withdra...