Discussion (55 Comments) · Read Original on HackerNews

iLemming•about 3 hours ago
First GitHub, now NPM? Oh no... It is happening, guys. Rise of the machines. I hope Jira is next and Slack follows.
corvad•about 4 hours ago
I wonder if this is an underlying infra issue with Azure, given that GitHub was also having issues.
nulltrace•about 4 hours ago
We added a preflight curl against registry.npmjs.org before the install step in CI. Not surprising they went down together.
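A preflight check like the one described might look like this in a CI config (the job layout and timeout are hypothetical; the only real endpoint is registry.npmjs.org):

```yaml
# Hypothetical CI job: fail fast if the npm registry is unreachable,
# instead of letting `npm ci` hang or die halfway through the install.
install:
  script:
    - curl --silent --fail --max-time 10 https://registry.npmjs.org/ > /dev/null || { echo "npm registry unreachable, aborting early"; exit 1; }
    - npm ci
```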
2ndorderthought•about 4 hours ago
I bet 10 dollars it's DNS.
thanatos_dem•about 4 hours ago
Nah, can't be, Azure DNS has a 100% SLA after all: https://learn.microsoft.com/en-us/azure/dns/dns-faq#what-is-...
shakna•about 3 hours ago
"Always" up, but maybe not going where you expect. [0]

[0] https://arstechnica.com/information-technology/2026/01/odd-a...

parliament32•about 3 hours ago
To be fair, it feels like the DNS service has been the most reliable part of our Azure infra. Never really had issues with it, whether with traffic or API calls.
yomismoaqui•about 4 hours ago

  It's not DNS
  There's no way it's DNS
  It was DNS
- SSBroski
corvad•about 4 hours ago
Just wait and it will be something like "Github's internal DNS was down and caused widespread service communication issues."
xaxfixho•about 4 hours ago
it might just be *AZURE*
Imustaskforhelp•about 4 hours ago
I am waiting for Jeff Geerling's "it's always DNS" t-shirt reference/video about it, if that's the case.
Scipio_Afri•about 4 hours ago
Easy there buddy, not everything needs to be a polymarket bet :-)
munk-a•about 4 hours ago
It's likely someone just ran npm ls --all
airstrike•about 4 hours ago
Raed667•about 4 hours ago
lots of amazon pages & search seem to be degraded as well
cozzyd•about 4 hours ago
That's one way to fix supply chain vulnerabilities.
tantalor•about 4 hours ago
Can't have any vulnerabilities if you don't have a supply chain
nine_k•about 4 hours ago
More seriously, keeping a local cache of external npm packages, and a local artifact storage for internal npm packages looks like a wise thing to have done long ago. Might be cheaper in the long run.

Ironically, both Nandu and Verdaccio are implemented in TypeScript and install via npm.

(Same logic obviously applies to Python packages, Docker images, etc.)
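A minimal Verdaccio setup along these lines might look like the sketch below (the storage path and the `@internal` scope are illustrative): public packages are fetched from npmjs once and then served from the local cache, while internal packages never leave your network.

```yaml
# config.yaml (Verdaccio) — cache public packages, host internal ones locally
storage: ./storage
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
packages:
  '@internal/*':
    access: $all
    publish: $authenticated
    # no proxy: the internal scope is served only from local storage
  '**':
    access: $all
    # everything else is fetched from npmjs once, then served from cache
    proxy: npmjs
```

Clients are pointed at it with `npm set registry http://localhost:4873/` (Verdaccio's default port).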

hmokiguess•about 3 hours ago
At my former job we had a private registry that was a mirror of npm's, with an approval gate for packages devs would request, and it always pinned versions.

I took that for granted back then and just assumed it was standard enterprise policy
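The always-pin-versions part can also be enforced client-side; a sketch assuming npm, with a hypothetical mirror URL:

```ini
# .npmrc — route installs through the private mirror and record exact versions
registry=https://npm.internal.example.com/
save-exact=true
```

With `save-exact=true`, `npm install` writes `"lodash": "4.17.21"` into package.json rather than `"^4.17.21"`.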

jamesfinlayson•44 minutes ago
Multiple previous jobs had this too (a local Packagist is one option, Artifactory is another), but my current job got rid of theirs. Seemed a little short-sighted given the risks, but I don't make the decisions.
spartanatreyu•about 1 hour ago
> a local artifact storage for internal npm packages looks like a wise thing to have done long ago

Deno already does this invisibly by default.

All packages are stored in the global cache.

No need to store multiple versions of the same dependencies across projects.

To the code in your projects, there is no such thing as a global cache. Just import your dependencies like normal and Deno maps them to the global cache.
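That mapping lives in an import map, typically in deno.json (the package and version pin below are just illustrative); resolved modules land once in the global cache (DENO_DIR) instead of in per-project node_modules:

```json
{
  "imports": {
    "@std/assert": "jsr:@std/assert@^1.0.0"
  }
}
```

Project code then simply writes `import { assertEquals } from "@std/assert";` and Deno resolves it through the cache.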

miohtama•about 4 hours ago
If only we had a turnkey distributed cache, like IPFS
ibejoeb•about 3 hours ago
Does IPFS support content eviction now? If not, that could go wrong really fast. You get a compromised package out there and then, I think, literally every node needs to unpin it or it remains.
cluckindan•about 4 hours ago
Waiting for the BitTorrent package manager
XorNot•about 4 hours ago
Caching NPM was easier when you could pull from the CouchDB replication API. AFAIK that's gone, and now you just have to send a bazillion HTTP requests instead.
nine_k•about 2 hours ago
Sending a bazillion HTTP requests within your LAN, or at least your VPC, is much easier, faster, and cheaper.

Both yarn and pnpm support HTTP/2, which speeds up the bazillion requests quite a bit.
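Both package managers pick the target registry up from .npmrc, so redirecting those requests to an in-LAN mirror is a one-line change (the hostname below is hypothetical):

```ini
# .npmrc — send all package fetches to an in-VPC mirror
registry=http://npm-cache.internal:4873/
# pnpm-only: how many of those requests run in parallel (default is 16)
network-concurrency=32
```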

hexasquid•about 4 hours ago
Hold the jokes until we're sure this isn't an `.unwrap()`
lrvick•about 2 hours ago
Whenever NPM is offline, the internet is a little safer.

Keep up the good work Microsoft.

Let's shoot for 100% downtime though. Thanks.

normie3000•about 4 hours ago
Well, it is owned by GitHub.
cute_boi•about 4 hours ago
which is owned by microslop
rvz•about 4 hours ago
...and proudly maintained by Microsoft's AI agents: Tay.ai, Zo, and Copilot.

They seem to be doing a pretty good job at wrecking both GitHub and npm at the same time.

adxl•about 1 hour ago
Clippy was too stupid to qualify as an AI.
squarefoot•about 4 hours ago
corvad•about 4 hours ago
Fixed as of 22:30 UTC. Hope there's a postmortem.
saadn92•about 4 hours ago
ha, github is down too
dabinat•about 4 hours ago
idoxer•about 4 hours ago
Works for me, could be region related
simjnd•about 4 hours ago
xmprt•about 4 hours ago
With all the github instability, I wonder if Cloudflare or some other provider is going to look into providing a similar service.
dllrr•about 3 hours ago
xmprt•about 2 hours ago
I mean more like a full git competitor. Gitlab exists but more competition is generally better for the consumer and it looks like Github's lead is starting to falter with all these incidents.
sofixa•about 4 hours ago
GitLab is right there. And overall it provides a better product than GitHub, if nothing else on these two points:

* You can actually have an organisational structure (folders/namespaces), and projects can be moved around with automatic redirects. Also, inheritance of access controls and variables between the namespaces

* GitLab CI is organised in a way that makes supply chain attacks less of a risk. GitHub Actions takes the NPM/JS approach, where every step is an action, one you usually need to get off someone, with shoddy versioning, tons of transitive dependencies, etc. In GitLab CI you can have templates, but you don't have to use an external template for every bit. It's shell scripting on top of containers, so you can have custom container images with your stuff, or custom scripts, or templates that bundle it all.
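That "shell scripting on top of containers" style reads roughly like this sketch (the image name and script contents are made up for illustration):

```yaml
# .gitlab-ci.yml — no external actions: a pinned image you control, plain shell
build:
  image: registry.example.com/ci/node-build:20.11
  script:
    - npm ci
    - npm test
    - ./scripts/package.sh   # your own script, versioned with the repo
```

Everything the job runs is either in the image or in the repo, so there is no third-party action to compromise.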

justinclift•about 4 hours ago
GitLab also limits the size of PRs/MRs, which makes it Unfit for Purpose. :( :( :(

It's a problem they know about, but have no plan to fix before 2027.

irishcoffee•about 3 hours ago
I mean, the PR limit is like a million characters. I would also reject a PR of a million characters. That’s bananas.
fontain•about 4 hours ago
All of those features are supported by GitHub in some form, e.g. Organizations can now belong to Enterprises.
dijksterhuis•about 3 hours ago
tree based directory structure stuff is available on gitlab’s free tier — so are all the permissions inheritance for groups etc.

so, while you’re technically right, these features are apparently paywalled heavily on github.

ime you get more features on gitlab for the same price (or less). i switched fully two years ago and im not going back.

dmitrygr•about 3 hours ago
libc is still working just fine, as is the Linux kernel. Mayhaps having 2000 dependencies on 3000 packages from 4000 unvetted sources was a mistake after all?
naikrovek•about 4 hours ago
Oh no. At least nothing of value is affected.

:)

cute_boi•about 4 hours ago
microslop slops are down.
12345hn6789•about 4 hours ago
Azure is completely dead across multiple resources. Confirming....
DaiPlusPlus•about 4 hours ago
https://azure.status.microsoft/en-US/status says "There are currently no active events." - and everything's fine with my day-job's Azure sub right now.