Discussion (155 Comments)
> GitHub Will Prioritize Migrating to Azure Over Feature Development - GitHub is working on migrating all of its infrastructure to Azure, even though this means it'll have to delay some feature development.
> In a message to GitHub’s staff, CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center. “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
So the currently delayed feature development is now going to be delayed further, yet almost every week we see new features and changes; just the other day the single-issue view was changed, to name one example. And it was "existential" six months ago, yet they keep stumbling over the exact same issue today?
Even if they're focused exclusively on reliability and uptime, we get the experience that we have today. It's kind of incredible how a company with the resources of Microsoft is seemingly unable to stop continuously shooting itself in the foot. It's kind of impressive, actually. As icing on the cake, they've decided to buy up all the popular developer services and then migrate them all to the same platform. Great idea, too.
I still think the rest of my point stands, especially the last one, which is the move that has the biggest impact on most of us developers.
They did that as a panic-mode hack to mitigate performance problems: https://news.ycombinator.com/item?id=47912521
But that doesn't matter because the kind of person that buys Azure, just like the kind of person that buys MS Teams, is entirely driven by price and does not care about anything else.
The unlabelled graph with big numbers on top, the priorities that don't match with what we're experiencing, and a list of things that they're doing without a real acknowledgement of the _dire_ uptime over the last 12 months....
You don't need to know the bottom-left axis number. We do have to assume the graph is linear, and not some kind of inverted log scale. But given the rest of the content, I think that is safe to assume.
Any company that experiences significantly more growth than they were planning for will have capacity issues.
The priorities are mostly in line with that. They are way beyond the point where they can just add more hardware. They need to make the backend more efficient, and all the stated goals are about helping there.
We very much do. The graph suggests an insane growth in PRs from almost zero to 90M. Now compare this misleading graph with this much clearer one, which shows that the growth over the last three years has been less than 80%: https://github.blog/wp-content/uploads/2025/10/octoverse-202...
No, they're completely useless. Using the "New repos per month" as an example, if the bottom left is 1m, then that's a 20x increase in 2 years which is a lot. If the bottom left is 19m, it's a 5% increase in 2 years which is nothing.
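To make that concrete, here's a quick sketch of the arithmetic. The 20M top-of-chart figure is the hypothetical one from the comment above, not a number from GitHub's graph:

```python
# How much the implied growth swings depending on what you assume the
# unlabelled baseline is. The 20M top-of-chart value is hypothetical.
top_of_chart = 20_000_000

for baseline in (1_000_000, 19_000_000):
    print(f"baseline {baseline:>10,} -> {top_of_chart / baseline:.2f}x over 2 years")

# baseline  1,000,000 -> 20.00x over 2 years (a lot)
# baseline 19,000,000 -> 1.05x over 2 years (~5%, nothing)
```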
The massive surge on their labelled X axis starts in 2026, and these issues have been going on for a lot longer than that. GHA has been borderline unusable for a year at this point, if not longer.
> But given the rest of the content, I think that is safe to assume.
The rest of the content is "we're working on it", and "here's two outages in the last 14 days, one of which caused actual data loss"
What's the question here: that you don't believe growth is currently exponential, or that you think it shouldn't be hard to scale when even 10x YoY isn't enough?
I’m sure they’re experiencing scaling issues across the platform, but it’s unacceptable for that to have a negative impact on us when we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.
But if anything, their post and your reply are precisely an endorsement of usage based billing.
The bit that's growing 13x YoY (and which they expect will easily blow past that) is unmetered - commits. The bit that is metered (for some, not all folks) - action minutes, grew only 2x YoY.
GitHub was not built to limit the number of commits, checkouts, forks, issues, PRs, etc. - nor do we want it to - but that's what's growing ridiculously as people unleash hordes of busy beaver agents on GitHub, because they're either free or unlimited.
Where there are limits - or usage based billing - people add guardrails and find optimizations.
Because for all the talk, agents don't bring a 10x value increase; otherwise, they'd justify a 10x cost increase.
Besides, other forges are having issues too. Even running your own. We have Anubis everywhere protecting them for a reason.
You know, you can just host your own code forge. Or you can just drop gitolite on a server. Or pull directly from each others' dev machines on a LAN.
GitHub is not git.
so start a GitHub competitor which bills $50/dev/yr for solving this easy problem and make a lot of money?
> What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale
I think you're putting words in my mouth here; I didn't say either of those things. I'm saying that this blog post is a meaningless platitude when the github stability issues predate this, and that all this post says is "we hear you're having issues".
I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).
Either those charts are a bald-faced lie (the tweet could be as well), or there's no way to read them as anything other than exponential growth.
The only way to fake exponential growth like that would be to use an inverse log scale (which would be a bald-faced lie).
It doesn't even really matter what's the y-axis baseline, unless we really think growth was huge in 2020, then cratered to zero by 2023, now back to the previous normal.
As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.
You can already see people complaining loudly in the cases where, instead of "we'll do better", they decided to limit usage.
Is this microsoft stating that they aren't able to get acceptable reliability from Azure? (I mean, I think a lot of us have heard that, but it's interesting to hear it from microsoft themselves).
I guess most people at GitHub knew it made no sense, but they didn't really have a choice. Maybe some voiced their concerns, got "we hear you" in response, and were told to proceed anyway.
Then it's up to Azure how they will manage this
While Azure feels like a Temu clone of a cloud
Prime video does use some AWS services, but live and on-demand are two entirely different beasts.
There's no intrinsic reason they should be vulnerable to themselves.
But GitHub doesn't have that rationale.
Since yesterday, several colleagues and I have noticed that the pull request lists on the website are incomplete, across many repositories. For example, on https://github.com/gap-system/gap/pulls the tab list says "Pull requests 78", but the PR list view reports "35 open" (the number 78 is correct, and confirmed by e.g. `gh pr list`)
And that despite <https://www.githubstatus.com> reporting "all systems operational".
Their support acknowledged the issue, but has been silent since then, and the status page still shows nothing other than the potentially-related issue on the 27th. It looks like it has been resolved on some repositories in the meantime, but I still have the issue across multiple orgs and repositories.
https://github.com/orgs/community/discussions/193388
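For anyone who wants to cross-check the discrepancy themselves, here's a minimal sketch against GitHub's public search API. The repository is the one from the comment above; unauthenticated requests are fine at this volume:

```python
# Count open PRs via the public search API and compare with the tab
# counter shown in the web UI. Unauthenticated, so rate-limited.
import json
import urllib.request

repo = "gap-system/gap"  # repository from the comment above
url = f"https://api.github.com/search/issues?q=repo:{repo}+is:pr+is:open"
req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})

with urllib.request.urlopen(req) as resp:
    total = json.load(resp)["total_count"]

print(f"{repo}: {total} open PRs according to the API")
```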
Surely a scaling hack where they use "estimation" queries that return "kind of right" results instead of 100% correct data, as it's less load on the infrastructure. Not necessarily a bug so much as a shit choice from a product perspective.
Sorry, but I don't think there is any way this can be classified as "not actually a bug"
Stop subsidizing tokens now that we've extracted enough training data from you and have enough agentic-junkie business to keep the flywheel going, and cut the loss leaders. [0]
[0] https://news.ycombinator.com/item?id=47923357
Looking at the commit graph: why do commits have big steps followed by slow rolloffs? Why do the steps not happen at uniform points? Why do larger steps sometimes have less of a slope than smaller steps, but not all the time?
Then, looking at the other graphs, there are completely different effects going on.
GitHub is claiming they require 30x scale due to the giant increase in repository creation, PRs, commits, etc.
I have not seen a single product increase in features or quality as an end user, nor have significant new products come out in this period (other than the LLMs themselves).
Where is all this code going?
What I'm not seeing here, but am seeing with the Linux kernel, is that most of the automatically submitted code is irrelevant or not useful
(Maybe that’s what you were getting at, apologies)
Half of my friends are vibe-coding something, but they can barely get the rest of the group chat to use it once.
In companies, I see people vibe-coding "miracle apps" that fall apart under the smallest amount of scrutiny.
Basically people are doing the same developers do when they say "I can do this in a weekend", which is getting a prototype sort of running and then immediately losing energy (or in this case lacking ability) to push it forward.
Some people I know can't even explain what they are trying to create.
Global indices for this should be trivial to spin up so availability is never a concern (we're working towards this!).
If I could get the same bells and whistles by wiring up another forge, so long as it offered a decent API and/or sent events over a webhook, I'd have everything self-hosted.
The agents would need to expose an interface on their own end, but as long as you implemented it with a plugin, it'd take away the dependency on GitHub and you could use MCP or skills for the rest of it.
Which is to say, this is perfect for agents given they don't need any bespoke SDK from us: simply write Tangled records for issues, pulls, whatever to your PDS and it'll show up on Tangled. We plan to start working on some first-party exemplar agents that would 1. enhance Tangled itself, and 2. showcase cool things you can do with an open data firehose.
Disclaimer: the author is a colleague of mine
Though to be fair, what the parent meant by federated forges is different from this approach.
https://stackoverflow.com/questions/849308/how-can-i-pull-pu...
I'd say we have emails, mailing lists and bug trackers. Or maybe: what is the missing killer feature that needs federation?
I recently migrated to codeberg because I'm okay with self-hosting big runners, while using codeberg's available runners for smaller cron-based things (they even have lazy runners for this).
The internet should not be centralised, but you can't make a billion dollar company without capturing the world and selling your company to a trillion dollar company
Point is: This discussion is much more multi-dimensional than some suggest.
I think I found the issue.
In seriousness, looking at their scale, this is an insane engineering challenge.
Especially if they're moving databases; that's never easy, and certainly not at that scale.
amazing on one hand, quite scary on the other for GitHub and all other forges if this continues, and there is no reason why it wouldn't.
and that azure cannot scale fast enough to handle the load so they're embracing multi-cloud as a company... owned by microsoft?
woah. what am I reading.
The unlabeled graphs don't help the credibility case. When you are already in the hole on trust, shipping a post that requires readers to assume favorable baselines is exactly the wrong move.
Leopard, meet face.
Too little too late, yesterday was the straw that broke the camel’s back for us and we’ve started a migration to a self-hosted GitLab.
I was thinking of maybe doing a proper write up about how to host your own Forgejo + Action runners on Linux, Windows and macOS, not sure if there is enough interest. What would people for sure want to know in a guide/explanation of this?
The only repos I left on GitHub are forks and one with a bit of public engagement.
* we had to resolve a variety of bottlenecks that appeared faster than expected, from moving webhooks to a different backend (out of MySQL)
* to redesigning the user session cache and redoing authentication and authorization flows to substantially reduce database load
* we accelerated migrating performance- and scale-sensitive parts of the code out of the Ruby monolith into Go
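As a rough illustration of what "redesigning the user session cache ... to substantially reduce database load" usually means in practice, here's a minimal cache-aside sketch; everything in it is illustrative, not GitHub's actual code:

```python
# Minimal cache-aside pattern for session lookups: serve hot sessions
# from a cache and only fall through to the database on a miss.
# Everything here is illustrative, not GitHub's actual implementation.
import time

class SessionStore:
    def __init__(self, db, cache, ttl_seconds=300):
        self.db = db        # anything with fetch_session(token) -> dict | None
        self.cache = cache  # plain dict here; a Redis client in a real deployment
        self.ttl = ttl_seconds

    def get_session(self, token):
        entry = self.cache.get(token)
        if entry is not None and entry["expires_at"] > time.monotonic():
            return entry["session"]              # hit: no database query at all
        session = self.db.fetch_session(token)   # miss: a single DB read
        if session is not None:
            self.cache[token] = {
                "session": session,
                "expires_at": time.monotonic() + self.ttl,
            }
        return session
```

The point is that a hot session costs a cache lookup instead of a database round-trip, which at GitHub's request volume is exactly the kind of load reduction the post describes.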
I'd like to know what database backend they migrated to. I was also surprised to read that the migration from Ruby to a more performant language had not already been completed. I assume this is because it's a large code base with many moving parts, etc.
GitHub instability has started way before that. I understand it’s too much to ask of a trillion-dollar corporation to consider the impact of their own actions, but perhaps they should’ve thought of that before forcing LLM development down everyone’s throats.
They started the trend with Copilot.
> If they weren't letting folks use it directly
There is a chasm of difference between “letting you use it” and “forcing it down your throat”. Microsoft is doing the latter, not the former. Copilot is annoyingly present by default at every step on GitHub.
I understand the rapid growth (because of AI agents), but if such a critical software service becomes unstable, then it's time to migrate? Thinking about self-hosting GitLab.
Right way to think about this:
> If things we need/see as critical for our work are hosted on a platform with really bad reliability, it's time for us to migrate
My internet connection at home is really shit, and almost every week there is a multi-hour downtime for some reason, not to mention when La Liga games are on TV anything using Cloudflare is unavailable, so I've had to spend extra energy and time to setup things in a way so I can still work whenever this happens.
Status page is also still doing that thing where every component is green but in practice clone is hanging, push is timing out, actions are stuck. Per-service uptime is a managed number. The user-experience number is the one that matters and it's not in the post-mortem.
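If you want a user-experience number of your own, here's a sketch of a probe that times a real git operation instead of trusting the status page; the probed repository is just a placeholder:

```python
# Time a real git operation (ls-remote exercises the same path as
# clone/fetch) instead of trusting per-component status. Requires git
# on PATH; the probed repository is just a placeholder.
import subprocess
import time

def probe_git(url, timeout_s=30):
    try:
        subprocess.run(
            ["git", "ls-remote", "--heads", url],
            check=True, capture_output=True, timeout=timeout_s,
        )
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return False

if __name__ == "__main__":
    ok = probe_git("https://github.com/git/git.git")
    print(time.strftime("%H:%M:%S"), "git ls-remote", "ok" if ok else "FAILED")
```

Run it from cron every few minutes and you get an availability number for the operations you actually depend on, not the components GitHub chooses to report.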
I feel like this would have negative impacts (lots of interesting historical archives live on GitHub), but maybe if a project hasn't been touched, or cloned, in some time, it just gets deleted with some notice.
Wild
> availability first, then capacity, then new features.
I'd love to experience first-hand a leadership team who says, "stop accepting new paying customers until we've got availability sorted out!"
> New sign-ups for GitHub Copilot Pro, Pro+, and Student plans are paused. Pausing sign-ups allows us to serve existing customers more effectively.
I’m sure survivor bias is at play here, but when I look through the older code bases - especially the data model - it’s an entirely different world than the newer stuff, and it’s clear which of the two was written by people who understand systems.
on another note: is the exponential growth from 'agentic' workflows actually resulting in productive software in the wild, or is it just noise? On my end I haven't seen the software I use getting better.
If you multiply all current numbers together (as of Apr 28), you find out that GitHub has a 97.26% uptime.
One ... single ... 9.
They can do better.
> you find out that GitHub has a 97.26% uptime
Converting that to downtime, you get ~40 minutes per day, which works out to roughly ten days per year. Crazy stuff for something as essential as this.
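The conversion, for anyone checking the math:

```python
# Convert a composite uptime fraction into concrete downtime figures.
uptime = 0.9726
downtime = 1 - uptime

print(f"per day:  {downtime * 24 * 60:.0f} minutes")                      # ~39 min
print(f"per year: {downtime * 365 * 24:.0f} hours (~{downtime * 365:.0f} days)")
```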
That's a delayed April fool's right?
The user (and not a big tech monopoly) answer to scaling issues is almost always to stop scaling and start federating and interoperating.
I am surprised that Microsoft is allowed to use Go. How long will it be before a bean counter forces a rewrite to a Microsoft favored language?
are there big conceptual serialization points that I've missed? is it just not well factored? was the move to Azure just a catastrophically bad idea? some other thing?
Even as recently as 18 months ago, Lovable appeared seemingly overnight and caused huge problems for GitHub, because they were creating a repository on GitHub for every single Lovable project, hundreds of thousands of repositories, offloading the very high cost onto GitHub. A couple of years before that, Homebrew used GitHub as a de facto CDN, and that was a huge problem too.
Nowadays it is easy to imagine how to scale out a service like Twitter or YouTube or Facebook because it has all been done before, but that's not true of Git: Git hasn't ever scaled like this before, and there are very few examples of a service with GitHub's characteristics.
https://lovable.dev/blog/incident-github-outage
https://news.ycombinator.com/item?id=42659111
> To summarize, for every v1 diff line there would be:
> - Minimum of 10-15 DOM tree elements
> - Minimum of 8-13 React Components
> - Minimum of 20 React Event Handlers
> - Lots of small re-usable React Components
https://github.blog/engineering/architecture-optimization/th...
Got a good chuckle out of this post. It's crazy that neither Atlassian (Bitbucket) nor GitLab is capturing value from this same agentic coding boom. I wish GitHub were separately publicly traded outside of Microsoft.
Nowhere to get exposure to this