Discussion (280 comments) — original thread on Hacker News
https://x.com/paulg/status/2045120274551423142
Makes it a little less dramatic. But also shows what a big **'n deal the railroads were!
The megaprojects of previous generations all had decades-long depreciation schedules. Many 50-100+ year old railways, bridges, tunnels, dams, and other utilities are still in active use with only minimal maintenance.
Amortized year over year, the current spend would dwarf everything, given the reported depreciation schedule of 6(!) years for the GPUs - the largest line item.
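To make the amortization point concrete, here is a minimal sketch of straight-line depreciation under the two schedules. The capex figures are made up for illustration; only the 6-year vs. 50-year lifetimes come from the discussion.

```python
# Sketch (illustrative numbers): annualized cost of the same nominal
# capex under straight-line depreciation with different asset lifetimes.
def annualized_cost(capex: float, lifetime_years: int) -> float:
    """Straight-line depreciation: capex spread evenly over the asset's life."""
    return capex / lifetime_years

gpu_capex = 300e9   # hypothetical GPU spend, dollars
rail_capex = 300e9  # same nominal spend on long-lived infrastructure

gpu_annual = annualized_cost(gpu_capex, 6)     # 6-year GPU schedule
rail_annual = annualized_cost(rail_capex, 50)  # 50-year rail/bridge schedule

print(f"GPU annualized:  ${gpu_annual / 1e9:.0f}B/yr")
print(f"Rail annualized: ${rail_annual / 1e9:.0f}B/yr")
print(f"Ratio: {gpu_annual / rail_annual:.1f}x")
```

The same dollar figure costs roughly 8x more per year on a 6-year schedule than on a 50-year one, which is why the annualized comparison looks so lopsided.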
RS-25 - designed as the HG-3 in the 1960s for Saturn V, manufactured for the Space Shuttle, refurbished for SLS, and just launched last month.
Vehicle Assembly Building - built for Saturn V launches, it has been in active use ever since and continues today.
Crawler-transporters - Hans and Franz were built in 1966 for Apollo and are still used for launches.
There are plenty of other examples from Apollo program of actual hardware being repurposed and used for later missions.
In other mega space projects, Hubble is still doing active research 35 years after launch, and Voyager is sending data close to 50 years later.
It is a whole other topic whether they should still be used, how NASA is funded, and why programs like SLS or the Shuttle are so expensive, and so forth.
The point is these mega projects had a long lifetime of value, albeit with higher maintenance costs for the tech heavy ones like Apollo than say a bridge or a dam does.
We're a little too early to know if that's the case here too. I do foresee a chance at a reality where AI is a dead end, but after it we have a ton of cheap GPU compute lying about, which we all rush to somehow convert into useful compute (by emulating CPU's or translating traditional algorithms into GPU oriented ones or whatever).
The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments, which are built upon the technologies the previous generation developed.
If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassible. A bridge with a 6 year lifespan for each beam is insane.
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
Imagine this world: the bubble "pops" in a couple years. The GPUs stick around for a few more years after that. At the end, we pretty much don't train new foundation models anymore - no one wants to spend the money on the hardware needed to make a real advance.
People continue to refine, distill, and optimize the existing foundation models for the next century or two, just like people keep laying new track over old railway right of ways.
That said, I'm pretty sure in a compute-hungry AI world you aren't going to retire GPUs every 6 years anymore. Even if compute capacity jumps such that current H100s only represent 10% of total compute available in 6 years, you're still running those H100s until they turn to dust.
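The "H100s at 10% of total compute in 6 years" scenario implies a specific growth rate, which can be backed out with a one-liner. The ~47%/yr figure below is an assumption chosen to match the 10%-in-6-years claim, not a number from the thread.

```python
# Sketch (hypothetical growth rate): the share of today's GPU fleet in
# total compute after n years, if total capacity grows by `annual_growth`
# per year and today's fleet stays online.
def fleet_share(years: int, annual_growth: float) -> float:
    return 1.0 / (annual_growth ** years)

# ~47%/yr compute growth leaves today's GPUs at roughly 10% of the
# fleet after 6 years - still a large absolute amount of compute.
share = fleet_share(6, 1.468)
print(f"{share:.1%}")
```

The point survives the arithmetic: even after being dwarfed by newer hardware, the old fleet's absolute capacity is unchanged, so there is little reason to retire it while it still runs.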
I just think it's hard to compare localized railroad infrastructure to globalized AI capacity and say one was more rational than the other on a % of GDP basis until the history actually plays out.
If you compare global investment in nuclear weapons it would dwarf the manhattan project and AI thus far, and yet, 99.99999% of nuclear weapons investment is just "wasted" capacity in that it has never been "used." But the value it has created in other ways (MAD-enabled peace) has surely been profitable on net. Nobody would have predicted this at the time.
Playing armchair internet pessimist about the "new thing" always makes you feel smart but is usually not a good idea since you always mis-price what you don't know about the future (which is almost everything).
What other uses do GPU's have that are critical...? lol
In addition to your points, this is why I always laugh when people do backward comparisons. What characteristics do they share in common? Very little.
Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn't need to be maintainable, understandable, innovative, or secure. But they excel at persisting until some arbitrary, well-defined condition is met, and it appears that "you gain entry to system X" works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
GPUs are essential to every kind of scientific and engineering simulation you can think of. AI-accelerated simulations are a huge deal now.
https://news.ycombinator.com/item?id=44805979
The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.
We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of F-35? (That's another line in Paul's post.)
What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, it actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.
If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.
That's amazing. I had no idea the US was still capable of things like that.
I wonder if there's a way to get close to that for things that aren't new and don't have a lot of active orders. Like have all the equipment set up but idle at some facility, keep assembly teams ready and trained, then cycle through each weapon and activate a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.
Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.
Until we run out of materials
https://mwi.westpoint.edu/minerals-magnets-and-military-capa...
As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?
With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.
Thinking about it, an income of £800K is something like the interest on £10m.
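The income-to-principal conversion above is a single division. A minimal sketch, assuming an ~8% yield (the rate implied by the comment's £800K/£10m figures, not a number stated in the thread):

```python
# Sketch: back out the principal implied by an annual income at a given
# yield. The 8% rate is an assumption matching the comment's figures.
def implied_principal(annual_income: float, yield_rate: float) -> float:
    return annual_income / yield_rate

print(f"£{implied_principal(800_000, 0.08):,.0f}")
```

At a lower, more modern ~4% yield the same income would instead imply roughly £20m of principal, which is why the conversion is so sensitive to the assumed rate.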
It also makes it more dramatic, consider the programs on the list and what they have in common.
* The Apollo program. A government-funded science project. No return on investment required.
* The Manhattan Project. A government-funded military project. No return on investment required.
* The F-35 program. A government-funded military project. No return on investment required.
* The ISS. A government-funded science project. No return on investment required.
* The Interstate Highway System. A government-funded infrastructure project. No return on investment required.
* The Marshall Plan. A government-funded foreign policy project. No return on investment required.
The actual return on investment for these projects is in the very long term of decades; Economic development, national security, scientific progress that benefits the entire country if not the entire world.
Consider the Marshall Plan in particular. It was a massive money sink, but its nature as a government project meant it could run at a loss without significant economic risk and could aim for extremely long-term benefits. It had been paying dividends until January last year; 77 years.
And that dividend wasn't always obvious. Goodwill from Europe towards the US is what has prevented Europe from taking actions like China's against the US's Big Tech companies, many of which relied extensively on 'dumping' to push European competitors out of business. A more hostile Europe would have taken much more protectionist measures and ended up much like China, with its own crop of tech giants.
And then there are the two programs left out: the railroads and the AI datacenters. Private enterprise simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.
As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.
OpenAI's window for showing (partial) return on investment is as short as this year, or their IPO risks failure. And if they don't deliver, similar massive economic trouble is assured.
Can you explain that? I really have no idea what you are referring to?
The bubble failed in the sense that massive commitments for new railways were made, and then the 1847 economic crisis caused investment to dry up, which collapsed the bubble and put a halt to the railroad construction boom. Those railway commitments never materialized, and stock market crashes followed.
I'm also being a little cheeky with what "massive economic trouble" entails; While the stock market was heavy on railroads and crashed right into a recession, the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today. As such, the main historical comparison is structural.
(Similarly, the AI bubble is likely to burst "by itself" unless OpenAI's IPO is truly catastrophically bad. What's more likely is that a recession happens and then triggers a stock market collapse, and the two intensify each other. And so these historical examples of similar situations may prove illustrative.)
Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.
No. Re-read the comment.
I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.
The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. That doesn't mean it's not a good thing for humanity. It just means it would have been a bad idea to have the project run & funded by e.g. SpaceX. The project would have failed; you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly-merged SpaceX-xAI.)
I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.
I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
The big difference is that the current AI bubble isn't building durable infrastructure.
Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.
AI is a different story. I would be very surprised if any of the current GPUs are still in use only 20 years from now, and newer models aren't a trivial expansion of older ones either. Keeping AI going means continuously making massive investments - so it had better find a way to make a profit fast.
It's always like that with software. You can still run an OS or a program made 20 years ago, in some cases that program may in fact have no modern replacements available (think niche domains) - meanwhile, in those 20 years, you've probably churned through 5-10 generations of computing hardware.
Reality check: they are already astoundingly meaningful and transformative AI. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand-wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.
The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?
Maybe? It seems as if the tech is starting to taper off already and AI companies are panicking and gaslighting us about what their newest models can actually do. If that's the case the industry is probably in trouble, or the world economy.
I think they have been gaslighting us from the beginning.
Like Madoff, they’re desperate to pump their Ponzi scheme for as long as they can.
Tulips: weeks
GPUs: 6 years
Fiber: 20-50 years
Rail, roads, bridges: 50-100+ years
Hyperscalers closer to tulips than other hard infra.
The only reason any "maintenance" on them is expensive is corruption, which at the municipal level rivals the current administration in some places.
LLMs+Data centres on the other hand...
Likewise I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI," it's having oodles of fast compute available online and a market demanding more of it. OTOH comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.
This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.
We're seeing exactly the same thing with AI: massive investment creating a bubble without a payoff. We know the value will decline over time as software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize this and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
But what I see is the two big costs for America:
1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets
2) The large tech firms that participated in large AI-related capex won't be trusted with their cash balances - i.e., having to return more cash and therefore less money for reinvestment
All the hype and fanfare that draw in investment come with a cost - you have to deliver. People have an asymmetric relationship between gains and losses.
...
And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive RoI.)
The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chamber.
Preliminary evidence, but given this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something that report calls out) this is significant.
I would love to see another report that isn't a year old with actual ROI figures...
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations of that discrepancy being Goodhart's law. A large amount of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
The one Google's putting in KC North is 500 acres [0] and there were $10 billion in taxable revenue bonds put up by the Port Authority to help with the cost.
This for a company that could pay for that in cash right now.
[0] https://fox4kc.com/news/google-confirms-its-behind-new-data-...
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
And what is the ROI on either of those right now?
If they were laid on a sensible route, completed on budget and time, and savvily operated. Many railroads went bust.
We aren't even getting infrastructure out of it; they are just powering it with gas turbines...
Trying to design a cancer cure by setting a trillion alight on AI is like trying to achieve UBI by funneling citizen's taxes into Polymarket, so they may operate their free supermarket.
[0] https://www.youtube.com/watch?v=ijTxAfFUHkY
We always wish that our doctors would stay up to date on all of the current medical literature as they practice, and some of them do. In theory, AI systems could greatly accelerate a person's ability to retrieve and extract insights from the current body of knowledge.
Of course, that is highly fraught, but, in theory, I think I see what they're going for.
Medical treatment has never been about asking questions and getting perfect answers. Excellent doctors and nurse practitioners have a great intuition for which questions to ask based on cues during patient assessment.
Writing prescriptions?
Ok, I can see how AI could theoretically do that (assuming it doesn’t hallucinate and kill a bunch of people). Oh and don’t think it’ll be so easy to give AI the legal authority to prescribe controlled substances. And insurance companies may take issue with expensive prescriptions written by a chat bot.
Perform surgeries? Stitch wounds?
That’s decades away. And that also opens a legal can of worms. Maybe the AI lawyers can figure something out.
I’m getting my popcorn ready for the bubble pop.
>The term “hyperscale” first emerged in the late 1990s, heralding a paradigm shift in the world of computing. It was primarily used to describe the awe-inspiring scale and capabilities of data centers...
There’s a loop of everyone is saying stuff because everyone else is saying stuff that turns into a sort of reality inspired fan fiction.
It’s not just that it’s wrong or imprecise, that I expect, it’s that the folklore takes on a life of its own.
The US spent ~$12 trillion in ~2024 dollars on nuclear weapons between 1940 and 1996, and the vast majority of that spending was in the 1950s and early 1960s.
https://en.wikipedia.org/wiki/Nuclear_weapons_of_the_United_...
Or is this "we said we are going to invest $X"? What about the circular agreements?
I was reading geohot's musings about building a data center cost-effectively, and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that... you might come out ahead.
And that dude is anything but a green fanatic. But he's a pragmatist.
An analogy would be "all the money spent on transportation infra" over some period of time.
~$6.5 trillion
edit - sorry, it is in fact adjusted, text is kinda hard to see
I certainly think it was a mistake.
The only problem is, if AI doesn’t solve cold fusion, we’re back to square one. And a few trillion dollars in the hole.
Then the first question we ask it is: 'How do we fix climate change?' And it answers: 'you can start by unplugging me'
And that point is right before rock bottom.