Discussion (7 Comments)

cs702•about 2 hours ago
Was this written by an LLM? It def reads like it!
pierrekin•about 2 hours ago
Yes, it was created by a tool called Sourcery.

Sourcery Show HN: https://news.ycombinator.com/item?id=47996426

The project is currently private, I'd love to have access to its source.

jubilanti•about 2 hours ago
Oh so some random user with no credentials, reputation, or real name just typed "do deep research on the AI infrastructure financial bubble and write a report" then submitted it to HN?

Why should I bother reading what may or may not be a pile of unverified hallucinations?

gizajob•about 2 hours ago
An LLM that can’t format HTML, so it crams everything into an ugly spread across a PDF.
mnky9800n•about 3 hours ago
This is the same bubble everyone has been talking about for a while now. GPUs don’t last long enough to justify the infrastructure cost. But what if they last longer than the current estimated life cycles? That’s always my question as someone using A80s and A100s in 2026.
mattmight•about 2 hours ago
Wondering the same, and in somewhat different terms.

And as models shrink in size yet go up in intelligence and performance, I'm finding ever more life in older hardware.

When I got my M1 Max in 2021, GPT-3 was about 1.5 years old and it was SOTA.

Yet, that machine can now run models that crush GPT-4, and even compete with o1 (the SOTA from about 1.5 years ago).

The idea that I could run something like that locally would have seemed absurd in 2021.

Yet, if somehow I'd had those local models in 2021 on the exact same hardware, I would have had, by far, the most powerful AI on the planet -- and that would have remained true for the next several years.

I'm also noticing that the ever-improving smaller models I can run on this machine are crossing the "good enough" threshold for ever more tasks by the month.

I just don't need a frontier model for every task.

I have an M4 Max 128 GB RAM now, but I still find plenty of tasks to delegate to the M1 Max machine.

I don't know how far this can go in the limit in terms of packing more intelligence into smaller models, but well-maintained older hardware seems poised to deliver increasing value in terms of "intelligence per watt-hour."
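The "intelligence per watt-hour" comparison above can be sketched with simple arithmetic. All figures below are hypothetical, made up purely to illustrate the shape of the calculation (they are not benchmarks of any real machine or model):

```python
# Hypothetical sketch of an "intelligence per watt-hour" comparison.
# The throughput and power numbers are invented for illustration only.

def tokens_per_watt_hour(tokens_per_second: float, power_watts: float) -> float:
    """Tokens generated per watt-hour of energy consumed."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / power_watts

# Same (hypothetical) machine drawing 60 W, running an older model
# versus a newer, more efficient small model of comparable quality.
older_model = tokens_per_watt_hour(tokens_per_second=5, power_watts=60)
newer_model = tokens_per_watt_hour(tokens_per_second=40, power_watts=60)

print(f"older model: {older_model:.0f} tokens/Wh")   # → 300 tokens/Wh
print(f"newer model: {newer_model:.0f} tokens/Wh")   # → 2400 tokens/Wh
```

Under these made-up numbers, the same hardware delivers 8x more output per watt-hour simply because the model improved, which is the sense in which old machines can gain value over time.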

chermi•about 2 hours ago
Hasn't exactly the opposite proven true? I thought they were actually appreciating in some cases? As token value goes up, old hardware becomes valuable.
latchkey•about 2 hours ago
> GPUs don’t last long enough to justify the infrastructure cost.

I'm CEO of an AMD neocloud. Confirming this is a myth.

https://x.com/HotAisle/status/2045181374030856300

drzaiusx11•about 3 hours ago
Is it really hidden if it's hiding in plain sight?
jsksoswk•about 3 hours ago
Yes, because it’s still hiding. I can hide from a clicker because they can’t see me. But if I make a noise, they might suck my face off and inject me with spores.
kiddico•about 2 hours ago
Not sure if that cultural touch point is as well known as you feel it is lol.
GavinAnderegg•about 3 hours ago
There's also an HTML version here: https://financial-ai-bubble.pagey.site