

Discussion (10 Comments)

zshn25 · about 3 hours ago
Splitting TPUs into dedicated training vs inference chips feels like an admission that the bottleneck has shifted from FLOPs to memory bandwidth + latency. Will future gains come more from memory and system design than from raw compute scaling? And what does that say about scaling laws?
xnx · about 3 hours ago
> Splitting TPUs into dedicated training vs inference chips feels like an admission that the bottleneck has shifted from FLOPs to memory bandwidth + latency.

With the expected scale of inference, it makes economic sense to build dedicated hardware for each task if the workloads differ even slightly. It's probably similar to the dedicated video-decoding chips in TVs, which are far cheaper and more efficient than chips that also have to encode video.
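
Neither comment spells out the arithmetic behind "the bottleneck is memory bandwidth," so here is a back-of-envelope roofline check. All chip numbers are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope roofline check: is single-stream LLM decode
# compute-bound or memory-bound? All hardware numbers are illustrative.

peak_flops = 900e12      # hypothetical accelerator: 900 TFLOP/s (bf16)
mem_bw     = 7.4e12      # hypothetical HBM bandwidth: 7.4 TB/s

# Ridge point: arithmetic intensity (FLOPs per byte moved) needed to
# saturate the compute units instead of the memory system.
ridge = peak_flops / mem_bw            # ~122 FLOP/byte

# Decoding one token at batch size 1 reads every weight once (2 bytes
# in bf16) and does ~2 FLOPs per weight (multiply + accumulate).
intensity_decode = 2 / 2               # 1 FLOP/byte

# Training/prefill reuses each weight load across many tokens; with a
# batch of B tokens the intensity grows roughly linearly in B.
batch = 256
intensity_prefill = 2 * batch / 2      # 256 FLOP/byte

for name, ai in [("decode b=1", intensity_decode),
                 ("prefill b=256", intensity_prefill)]:
    bound = "memory-bound" if ai < ridge else "compute-bound"
    print(f"{name}: {ai:.0f} FLOP/byte vs ridge {ridge:.0f} -> {bound}")
```

Under these assumptions, decode sits two orders of magnitude below the ridge point (memory-bound) while prefill sits above it (compute-bound), which is the workload split that makes separate training and inference chips plausible.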

sdenton4 · about 2 hours ago
I think the first two paragraphs of the post say exactly that the bottleneck is memory: long contexts, and bigger but less FLOP-intensive models (MoEs).

The funny thing about scaling laws is that as soon as they were known, the whole objective became learning how to break them - bending the curve, at least. They provided an incredibly useful target, but 'law' was a bit too strong a word.
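
For a concrete sense of the "target" being described, here is a sketch of the Chinchilla-style parametric loss fit from Hoffmann et al. (2022). The coefficients are that paper's published fit; the compute budget and model sizes below are illustrative:

```python
# Chinchilla-style parametric scaling law (Hoffmann et al., 2022):
# predicted loss as a function of parameters N and training tokens D.
# L(N, D) = E + A/N^alpha + B/D^beta; coefficients are the published fit.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Same compute budget (C ~ 6*N*D), spent two different ways:
C = 1e23                               # total training FLOPs, illustrative
for n in (7e9, 70e9):                  # 7B vs 70B parameters
    d = C / (6 * n)                    # tokens the budget allows
    print(f"N={n:.0e}, D={d:.2e} -> predicted loss {loss(n, d):.3f}")
```

"Bending the curve" in this framing means changing the fitted constants or exponents (via architecture, data, or training recipe) rather than just moving along the curve with more compute.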

mathisfun123 · about 2 hours ago
> admission that the bottleneck has shifted

There's no admission - this has always been known.

juancn · about 1 hour ago
Super interesting, but it's so damn hard to find any detail.

I would love to see an instruction set reference for one of these; all you get is hardware architecture diagrams or high-level APIs.
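
As a sketch of what that high-level surface looks like in practice: the public way to program TPUs is through XLA, typically via JAX. You can dump the compiler IR a function lowers to, which is about as close to an "instruction listing" as the public stack exposes; the VLIW instruction set underneath is not published. This uses JAX's public lowering API and runs on any backend:

```python
# The public programming surface for TPUs is XLA, reached here via JAX.
# You can inspect the compiler IR the toolchain consumes, but the
# chip's actual instruction set underneath is not published.

import jax
import jax.numpy as jnp

@jax.jit
def attn_score(q, k):
    # Toy attention-score kernel: scaled matmul followed by softmax.
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]))

q = jnp.ones((128, 64), dtype=jnp.bfloat16)
k = jnp.ones((128, 64), dtype=jnp.bfloat16)

# .lower() stops before backend code generation; .as_text() dumps the
# StableHLO. On a TPU host, this is what the TPU compiler consumes.
print(attn_score.lower(q, k).as_text()[:800])
```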

ttul · about 2 hours ago
No matter how smart your large language model is, if you can’t find the energy to power it, it won’t run. I could imagine Google winning merely because their chips are more efficient. Of course, the other labs are capable of making chips, but Google has been doing it for years.
speedping · about 2 hours ago
2.764 petabytes of HBM per 8i? So that's where all the RAM went.
londons_explore · about 2 hours ago
288 TB/pod (1024 chips).
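
As a quick unit check on the figures being quoted, here is the per-chip arithmetic implied by londons_explore's numbers; this is only arithmetic on what's in the thread, not a claim about the actual spec:

```python
# Unit check on the HBM figures quoted above. This is arithmetic on
# the commenters' numbers only, not a claim about the actual spec.

TiB = 1024**4
GiB = 1024**3

pod_hbm   = 288 * TiB   # "288 TB/pod", read as binary terabytes
pod_chips = 1024        # "(1024 chips)"

per_chip = pod_hbm / pod_chips
print(f"{per_chip / GiB:.0f} GiB of HBM per chip")  # -> 288 GiB
```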
ricardo81 · about 2 hours ago
QuantumNomad_ · about 2 hours ago
They are different blog posts, written by different people at Google.