
Discussion (109 Comments)
They produce a drastically lower number of tokens to solve a problem, but they don't seem to have put enough effort into refining their reasoning and execution: they produce broken tool calls and generally struggle with 'agentic' tasks. For raw problem solving without tools or search, though, they match Opus and GPT while presumably being a fraction of the size.
I feel like Google will surprise everyone with a model that's an entire generation beyond SOTA once they go from prototyping to shipping a model that's no longer a preview. All models up till now feel like prototypes that were pushed to GA just so they have something to show investors and to integrate into their suite as a proof of concept.
Agreed, Gemini-cli is terrible compared to CC and even Codex.
But Google is clearly prioritizing having the best AI to augment and/or replace traditional search. That's their bread and butter. They'll be in a far better place to monetize that than anyone else. They've got a 1B+ user lead on anyone - and even adding in all LLMs together, they still probably have more query volume than everyone else put together.
I hope they start prioritizing Gemini-cli, as I think they'd force a lot more competition into the space.
Using it with opencode I don't find the actual model to cause worse results with tool calling versus Opus/GPT. This could be a harness problem more than a model problem?
I do prefer the overall results with GPT 5.4, which seems to catch more bugs in reviews that Gemini misses and produce cleaner code overall.
(And no, I can't quantify any of that, just "vibes" based)
When a lot of people ask the same thing, they can just index the questions, like results on a search engine, and recalculate them only every so often.
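A minimal sketch of what that kind of answer cache could look like, assuming a normalized query key and a recompute window; the names and TTL here are made up for illustration:

    import hashlib
    import time

    # Hypothetical cache of answers to frequently repeated questions,
    # recomputed at most once per TTL window.
    CACHE: dict[str, tuple[float, str]] = {}
    TTL_SECONDS = 3600

    def _key(query: str) -> str:
        # Collapse trivially different phrasings onto one cache key.
        normalized = " ".join(query.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def answer(query: str, run_inference) -> str:
        k = _key(query)
        now = time.time()
        if k in CACHE and now - CACHE[k][0] < TTL_SECONDS:
            return CACHE[k][1]           # serve the indexed answer
        result = run_inference(query)    # the expensive model call
        CACHE[k] = (now, result)
        return result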
This experience makes me believe they have highly advanced AI internally and see no reason, and have no will, to share it. OpenAI and Anthropic FORCED them to release what they can, just to stay relevant.
The TPUs are damn awesome and I would love to fab a small run of them for myself. But they're fully closed source, I'm afraid. Also, Google is known to hate the customer, more or less.
(disclosure: I am long GOOG, for this and a few other reasons)
And the converse is true also. I mean, look at NVIDIA. For the longest time they were just a gaming card company, competing with AMD. I remember alternating between the two companies for my custom builds in the 90s and it basically came down to rendering speed and frame rate.
But Jensen bet on the "compute engine" horse and pushed CUDA out, which became the de facto standard for doing fast, parallel arithmetic on a GPU. He was able to ride the Bitcoin wave and then the big one, DNNs. AMD still hasn't caught up (despite 15 years having gone by).
3.1 pro is just fundamentally not on the same level. In any context I've tried it in, for code review it acts like a model from 1yr ago in that it's all hallucinated superficial bullshit.
Claude code is significantly less likely to produce the same (yet still does a decent amount). Gpt 5.4 high/xhigh is on another level altogether - truly not comparable to Gemini.
Cook did very well in all areas as well as in not trying to create a cult.
Honestly I'm rather impressed with how they handled it; they had enough of the infra and org in place to jump on it once the cat was out of the bag.
Sundar declared a code red or whatever and they made it happen. But that could ONLY happen if they had the bedrock of that ability already built.
No one really remembers now that google was a year behind.
The Google Antigravity subreddit is a shitshow though:
https://www.reddit.com/r/GoogleAntigravityIDE/
Like Apple Intelligence? Which was quite crap.
Anthropic and OpenAI are having to fight like hell to secure market share. Google just gets to sit back and relax with its browser and android monopolies.
Why did our regulators fall asleep at the wheel? Google owns 92% of "URL bar" surface area and turned it into a Google search trademark dragnet. Now Anthropic has to bid for its own products against its competitors and inject a 15+% CAC which is just a Google tax.
Now consider all the bullshit Google gets to do with android and owning that with an iron fist. Every piece of software has a 30% tax, has to jump through hoops, and even finding it is subject to the same bidding process.
These companies need to be broken up.
Google would be healthier for the economy and its own investors as six different companies. And they shouldn't be allowed to set the rules for mobile apps or tax other people's IP and trademarks.
Of course they should have to fight with the inventors of the technology they’re using.
Sam Altman's honesty problems and Elon buying a VS Code fork for $60 billion aren't signs of moral uprightness or wisdom.
There's a lot to be said for grinding away at a problem. Being on your eighth generation AI chip and seventh generation of autonomous driving hardware is how you build value. Not by hobnobbing with fascists and building an army of stock pumping retail investors.
They're helping close the distance to realistic-quality inference on phones and other smaller devices.
That said, I actually agree: Google IMHO silently dominates the 'normie business' chatbot area. Gemini is low-key great for day-to-day stuff.
It's hard to reconcile this because Google likely has the most compute and at the lowest cost, so why aren't they gassing the hell out of inference compute like the other two? Maybe all the other services they provide are too heavy? Maybe they are trying to be more training heavy? I don't know, but it's interesting to see.
I was planning on comparing them on coding but I didn't get the Gemini VSCode add-in to work so yeah, no dice.
The Android and web apps are also riddled with bugs, including ones that make you lose your chat history from threads if you switch between them. Not cool.
I'll be cancelling my Google One subscription this month.
I see it like going to the doctor and asking them to cite sources for everything they tell me. It would be ridiculous and totally make a mess of the visit. I much prefer just taking what the doctor said on the whole, and then verifying it myself afterwards.
Obviously there is a lot of nuance here, areas with sparse information and certainly things that exist post knowledge cut-off. But if I am researching cell structure, I'm not going to muck up my context making it dig for sources for things that are certainly already optimal in the latent space.
GPT (Codex) was accurate on the first run and took 12 minutes.
Gemini (Antigravity) missed 1 value because it didn't load the full 1099 PDF (the laziness), but corrected it when prompted. However, it only spent 2 minutes on the task.
Claude (CC) made all manner of mistakes, and I had to wait overnight for it to finish because it hit my limit before doing so. However, Claude did the best on the next step of actually filling out the PDF forms, though it ended up not mattering.
Ultimately I used Gemini in Chrome to fill out the forms (freefilefillableforms.com), but frankly it would have been faster to do it manually, copying from the spreadsheets GPT and Gemini output.
I also use Antigravity a lot for small greenfield projects (<5k LOC). I don't notice a difference between Gemini and Claude, outside usage limits. Besides that, I mostly use Gemini for its math and engineering capabilities.
Interesting that there's separate inference and training focused hardware. Do companies using NV hardware also use different hardware for each task or is their compute more fungible?
Dedicated hardware will usually be faster, which is why as certain things mature, they go from being complicated and expensive to being cheap and plentiful in $1 chips. This tells me Google has a much better grasp on their stack than people building on NVidia, because Google owns everything from the keyboard to the silicon. They've iterated so much they understand how to separate out different functions that compete with each other for resources.
One reason is that most clouds/neoclouds don't own workloads and want fungibility. Given that you're spending a lot on H200s and whatnot, it's good to also spend on the networking to make sure you can sell them to all kinds of customers. The Rubin CPX in NVIDIA's Vera Rubin platform is an inference-specific accelerator, and Cerebras is also inference-optimized, so specialization is starting to happen.
https://www.amd.com/en/products/accelerators/instinct.html
The cost can also change dramatically: on top of the higher token costs for Gemini Pro ($1.25/mtok input for 2.5 versus $2/mtok input for 3.1), the newer release also tokenizes images and PDF pages less efficiently by default (>2x token usage per image/page), so you end up paying much, much more per request on the newer model.
These are somewhat niche concerns that don't apply to most chat or agentic coding use cases, but they're very real and account for some portion of the traffic that still flows to older Gemini releases.
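Rough arithmetic on how those two factors compound, taking the quoted prices at face value; the page and token counts here are invented for illustration:

    # Quoted input prices (USD per million tokens).
    PRICE_25 = 1.25  # Gemini 2.5 Pro
    PRICE_31 = 2.00  # Gemini 3.1 Pro

    # Hypothetical PDF-heavy request: say a page tokenizes to ~250
    # tokens on 2.5 and, per the >2x claim, ~550 tokens on 3.1.
    pages = 100
    cost_25 = pages * 250 / 1_000_000 * PRICE_25  # ~$0.031
    cost_31 = pages * 550 / 1_000_000 * PRICE_31  # ~$0.110

    # 1.6x price * 2.2x tokens ~= 3.5x the input cost per request.
    print(f"{cost_31 / cost_25:.1f}x")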
This seems impressive. I don't know much about the space, so maybe it's not actually that great, but from my POV it looks like a competitive advantage for Google.
> One pod of TPU 8t is 121 ExaFlops; or 121,000 PetaFlops.
Meanwhile, the compute capacity of the top 10 supercomputers in the entire world is 11,487 Petaflops.
I know, I know, not the same flops, yada yada, but still. Just 1 pod alone is quite a beast.
Owning your hardware and your entire stack is huge, especially these days with so much demand. Long term, I think they end up doing very well. People clowned so hard on Google for the first two years (until Gemini 2.5 or 3) because it wasn't as good as OpenAI or Anthropic's models, but Google just looked so good for the long game.
Another benefit for them: if LLMs end up being a huge bubble that doesn't pay the absurd returns the industry expects, they're not kaput. They already own so many markets that this is just an additional thing for them, whereas the big AI-only labs are probably fucked.
All that said: what the hell do I know? Who knows how all of this will play out. I just think Google has a great foundation underneath them that'll help them build and not topple over.
IMHO that happy medium is Google. Not having to pay the NVidia tax will likely be a huge competitive advantage. And nobody builds data centers as cost-effectively as Google. It's kind of crazy to be talking ExaFLOPS and Tb/s here. From some quick Googling:
- The first MegaFLOPS CPU was in 1964
- A Cray supercomputer hit GigaFLOPS in 1988 with workstations hitting it in the 1990s. Consumer CPUs I think hit this around 1999 with the Pentium 3 at 1GHz+;
- It was the 2010s before we saw off-the-shelf TFLOPS;
- It was only last year that a single chip hit PetaFLOPS. I see the IBM Roadrunner hit this in 2008, but that was ~13,000 CPUs, so...
Obviously this is near 10,000 TPUs to get to ~121 EFLOPS (FP4, admittedly), but that's still an astounding number. It means each one is doing ~12 PFLOPS (FP4).
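That per-chip figure is easy to sanity-check from the numbers quoted upthread (the ~10,000-chip pod size is the commenter's estimate, not an official spec):

    pod_pflops = 121 * 1_000      # 121 EFLOPS (FP4) expressed in PFLOPS
    top10_pflops = 11_487         # top-10 supercomputers combined (FP64)
    chips = 10_000                # assumed chips per pod

    print(pod_pflops / top10_pflops)  # ~10.5x the top-10 combined
    print(pod_pflops / chips)         # ~12.1 PFLOPS per chip (FP4)
    # Caveat from the thread: FP4 and FP64 flops aren't comparable.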
I saw a claim that Claude Mythos cost ~$10B to train. I personally believe Google can (or soon will be able to) do this for an order of magnitude less at least.
I would love to know the true cost/token of Claude, ChatGPT and Gemini. I think you'll find Google has a massive cost advantage here.
Google could probably train models for orders of magnitude less money as you say, but they aren't. They are not capable of creating high quality models like OpenAI and Anthropic are. Their company is just too disorganized and chaotic.
Anecdotally, I don't know a single person who uses Gemini on purpose.
What if somebody cracks the problem of splitting inference between local and remote? What if someone else manages to modularize learning so your local LLM doesn't need to have been trained on how to compute integrals? Obviously we can't dissect a current LLM and say "we can remove these weights because they do math," but there's no guarantee there isn't an architecture that will allow for that.
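One way that split could look, purely as a hypothetical sketch; the confidence API and threshold are invented, not any real library:

    # Route easy prompts to a small on-device model and escalate the
    # rest to a remote frontier model.
    def hybrid_generate(prompt, local_model, remote_model, threshold=0.8):
        draft, confidence = local_model.generate_with_confidence(prompt)
        if confidence >= threshold:
            return draft                      # cheap, private, local
        return remote_model.generate(prompt)  # fall back to the big model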
Apple could also be training an LLM Siri 2.0 that knows enough to do the things you want. Setting alarms, sending messages, etc. Apple would have all the information on what the major use cases are and where Siri is currently failing. They can increase Siri's capabilities as local LLM inference improves.
As for Google creating high-quality models, I personally believe the models are going to be commoditized. I don't believe a single company is going to have a model "moat" that sustains it as a trillion-dollar company. I base this on two reasons:
1. At the end of the day, it's just software and software is infinitely reproducible and distributable. I mean we already saw one significant Anthropic leak this year; and
2. China is going to make sure we're not all dependent on one US tech company who "owns" AI. DeepSeek was just the first shot across the bow for that. It's going to be too important to China's national security for that not to happen.
And OpenAI's entire funding is predicated on that happening and OpenAI "winning".
Can you cite this? That seems absurd.
I've seen figures that suggest GPT-4 was 1.8T parameters and cost upwards of $100 million to train (also unsubstantiated), in which case the Mythos figure might be inflated and also include development costs.
So who really knows?
[1]: https://www.softwarereviews.com/research/claude-mythos-previ...
[2]: https://x.com/duttasomrattwt/status/2041903600516133016
[3]: https://www.forrester.com/blogs/project-glasswing-the-10-con...
If the whole AI bubble spectacularly collapses, at least we got a lot of cool pics of custom hardware!
Every other news story for the past month has been about lacking capacity. Everyone is having scaling issues, with more demand than they can cover. Anthropic has been struggling for a few months, especially visibly when the EU timezone is still up and the US east coast comes online; everything grinds to a halt. MS has been pausing new subscriptions for GitHub Copilot, also because of a lack of capacity. And yet people are still on about bubble this, collapse that? I don't get it. Is it becoming a meme? Are people seriously seeing something I don't? For the past 3 years models have kept improving, capabilities have gone from toy to actually working, and there's no sign of stopping. It's weird.
Demand for internet and web services is significantly higher today than in 2000, but a bubble still popped. Heck, a regular old recession or depression completely unrelated to AI could happen next year and collapse the industry. I mean, housing is more expensive than ever nearly 20 years after collapsing in the Great Recession.
The way this could happen is if model commoditization increases - e.g. some AI labs keep publishing large open models that increasingly close the gap to the closed frontier models.
Also, if consumer hardware keeps getting better and models get so good that most people can have most of their usage satisfied by smaller models running on their laptops, they won't pay a ton for large frontier models.
Though nowadays it feels like the bubble is going to end up being mainly an OpenAI issue. The others are at least vaguely trying to balance expansion with revenue, without counting on inventing a computer god.
Thanks for posting otherwise.
Edit: actually, looks like the header got captured as a figure caption by accident.