
Discussion (32 Comments)
Interestingly, Gemma 4 26B-A4B and Qwen3.6 27B (dense) have been left out of the comparison.
The smaller models are becoming very good, and quantization techniques like importance weighting and TurboQuant on model weights let you run aggressively quantized versions (IQ2, TQ3_4S) on consumer hardware with minimal perplexity and quality loss.
Very exciting times for local LLMs.
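To make the "importance weighting" idea concrete: the intuition is that not all weights matter equally, so the quantizer should minimize reconstruction error weighted by each weight's estimated importance (e.g. activation statistics). Below is a minimal numpy sketch under that assumption; the function name and the scale grid search are illustrative, not any library's actual API.

```python
import numpy as np

def quantize_importance_weighted(w, importance, bits=4, n_grid=64):
    """Toy importance-weighted quantization: round weights to a symmetric
    integer grid, picking the scale that minimizes importance-weighted
    squared reconstruction error via a simple grid search."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for symmetric int4
    max_abs = np.abs(w).max()
    best_err, best = np.inf, None
    for frac in np.linspace(0.2, 1.0, n_grid):
        scale = frac * max_abs / qmax
        q = np.clip(np.round(w / scale), -qmax - 1, qmax)
        err = np.sum(importance * (w - q * scale) ** 2)  # weighted error
        if err < best_err:
            best_err, best = err, (q.astype(np.int8), scale)
    return best

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
# stand-in for per-weight importance, e.g. derived from activation stats
imp = rng.uniform(0.1, 1.0, size=256).astype(np.float32)
q, scale = quantize_importance_weighted(w, imp, bits=4)
recon = q * scale
```

Real implementations work per-block with calibration-derived importance matrices rather than a global scale, but the objective (error weighted by how much each weight matters) is the same.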
Also, *ops work, which in my experience can actually be more complicated than SWE, is obviously underrepresented there.
I like their honesty in the benchmarks; it looks like Qwen3.6 35B is outperforming their Laguna M.1 225B model.
I usually score pretty well in colour perception tests but distinguishing between those two purples made me doubt myself.
One nit: I've seen on this homepage, and many others, this notion that the people behind the models are "working towards AGI".
I get that this is marketing speak, but transformers are not AGI, and they will never be AGI, so it'd be great if people stopped saying that, as it wears out the meaning of "working towards AGI".
Like the claim "transformers are AGI", this needs proof; otherwise it should be prefixed with "I think". And honestly, the positive proof is easier than the negative one (you just need to build one transformer model that is an AGI, whereas the "never" claim requires you to enumerate all possibilities).
The negative proof is there in the definition itself. Transformers are not AGI; they're frozen human intelligence of the autocomplete variety. That can never be AGI, and anyone who says otherwise doesn't understand transformers or AGI.
Transformers have approximate knowledge of many things. Is this not 'general'? Where is the goalpost here?
Of course not. That's like saying the Encyclopedia Britannica is AGI.
> What does AGI mean to you?
I would define AGI as human-like machine intelligence (or superior).
This is difficult for some people to understand because they don't understand what "human-like" means in the first place. Neuroscientists would be able to set some of these wayward computer scientists straight on this question.
But is that a hard requirement? Can a machine have rat-like intelligence? Is all intelligence human-like (human-centric mind-blindness, much)?
> Of course not. That's like saying the Encyclopedia Britannica is AGI.
Well, I'd classify that as GK, general knowledge. Not artificial or intelligent.
Let's consider a definition of intelligence as the act of 'manipulating data'. Do you have a better general definition of intelligence?
I blame it on the big companies in the space, but seeing intelligent folks regularly attribute intelligence to a complex autocomplete system is disappointing.