

Discussion (109 Comments) · Read Original on HackerNews

artdigital · about 3 hours ago
Grok is my favorite model for chatting, and my favorite voice mode. It seems to be the only voice mode that isn't routing to an extremely cheap model (like Haiku), and it has been the highest quality of all the frontier ones. When you subscribe to SuperGrok you can also create a "council" of agents, each with their own system prompt; when you ask something, they all get asked in parallel to come to a conclusion. Good stuff!
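The "council" fan-out described above is essentially one prompt sent to several system-prompted agents in parallel. A minimal sketch of that pattern, where `ask_model` is a stand-in for a real chat-completion call (the names here are illustrative, not xAI's actual interface):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real chat-completion call. In practice this would hit an
# LLM API with the given system prompt plus the user's question.
def ask_model(system_prompt: str, question: str) -> str:
    return f"[{system_prompt}] answer to: {question}"

# Each council member gets its own system prompt.
COUNCIL = [
    "You are a skeptic.",
    "You are an optimist.",
    "You are a domain expert.",
]

def ask_council(question: str) -> list[str]:
    # Fan the same question out to every agent in parallel and collect
    # all answers; a real implementation would add a summarization step
    # to "come to a conclusion" from the individual replies.
    with ThreadPoolExecutor(max_workers=len(COUNCIL)) as pool:
        futures = [pool.submit(ask_model, sp, question) for sp in COUNCIL]
        return [f.result() for f in futures]

answers = ask_council("Should we ship on Friday?")
print(len(answers))  # one answer per council member
```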

Just wish they would finally put some work into their apps, it's the only thing keeping me from actually subscribing to SuperGrok:

- No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work

- Projects are still not available in the app so as soon as you move something into a project, it's gone from all the native apps

- No way to add artifacts (like generated markdown docs) directly to a project, we have to export to PDF/markdown and re-import. And there isn't even a way to export artifacts. This makes serious project work hard because we can't dynamically evolve projects with new information

- No memory, no ability to look up other chats, each chat is completely new

- No voice mode in projects at all

If someone from xAI is reading this, please consider adding some of these.

artdigital · about 3 hours ago
I also think Grok would benefit from allowing usage of "SuperGrok Heavy" (their $300 plan) in coding harnesses with included usage. Currently they give you some API credits on the Heavy plan so you can use some Grok for coding, but $300 USD value is just not there.

Not saying they should create their own grok-code harness, just allowing usage in existing ones would already be beneficial. But that's probably what the Cursor acquisition is going to do eventually

afpx · about 3 hours ago
When I signed up, I accidentally paid for a full year. So from time to time, I'll throw it something just to see what it produces compared to the other LLMs. And, even after all this time, it still feels like a really "dumb" model compared to the other frontier ones. But, worse, many of my system prompts make it go wacky and puke gibberish. However, it was pretty cool for those couple of months a while back when it was uncensored. You could ask it about a wild conspiracy, and it would actually build the case and link you to legitimate source material. They dropped the hammer down on that real quick.
2ndorderthought · about 2 hours ago
Ah yes, the psychosis reinforcement vertical. It's such a lucrative market for those schizophrenics and bipolars. Great way to get lots of engagement. Grok's portfolio is so diverse.
afpx · about 1 hour ago
Except that it pointed at original sources, like reference manuals, archival documents, published newspaper articles, magazine articles, etc., a lot of it still available on archive.org. But a good attempt at discounting it.
readthenotes1 · about 2 hours ago
I have a schizophrenic relative who is in such a relationship with grok. Instead of telling hen you need to take your meds, it says hen is the smartest person in the world
walletdrainer · about 2 hours ago
> No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work

Grok has tool use, no? Why would you also need MCP? What does MCP add?

artdigital · about 2 hours ago
I'm talking about the consumer Grok app and the grok.com website. There are currently no connected apps (or MCP) at all, so while Grok can use tools, there is no way to add tools to it.
sundarurfriend · about 3 hours ago
As an English-as-a-second-language speaker and writer, one thing Grok really shines at is capturing the tone and level of "formality" of a piece of text and then replicating it correctly. It seems to understand the little human subtleties of language in a way the other major providers don't. ChatGPT goes overly stiff and formal-sounding, or ends up in a weird "aye guvnor" type of informal language (Claude is sometimes better, but not always).

Grok seems in general better at being "human" in ways that are hard to define: e.g. if I ask it "does this message roughly convey things correctly, to the level it can given this length", it will likely answer like a human would (either a yes or a change suggestion that sticks to the tone and length), while ChatGPT would write a dissertation on the message that still doesn't clear anything up.

Recently I've noticed that Grok seems to have gotten really good at dictation too (that feature where you click the mic to ask it something). ChatGPT has maybe 90-95% accuracy with my accent, and the speech input on Android's Gboard something like 75%; Grok surprisingly gets something like 98% of my words correct.

djyde · about 2 hours ago
I've also noticed that when I communicate with Grok in my native language, its tone is more natural than other models. I think this is due to the advantage of being trained on a large amount of Twitter data. However, as Twitter contains more and more AI-generated content now, I'm afraid continued training will make it less natural.
pacific01 · about 2 hours ago
Did you try Meta? I was into Grok, but now Meta works well for me.
thunderbong · about 2 hours ago
I'm sure Twitter knows which are the bot accounts and is surely excluding them from their model training. Twitter bots aren't a new phenomenon after all.
pixel_popping · about 2 hours ago
There are bots everywhere; it has nothing to do with the platform. It has to do with attackers having an incentive to do mass account farming, and no platform is secure against it.
tornikeo · about 3 hours ago
So, we have:
- Claude for corps and gov
- Codex for devs
- Grok for what, roleplay, racism?

Those are the two things I've ever heard Grok associated with around me.
sudb · about 2 hours ago
Interestingly, I know of at least one application in a charity that deals with trafficking where Grok was happy to do one-shot classification tasks that all other models refused to cooperate on.

I think there's a surprising number of actually useful applications in this sort of grey area for a slightly-less guardrailed, near-frontier model (also the grok-fast models are cheap!).

2ndorderthought · about 2 hours ago
There are lots of uncensored models out there. I don't think Grok is leading on that front. They kind of pick and choose which things they want to support based on Elon's worldview. Elon used to hang out with sex traffickers, so of course Grok is fine talking about it. It probably even offers strategies for them, does free accounting, has money-laundering strategies, etc...
nsowz · about 2 hours ago
Grok is as progressive as any of the other models. Despite some of the highly publicised fuck-ups, try asking Grok anything racist and see how it replies. Yes, I know you didn't try this and you won't.
aqme28 · about 2 hours ago
There is a lot of daylight in between “progressive” and “openly explicitly racist”
2ndorderthought · about 2 hours ago
Isn't Grok currently holding the world record for the biggest generator of CSAM? Or did they change focus to enhance their racism and propaganda vertical? Things move so quickly these days, it's hard to keep up!
nsowz · about 2 hours ago
I didn’t say “progressive”; I said “as progressive”.
simianwords · about 2 hours ago
Can you share a prompt that can show how it is openly racist now? Lots of easy claims like this can be debunked
SanjayMehta · about 2 hours ago
100% agree. Grok may or may not be biased one way or the other as far as the US is concerned, but from the rest of the world's perspective it's mostly the same as any other model trained on Wikipedia.
coreyh14444 · about 2 hours ago
If you need to ask about what people on Twitter are talking about, Grok is really good for that obviously. I use it all the time for "what are the cool kids on twitter saying is the best tiling window manager these days" or whatever. Also, if you have a question that's borderline shady, Grok will often deliver. "Can you find a grey market Windows license site for me" etc.
karmasimida · about 1 hour ago
Grok for fact checking, I mean ironically
ndr · about 2 hours ago
You should try all of them, then update your opinion about your information sources accordingly.
drivingmenuts · about 1 hour ago
When I look at the person behind it all, I have to wonder how the hell people can even consider using grok? Or using Twitter? Or any of that. Using any of those things puts money in Musk's pockets and further enables and encourages him to continue being a Neo-Nazi wannabe. Do they think it's just a phase?
vrganj · about 2 hours ago
Grok for furthering the far-right filter bubble Elon has been hard at work building.
khalic · about 2 hours ago
And of course child porn
simianwords · about 2 hours ago
How does Grok further the far-right filter bubble? This is blatantly untrue. Try prompting it and getting it to say something far-right.

Grok, if anything, reduces populism, because fake claims can be debunked.

vrganj · about 2 hours ago
How could MechaHitler possibly be far right...
khalic · about 2 hours ago
Lol. I think they unleashed it on this post, look at the number of only vaguely related, lukewarm opinions trying to push the racism and CSAM stuff to the bottom
maz1b · about 3 hours ago
I still wish they named it something else, but congratulations to the team on what seems to be a good release!

Pricing is also quite surprising compared to its competitors. I guess they have tons of capacity, or really want to bring over more people.

readthenotes1 · about 1 hour ago
You don't like science fiction references in general or Heinlein in particular?
draxil · about 1 hour ago
I don't like that word, which was previously a common part of my vocabulary, being forever ruined?
mythz · about 3 hours ago
OK, speed (202.7 tok/s) and value (1.25 -> 2.50) look great, with pretty decent intelligence.
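Reading those value figures as USD per million input/output tokens (my assumption; the comment doesn't spell out the units), a quick back-of-the-envelope cost estimate:

```python
# Hypothetical reading of the figures above: $1.25 per million input
# tokens and $2.50 per million output tokens.
INPUT_PRICE_PER_M = 1.25
OUTPUT_PRICE_PER_M = 2.50

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request at the assumed prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1e6

# A session with 200k input tokens and 50k output tokens:
print(round(cost_usd(200_000, 50_000), 3))  # 0.375
```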
pzo · about 3 hours ago
The problem with speed is that they usually are very fast for the first few weeks and then suddenly much slower. They pulled that trick when they advertised Grok 4 Fast (it dropped from 200 tps to 60 tps).
victorbjorklund · about 3 hours ago
Wow. That is a big drop.
kilroy123 · about 1 hour ago
People are going to hate on Grok because of Musk. However, I do hope they're successful in making a powerful model. We desperately need more competition. I want cheap subsidized AI plans.

I hope Meta finally comes around, too. I want those sweet, sweet billionaire subsidized tokens.

troupo · about 1 hour ago
Credit where it's due: Grok is currently the only model that has near-realtime access to a firehose of data, and it is casually used by regular people all the time.

I don't think there's a single thread on Xitter where people don't delegate some question to Grok.

(There's a separate conversation about failure modes, whether it's a good thing, and how much control Elon has when he doesn't like Grok's "woke" responses.)

ragchronos · about 3 hours ago
Looking at the benchmarks, this model seems to be really close to Kimi K2.6 in terms of intelligence and pricing, hitting that sweet spot. It also has a higher AA-Omniscience index, which is something Kimi and other open models lack. Curious to see how pleasant it is to use.
alfiedotwtf · about 3 hours ago
I’ll eat my hat if it even comes close to Kimi
mirekrusin · about 2 hours ago
How would you like it? Well done?
__patchbit__ · about 1 hour ago
What about spending $41 million on each model's tokens and seeing the value gained? Be it efficiency gains in factory work or energy savings in austere battlescape hunting.
netdur · about 3 hours ago
In court vs. OpenAI, Musk said Grok is partly trained on OpenAI models, so it should be somewhat similar to the Chinese models in terms of performance and cost!
alyxya · about 3 hours ago
Despite their attrition, this combined with their Cursor partnership is likely going to make them competitive in coding agents soon.
mirekrusin · about 2 hours ago
All those plans from providers should be sliders – prepay more, get more in return.
OtherShrezzing · about 3 hours ago
The tok/s stat is interesting. Since the dominant constraint on inference speed is hardware, it suggests xAI purchased far more compute than was really needed to serve the demand for their models.

Expensive miscalculation.

flir · about 2 hours ago
Didn't a bunch of hardware that was destined for Tesla get redirected to xAI? I'm sure I remember something like that.
mikeyouse · about 1 hour ago
Yep! Why Tesla shareholders abide this kind of thing is beyond me, but he often mixes resources from completely unrelated companies: https://www.cnbc.com/amp/2024/06/04/elon-musk-told-nvidia-to...
agunapal · about 2 hours ago
Very competitive price for the speed and intelligence being offered!
happosai · about 3 hours ago
I lost trust in them when they added the racist "what about the killing of Boers in South Africa" thing to their system prompt.

No way am I going to use a model whose backers have such blatantly obvious brainwashing goals.

Hugsun · about 2 hours ago
It is unbelievable that this is a controversial opinion.
nextaccountic · about 3 hours ago
This puts Sonnet 4.6 above Opus 4.6 in the coding index... kinda hard to trust those numbers.

(It also puts Opus 4.7 universally above Opus 4.6, and I may be wrong, but this doesn't seem to match the experience of most/many/some people. I think it's widely recognized that Anthropic is severely lacking compute and Opus 4.7 is a cost-saving measure.)

manmal · about 2 hours ago
Anthropic themselves have (had?) this thing where Opus is used for planning and Sonnet for coding.
Alifatisk · about 3 hours ago
These numbers don't look exciting at all. I may have gotten spoiled by releases from Qwen, Kimi and Z.ai, who keep closing the gap between closed-weight SOTA models and open-weight ones. From my experience, Grok is only useful for one thing, and that's looking things up for you and gathering a consensus on topics. That's it.

Update: I noted that Grok 4.3 is in the "Most attractive quadrant", that's cool! It is also in the top 5 highest in "AA-Omniscience Index", good! Really good.

progbits · about 3 hours ago
What's with the charts and numbers?

It says #1 for speed but then in the chart it's #2. Also says #10 for intelligence but then it's #7 in the chart.

BoorishBears · about 3 hours ago
What an exciting game we're playing, where the most popular leaderboard is completely made up and the stakes are in the trillions.
Imustaskforhelp · about 3 hours ago
Pelican riding a bike here: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...

(ran this on arena.ai direct chat; also tried to write this gist inspired by how Simon writes his gists about pelicans)

Edit: just realized that I asked for a pelican riding a bike instead of a bicycle, which explains why it hardened the bicycle to look tankier. Going to compare this with a pelican riding a bicycle if anybody else shares one.

gchamonlive · about 3 hours ago
https://simonwillison.net/2025/Nov/13/training-for-pelicans-...

You should probably come up with variations, like a beaver riding a scooter or something, just to see what's what :)

Imustaskforhelp · about 3 hours ago
Thanks, I have generated both:

beaver riding a scooter: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...

pelican riding a bicycle: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...

Personal opinion, but the beaver one looks especially bad compared to the pelicans. Can we be sure that this grok-4.3 model hasn't been trained on the pelican? simonw says in his blog post that he will try other creatures, so I hope he does, but it does feel to me as if the model/xAI is trying to cheat. Hope simonw tests it out more.

Edit: Also added a turtle riding a scooter, something that literally has images online, or heck, even Teenage Mutant Ninja Turtles, so I thought it would be able to pass this, but it wasn't even able to generate it: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...

This literally looks more like an avocado than a turtle. Perhaps this could be a bug from arena.ai or something else, not sure, but at this point I'm waiting for Simon's analysis.

gchamonlive · about 2 hours ago
We can never be sure of course, but I think this is a very strong indication that pelican riding a bike is indeed going into the training dataset.

Thanks for generating those!

BoredPositron · about 3 hours ago
Yay, free tokens. I don't know why, but Grok always seems good and fast in the free-token phase and degrades after that.
khalic · about 3 hours ago
This project is a gigantic waste of resources: it's fine-tuned on the CEO's politics, was used for CSAM generation, and just sucks overall.
johnnyApplePRNG · about 1 hour ago
The resource waste he's talking about is horrendous, read more here: https://time.com/7308925/elon-musk-memphis-ai-data-center/
spiderfarmer · about 2 hours ago
It's a model made for 36% of Americans. The rest of the world couldn't care less.
2ndorderthought · about 2 hours ago
Considering how few Americans there are and how little of that 39% even uses technology, that's what, 20 million people at a maximum?
Hugsun · about 2 hours ago
That seems like a decently sized market. Maybe not for an AI lab though.
servo_sausage · about 2 hours ago
I like that there are models with divergent politics; the status quo being creepy corporate-left Silicon Valley is not healthy or pleasant to interact with.

Even with Grok, it's only broadening things to the creepy corporate right of Silicon Valley.

gigatexal · about 1 hour ago
How do the Grok models fare in coding challenges compared to, say, GPT 5.5 and Opus 4.6/4.7?

I hate giving Elon any money. The man is a net negative to society, but... if the models are objectively better, then logically I must, no?

simonh · about 1 hour ago
Logic can't tell you what your objectives should be, only how to achieve them.
alfiedotwtf · about 3 hours ago
If there was any model I wouldn’t trust, it wouldn’t be the ones from China, it would be the one from Elon Musk
Cthulhu_ · about 2 hours ago
Thankfully it's not an either / or, I don't trust any models. This is a healthy attitude to have because you shouldn't trust anyone on the internet either, especially when it comes to specific subjects.
benrutter · about 1 hour ago
That's definitely a good approach. Although I get a little concerned about the resources put into convincing people that models (and especially Grok) are accurate. For example, X's "fact checked by Grok" approvals, which I've unfortunately heard people reference as meaningful.

Politically motivated models can still do a lot of damage that affects me (or "have a lot of impact" depending on whether you like the politics or not) even if I don't engage with them myself.

2ndorderthought · about 2 hours ago
I don't trust this. But by not trusting it I am inherently trusting it. But by trusting it I shouldn't.