
Discussion (39 Comments)

BewareTheYiga · 27 minutes ago
Caveman made me laugh and that, in theory, should count for something.
bombcar · 13 minutes ago
Grug like caveman. Grug think author should have used “be brief” on article.
0xbadcafebee · 38 minutes ago
I tell chats to "be brief" all the time when they're being too verbose, but I never thought to put it in coding agent instructions. Thanks for the benchmark! I wonder how one would put this in AGENTS.md so that it makes sense as a general instruction?
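For illustration, one way such a standing brevity instruction might look in an AGENTS.md; the wording below is an assumption for the sake of example, not something from the post or this thread:

```markdown
<!-- Hypothetical AGENTS.md snippet; wording is illustrative only -->
## Output style
- Be brief. Prefer the shortest answer that fully addresses the request.
- Skip preamble, restating the question, and summaries of what you just did.
- Expand only when asked, or when brevity would drop a required caveat.
```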
avaer · about 1 hour ago
Thanks for the research!

Though I feel like industry veterans (especially those working with LLMs) came to this conclusion without having to write a single prompt. Even ignoring the technical merits of these kinds of hacks, if you think you've outwitted billions of dollars of statistics with a prompt, you're probably wrong at this point.

What I find most interesting is the popularity of these snake oils, especially the ones that are easy to install and never check. The tech moves so fast and the research is so scarce and poor-quality that the bullshit asymmetry principle wins and people buy into these cargo cults.

Maybe we need a plugin to check if a new plugin/prompting technique/LLM lifehack is BS.

max-t-dev · 21 minutes ago
I think there is some benefit to plugins, though it's hard to say how much. I find the superpowers plugin quite good, mostly in its structured approach to a conversation. Generally they do feel pretty overhyped.
0xbadcafebee · 31 minutes ago
The thing is they're not BS when they're released. Prompt Engineering was a real thing that had real results, but then they re-trained the models and now prompt engineering isn't needed on large models. Techniques are gonna vary over time.
oezi · 41 minutes ago
Maybe we need a term such as prompt homeopathy to call out prompt engineering without any empirical proof.
max-t-dev · 22 minutes ago
Hahaha
max-t-dev · about 5 hours ago
Author here. Caveman is a popular Claude Code plugin that compresses Claude's responses via a custom skill with intensity modes. I wanted to know whether it actually beats the simplest possible alternative: prepending "be brief." to prompts.

The setup: 24 prompts, 5 arms, judged by a separate Claude against per-prompt rubrics covering required facts, required terms, and dangerous wrong claims to avoid. That gives 120 scored responses, with 100% key-point coverage across every arm and zero must_avoid triggers. Headline: "be brief." matched caveman on tokens (419 vs 401-449) and quality (0.985 vs 0.970-0.976).

Caveman has real value beyond compression: consistent output structure, intensity modes, the Auto-Clarity safety escape. But the compression itself isn't the differentiator I expected.

The harness is open source and strategy-agnostic if anyone wants to add an arm: https://github.com/max-taylor/cc-compression-bench

Happy to answer questions about the methodology, the per-category variance findings, or the bits I cut from the writeup.
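For readers wondering what "per-prompt rubrics" plus a judge pass could look like in practice, here is a minimal sketch. The field names mirror the description above, but this is not the harness's actual schema, and the keyword check below is only a stand-in for the LLM judge:

```python
# Hypothetical rubric shape and a naive scoring pass. The real benchmark uses
# a separate Claude as the judge; string matching here is just a stand-in.
rubric = {
    "prompt": "Explain what the TCP three-way handshake does.",
    "required_facts": [
        "client sends SYN",
        "server replies with SYN-ACK",
        "client's ACK completes the handshake",
    ],
    "required_terms": ["SYN", "ACK"],
    "must_avoid": ["UDP performs a handshake"],
}

def naive_score(response: str, rubric: dict) -> dict:
    text = response.lower()
    term_coverage = sum(t.lower() in text for t in rubric["required_terms"]) / len(
        rubric["required_terms"]
    )
    triggered = [bad for bad in rubric["must_avoid"] if bad.lower() in text]
    return {"term_coverage": term_coverage, "must_avoid_triggered": triggered}
```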
dataviz1000 · about 2 hours ago
> there was 1 run per prompt per arm

My understanding is that there was only 1 run per configuration?

If that is correct, then because of the run-to-run variability it really doesn't say much. It will take several trials per prompt per arm before it starts to look like it's stabilizing on a plot. It is prohibitively expensive, so I've been running the same prompt with the same model 5 times in order to get a visual understanding of performance.

Someone did the same with lambda calculus yesterday. I wanted to make the point about how much run-to-run variability and cost difference there is with the same prompt and the same model across only 5 trials. I classified each of the thinking steps using Opus 4.6 (costs ~$4 in tokens per run just for that) and plotted them with custom flame graphs. [0]

When the run-to-run variability is between 8,163 and 17,334 tokens, none of these tests mean that much.

[0] https://adamsohn.com/lambda-variance/
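As a rough illustration of the multi-trial approach being advocated here (run_once is a hypothetical stand-in for whatever invokes the agent and counts output tokens; nothing below comes from the linked write-up):

```python
# Sketch: repeat the same (arm, prompt) pair several times and look at the
# spread before trusting small between-arm deltas.
import statistics
from typing import Callable

def token_spread(run_once: Callable[[], int], trials: int = 5) -> dict:
    counts = [run_once() for _ in range(trials)]
    return {
        "trials": trials,
        "mean": statistics.mean(counts),
        "stdev": statistics.stdev(counts),
        "range": (min(counts), max(counts)),
    }
```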

max-t-dev · about 2 hours ago
Yeah, fair point. The benchmark is single-run per arm-prompt pair, so the variance finding on safety categories could be noise rather than signal. The findings doc flags this for the score deltas (anything under 0.02 between arms is in the judge's noise floor), but I should have applied the same caveat to the per-question token variance. Will read the lambda-variance write-up; multi-trial with cost classification is the right direction. The single-shot harness was deliberately scoped for a clean compression-only comparison before adding turns or trials, but you're right that without trials the variance findings aren't as solid. Thanks for the reply.
oezi · 39 minutes ago
When reading your summary I was wondering how many of those 400 tokens were consumed by the caveman ruleset.
adamsmark · about 1 hour ago
Write caveman summary too. Fast read.
ricardobeat · about 2 hours ago
Thanks for sharing this, really interesting results.

Slightly off-topic: it's quite apparent that you've used Claude as an editor for the blog post. Every sentence has been sanded smooth — the rough edges filed off, the voice flattened, the rhythm set to metronome. It doesn't read like writing anymore. It reads like content. Neat little triplets. Tidy paragraphs. A structure so polished it could pass a rubric, but couldn't hold a conversation. /s

In my opinion that is unnecessary and detracts from a great, simple piece. I miss human writing.

max-t-dev · about 2 hours ago
Yeah, definitely a good point. Claude assisted with editing and tidying up the content, with the caveat that it can flatten the voice. I agree the humanity behind writing is disappearing, and perhaps that's something I should consider in more detail next time. Thanks for the comment.
SwellJoe · about 2 hours ago
Also extremely verbose, in standard LLM slop style. Should have told Claude to "be brief" when telling it to write this post.
brcmthrowaway · about 1 hour ago
Stop using an LLM to write blog posts
0-_-0 · about 1 hour ago
How about caveman+be brief?
max-t-dev · about 1 hour ago
As much as I wish it stacked like that, I don't think it would make a difference haha
greenavocado · about 1 hour ago
You can unlock additional compression by using a lightweight model to convert your query to wenyan-lang before submitting it to the expensive model.
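A minimal sketch of that two-stage idea using the Anthropic Python SDK; the model names are placeholders, and whether the conversion saves money on net is exactly the kind of thing the benchmark above would need to test:

```python
# Hypothetical two-stage pipeline: a cheap model rewrites the query into a
# terser form (wenyan-lang, per the comment) before the expensive model sees
# it. Savings are on the expensive model's input tokens and are not
# guaranteed to beat the cost of the extra call.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def compress_query(query: str) -> str:
    resp = client.messages.create(
        model="claude-3-5-haiku-latest",  # placeholder "lightweight" model
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": "Rewrite this request as tersely as possible, "
                       "wenyan-lang style, keeping every constraint:\n\n" + query,
        }],
    )
    return resp.content[0].text

def answer(query: str) -> str:
    resp = client.messages.create(
        model="claude-opus-4-1",  # placeholder "expensive" model
        max_tokens=1024,
        messages=[{"role": "user", "content": compress_query(query)}],
    )
    return resp.content[0].text
```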
deadbabe · about 1 hour ago
I wish they would change the name to caveperson.
dnautics · about 1 hour ago
Or better yet, actually use "grug", which comes with architectural sense.
ramesh31 · about 2 hours ago
Caveman sounds clever if you have no idea how LLM reasoning works. Talking through a problem out loud, in depth, is a critical part of how things like Claude Code even get to a result. Those aren't "wasted tokens", they're an integral part of how the LLM reaches a conclusion and completes its chain of reasoning.
max-t-dev · about 1 hour ago
Caveman doesn't compress the reasoning, only the output. The model still does its full reasoning before generating the response; caveman just affects how the final response is formatted.
ramesh31 · about 1 hour ago
>The model still does its full reasoning before generating the response, caveman just affects how the final response is formatted

Right, and that final response forms the latest context for your next follow-up prompt. Not having that final reasoning laid out in the conversation history leaves a huge gap in successive reasoning. I remember playing around with this idea in the Sonnet 3.x days and it was immediately obvious how the ability to handle long running tasks degraded. If you are just doing single-shot work for some reason, sure, but that's not what most real world usage looks like these days.
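To make that concrete, a small sketch of how the previous answer becomes next-turn context in a typical chat-style API (plain data structures only; no specific provider is implied):

```python
# The assistant's previous answer is replayed verbatim as context for the
# next turn, so a heavily compressed answer is also a compressed memory.
history = [
    {"role": "user", "content": "Refactor the auth module and explain your plan."},
    {"role": "assistant", "content": "Plan: split token logic out. Done."},  # caveman-style
]

def with_follow_up(history: list[dict], follow_up: str) -> list[dict]:
    """The follow-up is answered against whatever survived into history."""
    return history + [{"role": "user", "content": follow_up}]

messages = with_follow_up(history, "Why did you split the token logic that way?")
# The model can only reason from "Plan: split token logic out. Done."
```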

magicalhippo · 5 minutes ago
I don't know how Claude and such do it, but the latest Qwen model supports preserving reasoning between calls, which, based on what I've heard, does help a fair bit.
lofaszvanitt · about 2 hours ago
Caveman is useless for me. We are in the year 2026, computers are here to serve me, and bring me comfort. Caveman is a caveman, speaks like an idiot. I don't want to interact with an idiot. It's irritating, and as the article states, an overhyped turd.

It is the same idiocy that permeates EV cars. You buy an expensive car to get from A to B and, at the same time, to have some comfort. When I have to think about using the seat heating or not, I'm out of my comfort zone. So no, fuck caveman, and I don't fucking care about the burned tokens.

Be brief. It's easy, no setup needed, not another mindless mumbo-jumbo extension with its 325 dependencies.

gavmor · 4 minutes ago
Doesn't "be brief" lobotomize the model, too? The good stuff comes at the ends of difficult sentences, ie the latent gold lies at the end of fully arcing latent rainbows, no?
kingstnap · about 1 hour ago
Of the things you could complain about in modern cars as being too complicated, you chose turning on seat heating???

Like you push the seat heating button if your seat feels cold. What is there to think about?

fragmede · 12 minutes ago
On an electric car that yells at you about your remaining range and warns that you won't make it to your destination unless you charge, turning on the seat warmers drops that range. So you have to think about whether you'd rather have a toasty butt and stop to charge, or just be colder and get there sooner. But you have to charge anyway.
max-t-dev · about 2 hours ago
Agree "be brief." being simpler with no setup is most of what people need in practice. To be fair to caveman though, it does more than compression; consistent output structure, intensity modes via slash commands, hook-based ruleset persistence, the safety escape on destructive ops. The benchmark only tested the compression piece, and there the two-word prompt held its own.
loloquwowndueo · about 2 hours ago
> I don't want to interact with an idiot.

Then why are you using AI?

Not a big difference between an articulate idiot and a succinct one.

lofaszvanitt · about 2 hours ago
Have to test its limits... to cut through the BS. Otherwise you'd have to read whitepapers...
adamsmark · about 1 hour ago
But you can turn off brain. Try make self idiot. Save brain energy for important. Smarty speaks in idiot. When smarty speak like that is consistent. Idiot understand fast.

It would have been hilarious if the author spoke like a caveman in his video or had a section in that article where he explained his conclusions like a caveman.

rideontime · about 1 hour ago
Was this actually easier to write than just writing what comes naturally?
adamsmark · about 1 hour ago
Heck no. I had fun though.
eulgro · about 2 hours ago
I enabled it and I had to read carefully to check if it was really active... turns out I never read the words that caveman omits, so to me it makes zero difference.
max-t-dev · about 2 hours ago
Yeah, makes sense. The appeal is more about cutting output tokens for cost than the downstream reading experience. But the benchmark suggests it doesn't offer as much benefit as "be brief."
numpad0 · about 1 hour ago
Is caveman speech brief, or is it just more consistent with the Chinese language? The Chinese language famously lacks ALL inflections, conjugations, or anything else that modifies the spelling of words.