Discussion (28 Comments)

AussieWog939 minutes ago
I've tried these small models and they're nowhere near as good as Claude or GPT-5.

The new ones running on a 16GB M1 are maybe GPT-4 level (with decent performance to be fair).

I wonder if it's possible to make some hyper-overtuned model that, say, does nothing but program in Python and gets SOTA-ish performance in that narrow task.

roscas19 minutes ago
BTW, LMStudio and a few others are really amazing. They let you download models from HF and manage many details before loading them. A medium PC with an 8 or 10GB graphics card is already a nice setup to run many models that are really good. You can also run Ollama, which is very simple to use and helps you code in VSCodium with Continue. Pretty nice!
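If you'd rather skip the GUI, the same setup is easy to script against Ollama's local API. A minimal sketch, assuming the default port 11434 and a model you've already pulled (the model name below is just an example):

  # Query a locally running Ollama server; the model must already be pulled,
  # e.g. with `ollama pull qwen2.5-coder:7b`.
  import requests

  resp = requests.post(
      "http://localhost:11434/api/generate",
      json={
          "model": "qwen2.5-coder:7b",  # example model, use whatever you have pulled
          "prompt": "Write a Python function that reverses a string.",
          "stream": False,              # one JSON response instead of a token stream
      },
      timeout=300,
  )
  print(resp.json()["response"])
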
_345about 3 hours ago
It's a seriously degraded experience from a developer's perspective. Okay, you've finally got one local LLM installed after configuring everything perfectly; what happens when you want to run a second instance? Now you've blown past your VRAM and system RAM limits, and you're stuck with just one.

Furthermore, the model they recommend doesn't quite reach ~gpt-5.4-mini-level performance. That quality dip means you may as well just pay for something like Kimi K2.6 via openrouter if you want something ~>= Sonnet 4.6 in performance as a backup for when you run out of anthropic/openai usage.

xscottabout 3 hours ago
Your point about caliber/quality is fair, but I have been pretty astonished by some of the newer/better models (Gemma 4 variants, GPT-OSS before that).

However, there isn't much of a memory increase from having multiple sessions in parallel with one model. It's an HTTP server and, other than some caching, basically stateless.
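Roughly what I mean, as a minimal sketch: the server holds the weights once, and each "session" is just the chat history the client resends with every request. This assumes a llama.cpp llama-server (or any OpenAI-compatible endpoint) on localhost:8080; port and model name will differ on your setup.

  # Two independent chat sessions against one loaded model.
  import requests

  URL = "http://localhost:8080/v1/chat/completions"  # llama-server default, adjust as needed

  def ask(history, prompt):
      history.append({"role": "user", "content": prompt})
      r = requests.post(URL, json={"model": "local", "messages": history}, timeout=300)
      reply = r.json()["choices"][0]["message"]["content"]
      history.append({"role": "assistant", "content": reply})
      return reply

  session_a, session_b = [], []
  print(ask(session_a, "Refactor this function to be iterative: ..."))
  print(ask(session_b, "Summarize the trade-offs of 4-bit quantization."))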

iibabout 2 hours ago
Doesn't llama.cpp (or similar) have to evict the KV cache for this, so that performance is degraded when running multiple sessions? Or how do you load a model in memory and then use it in multiple sessions? I am still learning this stuff.
0xbadcafebeeabout 3 hours ago
Not sure why you got downvoted. 95% of people should be paying for a subscription. It's far cheaper, far more scalable, and far less hassle.

Local AI only makes sense for a couple of use cases:

  - Privacy
  - Constant churning on tokens
  - Latency
  - Availability
Local AI is "cheaper" when you already have the hardware sitting around, like an old MacBook or gaming GPU, or the API cost (subscriptions will all run out if you churn 24/7) is too high to bare. I'm surprised companies are still selling their old MacBooks to employees, when they could be turning them into Beowulf clusters for cheap AI compute on long-running jobs (the cost is just electricity)

If usage-based pricing is killing your vibe, find a cheaper subscription with higher limits. Here's a list of them compared on price-per-request-limit: https://codeberg.org/mutablecc/calculate-ai-cost/src/branch/...

xscottabout 3 hours ago
I think you're right about the cost/benefit trade-off in general, but I do wonder how much of the "compaction" Codex and Claude do is to keep the context fresh and how much is to save (them) runtime costs.

If you've got a 1M token context, but they constantly summarize it down to something much smaller, is it really 1M tokens of benefit? With a local model, you can use all 256k tokens on your own terms. However, I don't have any benchmarks to know.

ls612about 1 hour ago
I recently set up a Gemma 4 heretic fine-tune on my MacBook, more to prove that I could than anything else, and it is probably around 4o level of performance imo. Not fit for any real work. That said, the fact that 4o was frontier two years ago and today I can equal it on local hardware, uncensored, is pretty impressive.
otabdeveloper4about 1 hour ago
> 95% of people should be paying for a subscription.

Subscription plans are the "first hit is free" plans. Real pricing once subscriptions are phased out in a year or two is gonna be orders of magnitude more.

2ndorderthoughtabout 3 hours ago
Why are you running 2 instances anyways? If you want that workflow just rent a few ec2 gpu instances and fire away?
vidarhabout 3 hours ago
If you're going to rent a few ec2 gpu instances you might as well funnel things through openrouter. Not that many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not.

As for why, why would you not? Sitting around waiting for a single assistant is an inefficient use of time; I tend to have more like 4-10 instances running in parallel.
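The funneling part is trivial, since OpenRouter speaks the usual OpenAI-style chat API. A sketch, assuming you have an OPENROUTER_API_KEY set; the model id is only an example:

  # Point an OpenAI-style request at OpenRouter instead of a local or EC2 box.
  import os
  import requests

  r = requests.post(
      "https://openrouter.ai/api/v1/chat/completions",
      headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
      json={
          "model": "moonshotai/kimi-k2",  # example id, pick whichever provider/model you trust
          "messages": [{"role": "user", "content": "Review this diff for obvious bugs: ..."}],
      },
      timeout=300,
  )
  print(r.json()["choices"][0]["message"]["content"])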

2ndorderthoughtabout 2 hours ago
I absolutely see no reason to send company IP, future plans, and current code base to any other company.

I also do not run 10 agents at the same time. There's no way I could keep up with the volume of work from doing that in any meaningful way.

jen20about 3 hours ago
> Not that many of us have workflows where trusting an LLM provider is a problem but sending the data to EC2 is not.

I'd imagine plenty of people have a problem with trusting fly-by-night inference providers or model owners with opt-out policies [1] [2] about training on your data, who would be more than happy to send data to EC2, or even the same models in Amazon Bedrock.

[1]: https://github.blog/news-insights/company-news/updates-to-gi...

[2]: https://help.openai.com/en/articles/5722486-how-your-data-is...

roscas28 minutes ago
Local AI does not mean privacy or offline. Claude Code does not run offline. It needs an internet connection.

"./claude-2.1.126-linux-x64

Welcome to Claude Code v2.1.126

Unable to connect to Anthropic services

Failed to connect to api.anthropic.com: ECONNREFUSED

Please check your internet connection and network settings.

Note: Claude Code might not be available in your country. Check supported countries at https://anthropic.com/supported-countries"

Let me also add that most of the services that are supposed to be private will still connect to the internet. LMStudio and many others will try to get a connection, and so do all the rest. I don't remember a single one that does not connect to their servers and send some kind of information.

janice1999about 3 hours ago
A 24GB Nvidia RTX 3090 TI is ~2000 euro.
2ndorderthoughtabout 3 hours ago
Which is how many months of Claude or Claude + chatgpt when Claude is down? And do you own anything after using those subscriptions? Can you pick and choose from dozens of models and whatever comes next? Can you play video games with your Claude subscription?
beej71about 3 hours ago
Believe me when I say that I want to run local models, and I do. But in my testing, 24 GB doesn't get you much brainpower.
2ndorderthoughtabout 2 hours ago
Have you tried the latest qwen3.6 models?

For most of my questions an 8-9B model works great. The upshot is not having chatgpt/meta sell my data or target me with random thoughts later.

efficaxabout 2 hours ago
qwen3.6 does a good job locally, except it can take 20-30 minutes to respond to a prompt on a Mac Studio with 32GB of RAM.
smcleod44 minutes ago
Apple Silicon before the M4 does not have matmul instructions, which makes prompt processing very slow. It's quite different on the M5, much like using an Nvidia GPU.
2ndorderthoughtabout 2 hours ago
Yea you probably do want to use a GPU for models of that size.

I also wonder what quantization you are using? If you haven't tried other quants, I really would.
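For reference, trying a different quant looks roughly like this with llama-cpp-python (a sketch; the file name and settings are illustrative, not a recommendation):

  # Load a specific GGUF quant and offload layers to the GPU.
  from llama_cpp import Llama

  llm = Llama(
      model_path="models/qwen-coder-Q4_K_M.gguf",  # hypothetical file, swap in the quant you want to compare
      n_gpu_layers=-1,  # offload every layer the GPU can hold
      n_ctx=8192,       # context window; larger costs more memory
  )
  out = llm.create_chat_completion(
      messages=[{"role": "user", "content": "Explain this stack trace: ..."}]
  )
  print(out["choices"][0]["message"]["content"])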

efficaxabout 2 hours ago
This is qwen3.6:27b-coding-nvfp4. It's only an M1. If they ever ship an M5 Studio with 96GB of RAM, that's my next upgrade path for the local LLM experiments.

You can get work done with them if you have a harness that can drive outcomes without needing feedback (I've been building a TDD red-to-green agent harness lately that is very effective if given a good plan upfront). So if you can stand waiting a few days to see results that would only take hours with a model deployed on frontier Nvidia hardware, you can get results this way.
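To give a flavor of what I mean by a harness, here is a bare-bones sketch, not my actual setup: it assumes an OpenAI-compatible local endpoint on localhost:8080, a pytest suite, and a single hypothetical module under test.

  # Red-to-green loop: run the tests, hand failures plus source to the model,
  # write back its suggested module, repeat until green or out of attempts.
  import subprocess
  import requests

  URL = "http://localhost:8080/v1/chat/completions"  # adjust to your local server
  MODULE = "target_module.py"                        # hypothetical module under test

  def run_tests():
      p = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
      return p.returncode == 0, p.stdout + p.stderr

  def suggest_fix(failure_log, source):
      prompt = ("These tests fail:\n" + failure_log +
                "\n\nHere is the module under test:\n" + source +
                "\n\nReturn only the corrected module, nothing else.")
      r = requests.post(URL, json={"model": "local",
                                   "messages": [{"role": "user", "content": prompt}]},
                        timeout=600)
      return r.json()["choices"][0]["message"]["content"]

  for attempt in range(5):  # give up after a few tries
      green, log = run_tests()
      if green:
          print(f"green after {attempt} patches")
          break
      with open(MODULE) as f:
          source = f.read()
      with open(MODULE, "w") as f:
          f.write(suggest_fix(log, source))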

datadrivenangelabout 2 hours ago
The time delay is the real issue. Much, much slower wall-clock time.