

Discussion (12 Comments)

sponaugle · about 2 hours ago
"But there’s another challenge: local LLMs. It’s already possible to run LLMs on local hardware, and that’s only going to get easier in the future. Apple’s M-series chips are extremely good at doing this today. Open weight (read: free) models are widely available and good enough that most people probably couldn’t tell the difference. They also have the benefits of running on hardware that’s sipping power most of the time, rather than slurping it down in massive data centres."

This is such an odd and illogical conclusion. If a smaller model can be sufficient (which is not something I would have said), that smaller model can be run in a datacenter. The idea that a small model running at home is 'sipping' while that same small model in a datacenter is 'slurping' is absurd. The datacenter will have much greater overall efficiency in both power usage and total cost to implement. Of course, if you compare a small home model to a DC frontier model the power usage is different, but so is the output.

znnajdla · about 2 hours ago
I’m beginning to challenge the assumption that datacenters are more efficient. I can get the same computing power out of a single Mac Mini 32 GB that I get from an AWS virtual machine that costs hundreds of dollars per month. Even compared to cheap baremetal providers like Hetzner, the Mac Mini pays for itself in a few months of cloud costs. How exactly are datacenters more efficient? I don’t see it in the price. It may be that the costs of centralizing large amounts of compute actually make it more expensive, not less, once you account for profit margins and the fact that base infrastructure (power, internet) is a given in every home anyway.

There are huge hidden costs in datacenter prices that are simply unnecessary for most casual users of compute. Salaries of staff to maintain datacenters, redundancy and high availability of nine 9s that are simply not required by most customers, as well as real estate costs are all non-existent costs in a homelab setup because those are living costs you pay for anyway, with or without a home server.
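The payback claim above can be sanity-checked with a simple break-even calculation. The figures below are placeholders chosen for illustration, not actual Apple, AWS, or utility prices:

```python
# Illustrative break-even sketch: a one-time hardware purchase vs. a
# recurring cloud VM bill. All numbers here are assumptions, not quotes.

def breakeven_months(hardware_cost: float, monthly_cloud_cost: float,
                     monthly_power_cost: float = 0.0) -> float:
    """Months until buying the hardware beats renting the VM."""
    saving_per_month = monthly_cloud_cost - monthly_power_cost
    if saving_per_month <= 0:
        raise ValueError("cloud must cost more per month than local power")
    return hardware_cost / saving_per_month

# e.g. a ~$999 Mac Mini vs. a ~$250/month VM, with ~$5/month electricity
months = breakeven_months(999, 250, 5)
print(f"pays for itself in about {months:.1f} months")
```

With those assumed prices the machine pays for itself in a few months, which matches the commenter's claim; the point of the sketch is only that the result is dominated by the cloud bill, not the electricity.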

gruez · about 1 hour ago
>I can get the same computing power out of a single Mac Mini 32 GB that I get from an AWS virtual machine that costs hundreds of dollars per month.

This quickly breaks down when you're talking about large models that need terabytes of memory to run[1]. There's no way you're going to be able to amortize that for a single person.

[1] https://apxml.com/models/glm-51

GavinAnderegg · about 2 hours ago
Author here. The reason I wrote that local hardware is "sipping power most of the time" is because most of the time it's not doing LLM-related work. If you're just using your local machine (or eventually maybe even your phone) to do local LLM tasks, you're not doing that all day.

I agree that data centres will be set up to be more efficient, but we're also going to need fewer of them if local LLMs take off. If that's true, overbuilding data centres is more revenue pressure for AI companies.

benlivengood · about 1 hour ago
Electricity is more expensive at home than where data centers are built, batch inference is more efficient at GPU/TPU inference per watt, power supplies in data centers are more efficient than in average consumer devices, entire racks can be fully powered off when not in use vs. standby power consumption, and of course the investment in hardware is amortized across many users in data centers. It allows more people to have access to larger models than everyone buying an M3 Ultra.

The economy of scale that data centers have is actually a good thing economically and environmentally for many kinds of demand.
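The amortization point can be made concrete with a back-of-the-envelope energy-per-token comparison. Every number below is an illustrative assumption (wattages and throughputs vary enormously by hardware, model, and batch size), not a measurement:

```python
# Rough sketch of the batch-inference amortization argument:
# a busy datacenter GPU draws far more power than a home machine,
# but serves so many requests at once that its energy *per token*
# can come out lower. All figures are illustrative assumptions.

def joules_per_token(watts: float, tokens_per_second: float) -> float:
    return watts / tokens_per_second

# A home machine serving one user at batch size 1...
home = joules_per_token(watts=60, tokens_per_second=20)
# ...vs. a datacenter accelerator batching many users' requests.
dc = joules_per_token(watts=700, tokens_per_second=2000)

print(f"home:       {home:.3f} J/token")
print(f"datacenter: {dc:.3f} J/token")
```

Under these assumed figures the datacenter comes out nearly an order of magnitude more efficient per token, despite the much higher absolute draw; the home machine's advantage, as the author notes elsewhere in the thread, is that it idles near zero most of the day.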

I think that the most capable models will continue to be in high demand across the market until at least "a datacenter of PhDs" level of capability. At that point I can see a transition to more local model use if affordable consumer hardware is available (for the median human on Earth). If that turns out to be true then the hyperscaling will plateau at the level allowing sustained commercial/industrial "PhD"-level demand which we aren't at yet (all providers are still struggling to meet current demands).

Almured · about 2 hours ago
Fully agree with you. Smaller models are great for some tasks, but the security concern around prompt injection etc. is what really makes it for me. They're great for running offline tasks, but whenever I'm interacting outside the local network I still run Claude or ChatGPT, depending on the task.

gpapilion · about 1 hour ago
So recently I moved from an Anthropic model to a Qwen 3.5 model running on my Mac to summarize ticket activity over 7 days. I used to do this manually with a colleague, and it would take us a couple of hours to go through. Opus took 58 seconds and Qwen took 2.5 minutes. The quality of the Qwen output was comparable, but there was a 2.5x difference in time.

All that said, I actually don’t think that matters much. I think we are dragging attention-economy concepts into AI responses, and it doesn’t matter. Both options saved me hours per week, and the difference between 1 and 3 minutes may not be worth the additional cost.

Also, there are times when the model output is much better with Anthropic, but it’s not all the time. I think it becomes a question of whether we should be using the best model for all questions.

busfahrer · 27 minutes ago
Out of curiosity, what size Qwen did you use, at what quantization?
gpapilion · 20 minutes ago
27B, fp4.
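For context on that answer, here is a rough sketch of the weight memory a 27B-parameter model needs at 4-bit quantization versus fp16. KV cache and activation overheads are ignored, and the figures are approximate decimal GB:

```python
# Approximate weight-only memory footprint of a quantized model.
# Ignores KV cache, activations, and runtime overhead.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

print(f"27B at 4-bit: {weight_memory_gb(27, 4):.1f} GB")
print(f"27B at fp16:  {weight_memory_gb(27, 16):.1f} GB")
```

At roughly 13.5 GB of weights, a 4-bit 27B model fits comfortably in the 32 GB Mac Mini mentioned earlier in the thread, while the fp16 version would not.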
Almured · about 3 hours ago
I have been talking about this with a colleague this morning. The $20 option is just a trial version; I could not do any real work with it.

And I wonder whether the subscription model is just a way to create demand for the API. For example, I’m building this portal with the support of an LLM for coding, but then I will need an LLM accessed via API token to run the platform, giving them additional revenue; a demand that did not exist without the coding I did with the subscription.

awedisee · about 3 hours ago
I get the article and the take and I don't think you are wrong, but I would like you to further your thinking and come up with some improvements or fixes.

I get that you may not work in this industry or know the workings of how an AI company seeking frontier AGI would operate, but it's helpful in connecting ideas and concepts to add a proposed solution, if for nothing more than to show the direction of your thinking.

Sure, some people may talk smack about your idea, but I've learned that someone who complains for the sake of complaining and someone who complains to fix things have different forms of thinking. The latter may be wrong, but it's an indicator of HOW that person thinks, which is always valuable.

Thanks for the blog.

GavinAnderegg · about 2 hours ago
Author here. I'm not complaining or trying to talk smack. I'm just pointing out something that seems to be coming: LLM price increases or the products being degraded to make more revenue. If I knew how to solve that, I'd be insanely wealthy.