Discussion (9 Comments)

spoaceman7777 · 37 minutes ago
I'm somewhat confused as to why this is on the front page. It doesn't go into any real detail, and the advice it gives is... not good. You should definitely not be quantizing your own GGUFs using an old method like that HF script. There are lots of ways to run LLMs via podman (some even officially recommended by the project!). The chip has been out for almost a year now, and its most notable (and relevant-to-AI) feature is not mentioned in this article: it's the only x86_64 chip below workstation/server grade that has quad-channel RAM, and inference is generally RAM-constrained. I'm also quite puzzled about the bit about running PyTorch via uv.

Anyway, I wouldn't recommend following the steps posted there. Poke around Google, or ask your friendly neighborhood LLM for advice on how to set up your Strix Halo laptop/desktop for the tasks described. A good resource to start with would probably be the Unsloth page for whichever model you are trying to run. (There are a few quantization groups competing for top place with GGUFs, and Unsloth is regularly at the top, with incredible documentation on inference, training, etc.)
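
A concrete starting point might look like this (a sketch, not from the comment: the repo and quant tag are illustrative, and the `-hf` download shorthand assumes a reasonably recent llama.cpp build):

```shell
# Pull a pre-made Unsloth quant straight from Hugging Face and serve it locally.
# Repo and quant tag are placeholders -- pick them from the model's Unsloth page.
# -ngl 99 offloads all layers to the GPU; -c sets the context window.
llama-server -hf unsloth/Qwen3-30B-A3B-GGUF:Q6_K_XL -ngl 99 -c 32768
```

This sidesteps DIY quantization entirely: you download a quant that was already tuned and tested by a group that specializes in it.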

Anyway, sorry to be harsh. I understand that this is just a blog for jotting down stuff you're doing, which is a great thing to do. I'm mostly just commenting on the fact that this is on the front page of hn for some reason.

roenxi · about 1 hour ago
I thought the point of something like Strix Halo was to avoid ROCm altogether? AMD's strategy seems to have been to unify GPU/CPU memory and then let people write their own libraries.

The industry looks like it's started to move towards Vulkan. If AMD cards have figured out how to reliably run compute shaders without locking up (never a given in my experience, but that was some time ago), then there shouldn't be a reason to use specialty APIs or software written by AMD outside of drivers.

ROCm was always a bit problematic, but the issue was: if AMD cards weren't good enough for AMD engineers to reliably support tensor multiplication, then there was no way anyone else was going to be able to do it. It isn't like anyone is confused about multiplying matrices together; the naive algorithm is a core undergrad topic, and the advanced algorithms surely aren't that crazy to implement. It was never a library problem.

anko · about 1 hour ago
I would be interested to know what speeds you can get from gemma4 26b + 31b on this machine. Also, how does ROCm compare to Triton?

everlier · about 2 hours ago
Owning the GGUF conversion step is good in some circumstances, but running in fp16 is suboptimal for this hardware due to its low-ish bandwidth.

It looks like context is set to 32k, which is the bare minimum needed for OpenCode with its ~10k initial system prompt. So overall, something like Unsloth's UD Q8 XL or Q6 XL quants would free up a lot of memory and bandwidth, moving this into the next tier of usefulness.
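
The memory math behind that suggestion fits in a couple of lines (editor's sketch; the ~30B parameter count is an assumption, and real GGUF files carry some per-tensor overhead on top):

```shell
# Rough weight-memory footprints for a hypothetical 30B-parameter model.
PARAMS_B=30
FP16_GB=$(( PARAMS_B * 2 ))        # fp16: ~2.0 bytes per parameter
Q8_GB=$(( PARAMS_B * 1 ))          # q8:   ~1.0 byte per parameter
Q6_GB=$(( PARAMS_B * 85 / 100 ))   # q6:   ~0.85 bytes per parameter
echo "fp16=${FP16_GB}GB q8=${Q8_GB}GB q6=${Q6_GB}GB"
```

Since generation on this class of hardware is mostly bound by how fast the weights can be streamed from RAM, halving the footprint roughly doubles tokens per second, on top of the memory it frees for context.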

IamTC · about 1 hour ago
Nice. Thanks for the writeup. My Strix Halo machine is arriving next week. This is handy and helpful.

timmy777 · about 2 hours ago
Thanks for sharing. However, this missed being a good writeup due to its lack of numbers and data.

I'll give a specific example in my feedback. You said:

``` so far, so good, I was able to play with PyTorch and run Qwen3.6 on llama.cpp with a large context window ```

But there are no numbers, results, or output pastes; no performance figures or timings.

Anyone with enough RAM can run these models; it will just be impracticably slow. The Strix Halo is about getting decent performance, so your sharing numbers would be valuable here.

Do you mind sharing these? Thanks!
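
For anyone wanting to produce those numbers, llama.cpp ships a small benchmarking tool (a sketch; the model path is a placeholder):

```shell
# llama-bench measures prompt processing (pp) and token generation (tg)
# speeds and prints a table of tokens/second per configuration.
# -p 512: benchmark a 512-token prompt; -n 128: benchmark generating 128 tokens.
llama-bench -m ./model.gguf -p 512 -n 128
```

Its table output is the standard format people paste into threads like this one, which makes results easy to compare across machines.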

gessha · about 2 hours ago
This is more of a “succeeding to get anywhere close to messing around” rather than “it works so now I can run some benchmarks” type of article.

l33tfr4gg3r · about 2 hours ago
To give the benefit of the doubt, the author does state multiple times (including in the title) that these were "first impressions". Perhaps they should have mentioned something like "...in the next post, we'll explore performance and numbers" to avoid a cliffhanger, or done a part 1 (assuming the intention was to follow up with a part 2).

JSR_FDED · about 2 hours ago
Perfect. No fluff, just the minimum needed to get things working.