
Discussion (13 Comments)
That's why things like AutoRound, GPTQ, and AWQ have been so popular: you don't even need enough hardware to run the original model on a GPU; a CPU is enough, thanks to their data efficiency.
https://github.com/intel/auto-round/blob/main/docs/gguf_alg_...
[1] https://medium.com/@paul.ilvez/demystifying-llm-quantization...
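For readers following along: AutoRound, GPTQ, and AWQ all improve on the plain round-to-nearest (RTN) baseline mentioned further down the thread. A minimal RTN sketch (bit width, group size, and function names here are illustrative, not any library's API) shows why quantization itself is so cheap that a CPU suffices: it's elementwise arithmetic over the weights, with no gradients, and RTN itself needs no data at all.

```python
# Illustrative group-wise round-to-nearest (RTN) quantization.
# Hypothetical names and defaults; not any particular library's API.
import numpy as np

def rtn_quantize(w: np.ndarray, bits: int = 4, group_size: int = 128):
    """Asymmetric per-group RTN: map each group of weights to integers
    in [0, 2^bits - 1] with its own scale and zero point."""
    qmax = (1 << bits) - 1
    w = w.reshape(-1, group_size)                 # one row per group
    lo = w.min(axis=1, keepdims=True)
    hi = w.max(axis=1, keepdims=True)
    scale = (hi - lo) / qmax                      # quantization step per group
    scale[scale == 0] = 1.0                       # guard constant groups
    q = np.clip(np.round((w - lo) / scale), 0, qmax)
    return q.astype(np.uint8), scale, lo

def rtn_dequantize(q, scale, lo):
    return q * scale + lo                         # reconstruct fp weights

# No backprop, no forward passes: just rounding -- CPU-friendly by design.
w = np.random.randn(1024, 1024).astype(np.float32)
q, s, z = rtn_quantize(w.ravel())
w_hat = rtn_dequantize(q, s, z).reshape(w.shape)
err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative reconstruction error at 4 bits: {err:.4f}")
```

The calibration-based methods add a small amount of data on top of this baseline to pick better scales or roundings, which is still cheap enough to run on CPU.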
That's a tall claim. By that measure, even NVIDIA's QAD, which AFAIK is currently SOTA for 4-bit quantization (albeit NVFP4 rather than INT4), would be worse than Q4_K_M RTN quantization. :D
https://arxiv.org/abs/2601.20088
I call BS on that. Not even FP8 is 99.8% in every scenario. It's close, but not quite bit-exact, and to say you reach 99% with Q4 is a stretch. Maybe if all you test are really old benchmark questions that are in every training set out there, but go a bit OOD and you'll see your Q4 crumble. Try coding in a niche language, or long-context math (not 1+2 from the MATH benchmark) that isn't in the AIME sets, and you'll see a few percentage points of accuracy loss for each quant step.
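The "loss per quant step" intuition has a classical signal-processing analogue: uniform quantization costs roughly 6 dB of signal-to-quantization-noise ratio per bit removed. A toy measurement on synthetic Gaussian weights (no actual model involved, so this is an intuition pump, not a benchmark):

```python
# Each bit removed costs ~6 dB of SNR under uniform quantization.
# Synthetic weights only; illustrative, not an LLM evaluation.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000_000).astype(np.float32)

for bits in (8, 6, 4, 3):
    qmax = (1 << bits) - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / qmax
    w_hat = np.round((w - lo) / scale) * scale + lo   # quantize + dequantize
    noise = w - w_hat
    snr_db = 10 * np.log10(np.mean(w**2) / np.mean(noise**2))
    print(f"{bits}-bit: SNR = {snr_db:5.1f} dB")
```

How that per-weight noise translates into task accuracy depends on the model and the task, which is exactly why out-of-distribution tests expose more damage than memorized benchmarks.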
If so, that's a pretty drastic trade-off.
If we were in any other subfield, doing this would be considered cheating and would get your paper rejected, but the quantization community really loves to spread FUD claiming that quantization doesn't harm models.
Also, a similar dynamic plays out with dense vs. sparse MoE models. There's a reason we keep getting dense model releases alongside the MoEs out of China.
Quantization is not free, causes significant brain damage (especially on very long contexts), and has enough academic misconduct within it that it's actively screwing up the market. Don't believe me? Go ask your local financial analyst about the market's reaction to TurboQuant, and then try to square that circle with this: https://openreview.net/forum?id=tO3ASKZlok (extreme and credible allegations of academic misconduct/fraud).
P.S. On dense vs. MoE: both get released because they offer different trade-offs. At the same level of quality, an MoE will use less compute but more memory.
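A quick back-of-the-envelope makes that trade-off concrete. The sketch below counts FFN parameters only (attention omitted), and the dimensions, layer count, and expert counts are hypothetical round numbers:

```python
# Back-of-the-envelope for the dense vs. MoE trade-off.
# FFN parameters only; all numbers are hypothetical.
def moe_params(n_layers, d_model, d_ff, n_experts, top_k):
    expert = 2 * d_model * d_ff            # up + down projection per expert
    total = n_layers * n_experts * expert  # weights resident in memory
    active = n_layers * top_k * expert     # weights touched per token
    return total, active

total, active = moe_params(n_layers=32, d_model=4096, d_ff=14336,
                           n_experts=8, top_k=2)
print(f"memory footprint:  {total/1e9:.1f}B params")   # all experts held
print(f"compute per token: {active/1e9:.1f}B params")  # only routed experts
```

With these numbers the MoE holds ~30B parameters in memory but only runs ~7.5B per token, which is the "less compute, more memory" trade versus a dense model of comparable quality.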
A) The QLJ thing they added is useless, and the code they released didn't even include it, since it makes the results worse.
B) The blog post about TurboQuant was AI-generated and claimed that TurboQuant used a polar transformation when it doesn't, so for the first two weeks people thought TurboQuant involved a transformation to polar coordinates. The blog post was probably wrong because Google kept trying to put their useless PolarQuant paper on the map by talking about it repeatedly.
C) Since they don't use QLJ, they wholesale copied the quantization technique from the HIGGS paper without citing it (https://arxiv.org/pdf/2411.17525).
D) The whole RaBitQ thing (https://openreview.net/forum?id=tO3ASKZlok), and Google's incompetent and tone-deaf response (https://openreview.net/forum?id=tO3ASKZlok&noteId=X882cbyNNM) after asking RaBitQ's authors for help, then shitting on them in their paper and ignoring their emails when they objected.