Back to News
Advertisement
Advertisement

⚡ Community Insights

Discussion Sentiment

33% Positive

Analyzed from 328 words in the discussion.

Trending Topics

#data#abliteration#heretic#alibaba#should#hacker#models#prevent#https#still

Discussion (14 Comments)Read Original on HackerNews

akerstenabout 3 hours ago
2024 which is ancient history. This is not true anymore, the models now are trained to prevent abliteration by spreading out the refusal encoding

See https://arxiv.org/abs/2505.19056

0xkvyb21 minutes ago
Still crazy how easy it is to "jailbreak" even SOTA LLMs with a simple assistantResponse replacement in chat thread.
Der_Einzigeabout 2 hours ago
That doesn't stop/prevent abliteration. The creator of XTC/DRY is also a chad who makes sure that you really can access the full model capabilities. Censorship is the devil.

https://github.com/p-e-w/heretic

adrian_babout 1 hour ago
It is an arms race.

For some of the latest models the previous abliteration techniques, e.g. the heretic tool, have stopped working (at least this was the status a few weeks ago).

Of course, eventually someone might succeed to find methods that also work with those.

Der_Einzige38 minutes ago
Proof?
RRRAabout 2 hours ago
It was pretty funny to see Qwen 3.6 (heretic) tell me about how many death the Chinese government thought happened at Tiananmen Sq. on April 15th 1989.

Makes you wonder where that data was taken from, or if their great firewall is broken, or even if Alibaba engineers have special access...

arcfourabout 2 hours ago
I don't think it's unreasonable to imagine that Alibaba is allowed to scrape the wider internet, or that some research institution is and then Alibaba got data from them.

What is perhaps more surprising is that the data was not scrubbed before training, but maybe they thought that would be too on-the-nose for the rest of the world and would hamper their popularity if they were too obviously biased.

SoKamilabout 2 hours ago
No wonder this data is in LibGen.
akerstenabout 1 hour ago
Agreed on all fronts, I should have been more precise that this particular vector was mitigated
beaker52about 1 hour ago
I have had LLMs refuse several of my requests. I still got my answers, but at least they tried.
NewsaHackO28 minutes ago
Yea, I was asking a SOTM about copy.fail, and it was freaking out, and tried to indirectly call me a hacker a few times. Weirdly, all I did was slightly reword requests, and they all went through. Granted, I am not actually a hacker, so I guess my follow-up questions made it realize that I am asking for educational purposes, but it was definitely the most accusatory, curt, and outright abrasive I have seen an LLM behave.
whynotmaybe16 minutes ago
I've been able to have deepseek give me an unofficial account of what happened on Tiananmen square in 1989.

It even went as far as confirming that we should always base our opinion on multiple sources, not just the government.

We should create badges like "script kiddie", "llm hacker", "grandpa's printer adjuster"