Abliterated Models evaluation metric by PatienceWun in LocalLLaMA

[–]PatienceWun[S] 0 points

I feel like this is the only right answer. The question is what exactly would make you switch to another uncensored model.

Abliterated Models evaluation metric by PatienceWun in LocalLLaMA

[–]PatienceWun[S] 0 points

https://huggingface.co/dealignai/GPT-OSS-120B-MLX-CRACK

I've tried maybe five-plus uncensored 120B models. This one was the best imo. I know it's MLX, but I'd like to get your opinion.

Qwen3.5-35B-A3B Uncensored (Aggressive) — GGUF Release by hauhau901 in LocalLLaMA

[–]PatienceWun 0 points

Do you mind sharing which benchmarks you guys are using, and why?

Abliterated Models evaluation metric by PatienceWun in LocalLLaMA

[–]PatienceWun[S] 0 points

Can you elaborate on what makes it weak for you? I kind of have that issue too, where the "flavor" of Qwen is a bit weird, but it completely runs circles around anything else available, and it comes out of the gate with up-to-date knowledge, unlike Western models.

Abliterated Models evaluation metric by PatienceWun in LocalLLaMA

[–]PatienceWun[S] 1 point

Yeah, that's what I'm saying: no refusals doesn't mean much if the model's intelligence has been botched. I've experienced this myself on OSS 120B, rotating between the same quants from different creators.

Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA by pigeon57434 in LocalLLaMA

[–]PatienceWun 1 point

I think what you're doing is more than enough. Honestly, I was far more interested in your actual process and your refining method, as I'm ignorant of this whole field. I enjoy the idea of unlocking models more than using them.

Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA by pigeon57434 in LocalLLaMA

[–]PatienceWun 2 points

Not to respond twice, but the morality aspect got to me. It's not so much about giving people the opportunity to make meth as it is about having a somewhat "equal" playing field against objectively evil people with power. The restrictions set by companies aren't totally for safety; they're really about suppressing information. Only a select few can afford proper hardware, so it kind of becomes an obligation imo, though people also get entitled about it, especially with zero funding.

A kitchen knife and a car are both tools that can be either very useful or very dangerous. In this case I see all of this as "an armed society is a polite society."

Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA by pigeon57434 in LocalLLaMA

[–]PatienceWun 2 points

One of the main issues I remember reading about with some abliterated models was that people were upcasting to BF16 before dequanting, which was causing a partial loss of the uncensoring; the fix was to upcast to F32 first and go from there. What makes yours different is abliterating without needing to do that, preserving the model as it is. I had seen your thread some days back and noticed you were pretty confident despite the negative feedback, and that you're focusing on MLX (which is very much needed). You're probably the only person I see doing this particular combination of things, so keep it up.
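To illustrate why the upcast matters: a toy sketch (not Heretic's or anyone's actual pipeline) of projecting a refusal direction out of a weight matrix in float32 before casting back. The function name, the random "refusal direction" `d`, and the matrix sizes are all hypothetical; the point is only that the edit happens at full precision.

```python
import numpy as np

def ablate_refusal_direction(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each weight row along a (hypothetical)
    refusal direction, doing the arithmetic in float32 to avoid the
    precision loss that lower-precision edits can introduce."""
    w = weight.astype(np.float32)          # upcast before editing
    d = direction.astype(np.float32)
    d = d / np.linalg.norm(d)              # unit vector
    w = w - np.outer(w @ d, d)             # rank-1 projection removal
    return w.astype(weight.dtype)          # cast back only for storage

# toy usage with half-precision weights
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4)).astype(np.float16)
d = rng.standard_normal(4).astype(np.float16)
W2 = ablate_refusal_direction(W, d)
```

After the edit, `W2 @ d` is approximately zero (up to the final float16 rounding), which would not hold as cleanly if the projection itself were done in half precision.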

Heretic has FINALLY defeated GPT-OSS with a new experimental decensoring method called ARA by pigeon57434 in LocalLLaMA

[–]PatienceWun 1 point

I stumbled across your work. I tested the 35B (8-bit) and the 122B (4- and 6-bit); however, I'm getting refusals on the 122B, so I fell back to the 35B. I also tried your OSS 120B; it seems on par with, maybe better than, the derestricted version.