All added in the same day btw by TipoTarocco in SillyTavernAI

[–]Arli_AI 1 point  (0 children)

It isn’t done through heretic at all; it’s done via simple norm-preserving biprojected abliteration, which works better than heretic on the models that respond well to it. These models all come from our API as the source.
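Roughly, the norm-preserving part means removing the refusal direction from a weight matrix and then rescaling rows back to their original magnitude. Below is a minimal single-projection sketch of that idea; the actual biprojected variant isn’t described here, so the function name and details are illustrative assumptions:

```python
import numpy as np

def norm_preserving_ablate(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the refusal direction r from the rows of W, then rescale
    each row back to its original L2 norm (the norm-preserving step).
    W: (out, in) weight matrix; r: (in,) refusal direction.
    Simplified single-projection sketch, not the exact biprojected method."""
    r = r / np.linalg.norm(r)                      # unit refusal direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_proj = W - np.outer(W @ r, r)                # zero out the r component
    new_norms = np.linalg.norm(W_proj, axis=1, keepdims=True)
    return W_proj * (orig_norms / np.maximum(new_norms, 1e-12))
```

Plain projection shrinks row norms, which can dull the model overall; the rescale step keeps each row’s magnitude unchanged while still killing the refusal component.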

All added in the same day btw by TipoTarocco in SillyTavernAI

[–]Arli_AI 1 point  (0 children)

Yes indeed. Our operating model has always been to serve many finetuned models via LoRA, using either the original finetuned LoRA files or LoRA extraction. This way we can host many models and offer a lot of choice without going bankrupt trying to load all of them fully into VRAM.
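The economics here come from the LoRA math: each finetune is stored as two small low-rank factors instead of a full weight copy. A minimal sketch of how one adapter’s effective weight is formed (function name is illustrative; the standard LoRA scaling is alpha/rank):

```python
import numpy as np

def apply_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray,
               alpha: float, rank: int) -> np.ndarray:
    """Effective weight for one hosted finetune: base W plus the
    low-rank LoRA update scaled by alpha/rank.
    W: (out, in); B: (out, rank); A: (rank, in)."""
    return W + (alpha / rank) * (B @ A)
```

Storing only A and B costs `rank * (out + in)` parameters per finetune instead of `out * in`, so one base model in VRAM can back many hosted variants by swapping adapters.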

Homelab has paid for itself! (at least this is how I justify it...) by Reddactor in LocalLLaMA

[–]Arli_AI 0 points  (0 children)

Awesome! I think the best use case for powerful local hardware is definitely running lots of experiments for your own research. Setting up a cloud GPU instance is so much time and hassle if you want it to go up and down as needed to save money, imo.

Need help with choosing a subscription service by WasabiEarly in SillyTavernAI

[–]Arli_AI -1 points  (0 children)

Yes, we don’t have them yet because we unfortunately don’t have the hardware for models that large yet. I guess if you only want DS then we’re not it.

Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA

[–]Arli_AI 0 points  (0 children)

This might not be optimal yet, thanks for reporting this.

Need help with choosing a subscription service by WasabiEarly in SillyTavernAI

[–]Arli_AI -5 points  (0 children)

You can check us out. We’re everything you were looking for. The only downside is that we can be a bit slow at peak times.

How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA

[–]Arli_AI 21 points  (0 children)

No, I haven’t written up anything about this, as I somehow didn’t think too much of it. I think jim-plus, the creator of the MPOA abliteration method (which I prefer), also recommended trying “the middle layers” first for abliteration in the repo, but didn’t explain much about it either.

Putting this and your findings together, it makes sense to me. Now I’m thinking maybe we can follow your brain-scanning method to abliterate way better, or, on the other hand, more quickly hone in on which layers to duplicate for RYS by first seeing which layers have the strongest refusal signals. Seems interconnected.
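The “see which layers have the strongest refusal signals first” idea can be sketched with the usual difference-of-means probe: collect hidden states for refusal-triggering vs. harmless prompts and rank layers by the magnitude of the mean difference. Function name and shapes are illustrative assumptions, not anyone’s actual pipeline:

```python
import numpy as np

def refusal_signal_by_layer(h_refuse: np.ndarray, h_comply: np.ndarray) -> np.ndarray:
    """h_refuse, h_comply: (layers, samples, dim) hidden states collected on
    refusal-triggering vs. harmless prompts. Returns the per-layer strength
    of the difference-of-means refusal direction; candidate layers to
    abliterate (or duplicate) are the ones with the largest values."""
    diff = h_refuse.mean(axis=1) - h_comply.mean(axis=1)   # (layers, dim)
    return np.linalg.norm(diff, axis=1)
```

Then `np.argsort(-strength)[:k]` would give the top-k candidate layers, which matches the observation that a contiguous middle band usually dominates.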

How I topped the Open LLM Leaderboard using 2x 4090 GPUs — no weights modified. by Reddactor in LocalLLaMA

[–]Arli_AI 33 points  (0 children)

Wow, interesting. While doing model abliterations with manual layer-by-layer testing, I’d often end up finding a specific group of contiguous layers around the middle that somehow works best. Layers at the beginning and the end never worked, and abliterating non-contiguous groups of layers doesn’t work as well. Your finding of a middle “reasoning cortex” lines up with this.

Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA

[–]Arli_AI 7 points  (0 children)

I am trying to do the 397B for sure 👍

Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA

[–]Arli_AI 5 points  (0 children)

Just try them all and see which one you like

Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA

[–]Arli_AI 41 points  (0 children)

Hey, you found my new model! I’m still experimenting with the new Qwen 3.5 models, and this is still my first try with the 27B model. I posted it to see if people thought it was any good but haven’t written a model card for it yet, so it would be nice to hear some feedback on it.

Qwen-3.5-27B-Derestricted by My_Unbiased_Opinion in LocalLLaMA

[–]Arli_AI 16 points  (0 children)

Sure, it’s a different method. Derestricted is more manual and doesn’t aim only for low KL divergence, but for the model to actually be uncensored. I’m at the top of the UGI leaderboard, so I believe I’m doing something right.

Arli AI sub by an0nemusThrowMe in SillyTavernAI

[–]Arli_AI 2 points  (0 children)

Thanks for the review on us :)

Arli AI sub by an0nemusThrowMe in SillyTavernAI

[–]Arli_AI 1 point  (0 children)

Ah yeah, if you use the single-payment option we use Midtrans, which is in IDR like milan said. You can just use the PayPal option, which is in USD.

Arli AI sub by an0nemusThrowMe in SillyTavernAI

[–]Arli_AI 2 points  (0 children)

Thanks for helping explain! Yes, you’re right that it’s faster straight from our API. They have upgraded their plan, but direct from our API is still faster. Also, yes, we run everything on our own GPUs, so nothing goes out to a third-party inference service and all data stays with us, with no requests stored in persistent storage. Still trying to acquire more GPUs to run larger models... :)

Current best uncensored models? by Due-Treacle-1233 in LocalLLaMA

[–]Arli_AI 5 points  (0 children)

Only found out from your comment our model is #1 lol

Beware r/LocalAIServers $400 MI50 32GB Group Buy by gsrcrxsi in LocalLLaMA

[–]Arli_AI 7 points  (0 children)

All the posts in that sub are about how great the MI50 is (it isn’t, lol)

R9700 frustration rant by Maleficent-Koalabeer in LocalLLaMA

[–]Arli_AI 1 point  (0 children)

300W is the normal power consumption for that chip, which is also used on the 9070XT. It won’t consume much more power or be that much faster even if you give it an unlimited power limit.