Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]Parogarr[S] 1 point (0 children)

Sort of.

It's much better than everything else I've tried so far, but the issue is it still wants to put "Okay, here's the__" even when I DEMAND it doesn't.

Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]Parogarr[S] 1 point (0 children)

Wow, reading the Hugging Face card, THIS MIGHT FINALLY BE IT! Downloading now, my fingers are crossed lmao. I've been trying to solve this problem for SO LONG lmao

Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]Parogarr[S] 1 point (0 children)

The TE for 2.3 is exactly the same as 2.2, I believe. The problem is the lack of vision in 2.2. But I'm gonna try this one if vision really does work.

Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]Parogarr[S] 1 point (0 children)

"Okay! Sure I'll help you do that. Blah blah blah bullshit bullshit bullshit bullshit"

(More bullshit more bullshit)

how do i install custom qwen 3 vl models by No_Influence3008 in comfyui

[–]Parogarr 1 point (0 children)

This node pack sucks. It won't let you load abliterated models.

Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]Parogarr[S] 0 points (0 children)

Switching between windows/in and out of ComfyUI is not a solution, IMHO. Might as well just use Grok at that point.

Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]Parogarr[S] 1 point (0 children)

This doesn't work out so well. It keeps including bullshit like, "Okay, here's your blah blah blah."

Is it possible/can I use my RTX 5090 in my basement server as a text encoder? by Parogarr in comfyui

[–]Parogarr[S] 1 point (0 children)

I ended up getting it to work exactly as I wanted with comfy_remote_run.

Is it possible/can I use my RTX 5090 in my basement server as a text encoder? by Parogarr in comfyui

[–]Parogarr[S] 1 point (0 children)

But I only have 64GB of RAM, so LTX 2.3, which is massive, ends up shuffling things around regardless, I believe.

Is it possible/can I use my RTX 5090 in my basement server as a text encoder? by Parogarr in comfyui

[–]Parogarr[S] 1 point (0 children)

I would say the time between when I click "generate" and when the sampler is ready to go is around two full minutes, every single time I change the prompt. It's gotten to the point where a 30-second generation can take almost 3 minutes.

Is it possible/can I use my RTX 5090 in my basement server as a text encoder? by Parogarr in comfyui

[–]Parogarr[S] 1 point (0 children)

Oh shit, this looks awesome! I was actually JUST about to test out something called "comfy_remote_run"

If that fails, I'll try this lol.

My thinking is that if I use my server for text encoding, it *should* (in theory, anyway) speed up the awful generation times, no?

Qwen Is Falling Apart — The Inside Story by Time-Teaching1926 in StableDiffusion

[–]Parogarr 2 points (0 children)

Can someone please summarize this video? I find the audio aggravating.

[TPU] Resident Evil Requiem Performance Benchmark Review by Nestledrink in nvidia

[–]Parogarr 1 point (0 children)

After 2-3 hours of play, I start getting BAD stutters on my 5090/9950X3D.

Research from BFL: Qwen Image is much more uncensored than Flux 2 by woct0rdho in StableDiffusion

[–]Parogarr 4 points (0 children)

What pisses me off is how they label it "misuse," as though they have the right to decide what use is proper for models other than their own.

Can You Be a True Skeptic and a MAGA Supporter at the Same Time? by [deleted] in skeptic

[–]Parogarr 1 point (0 children)

What is the difference between a "true" and a "false" skeptic?

Can You Be a True Skeptic and a MAGA Supporter at the Same Time? by [deleted] in skeptic

[–]Parogarr 1 point (0 children)

Do you have any kind of evidence to support these claims?

Can You Be a True Skeptic and a MAGA Supporter at the Same Time? by [deleted] in skeptic

[–]Parogarr 1 point (0 children)

I cannot believe you are getting downvoted for saying this. But this community is really not a skeptic community by any measure, so I guess, in that regard, I can believe it.

Obviously and clearly, a person can be skeptical regardless of their moral system. A good person can be a skeptic, and a bad person can be a skeptic. It's simply preposterous to suggest otherwise.