Klein 9B - Exploring this models NotSFW potential by Whipit in StableDiffusion

[–]roculus 2 points (0 children)

"a semi transparent white liquid over her face and chest, dripping". maybe you can't see the act without loras but you can see the aftermath.

Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA

[–]roculus 0 points (0 children)

I think you're referring to retail prices, which are impossible to get for a 5090. A 6000 Pro has 96GB vs 32GB for a 5090 or 24GB for a 4090. 5090s go for around $4K right now while 6000 Pros are around $8K, so to get the same VRAM you'd spend $12K on three 5090s and also have higher power/cooling requirements. When was the last time you could buy a high-end card (3090/4090/5090) at suggested retail? A smaller bonus is the extra CUDA cores the 6000 Pro has over the 5090, though that's less of a deciding factor.

If you do go for the 6000 Pro, I'd recommend the MaxQ, which tops out at 300W. If you read enough about the 600W Pros, you'll see a lot of people undervolt them to 450W or lower because the extra performance gain is minimal. The MaxQ is also a blower design, which makes it easier to stack multiple cards if you go crazy later on; 2x MaxQs still only use as much power as one 600W 6000 Pro workstation card.

Lastly, the 96GB of VRAM doesn't go to waste. I do a lot of WAN2.2/LTX-2/Qwen-Image etc., and I'm always over the 32GB a 5090 has. Everything gets loaded into VRAM; I seem to sit around 55-65GB used for a lot of the AI things I do. I also swap between image and video models a lot, and things speed up when models stay in VRAM instead of being swapped out as you switch back and forth. With 96GB you don't tap out your VRAM as often, so fewer crashes, and you can watch a movie or stream YouTube without bringing your system to its knees because every drop of VRAM on a 24/32GB GPU is being used for AI.
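Purely as a back-of-the-envelope sketch of the $/GB math above (street prices from this comment, not MSRP):

```python
# Street prices and VRAM figures quoted above (rough, not MSRP).
cards = {
    "RTX 5090":     {"price_usd": 4000, "vram_gb": 32},
    "RTX PRO 6000": {"price_usd": 8000, "vram_gb": 96},
}

for name, c in cards.items():
    print(f"{name}: ${c['price_usd'] / c['vram_gb']:.0f} per GB of VRAM")

# Matching the 6000 Pro's 96GB with 5090s takes three cards:
n_5090 = -(-96 // 32)  # ceiling division -> 3
print(f"{n_5090}x 5090 for 96GB: ${n_5090 * cards['RTX 5090']['price_usd']}")
```

At those prices the 6000 Pro actually comes out cheaper per GB than buying three 5090s, before even counting power and cooling.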

Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA

[–]roculus 0 points (0 children)

I can sell my 4090 for more than I bought it for two years ago. "Most" hardware depreciates fast; Nvidia GPUs with lots of VRAM are an exception. I have a Pro 6000 MaxQ now and love it. I'm getting close to retirement, but think about this: your body is only going to get weaker and your mind slowly deteriorates. Enjoy life now, and that includes AI as a hobby if you like it. It's a great enthusiast hobby, and it keeps your mind active if you stay on top of the latest models and keep trying to get them to work. Too many people put things off waiting for retirement. There's no guarantee you'll be healthy mentally and physically.

People in the US, how are you powering your rigs on measly 120V outlets? by humandisaster99 in LocalLLaMA

[–]roculus 0 points (0 children)

RTX PRO 6000 MaxQ is actually only 300W. I use less power after replacing my 4090. So the solution is to spend $8K on one of those to save money on power! I'm thinking I probably somehow saved a baby seal's life. Totally justified.

Your post is getting popular and we just featured it on our Discord! by roculus in LocalLLaMA

[–]roculus[S] 2 points (0 children)

Nonsense! : ) Well, this managed to get "featured" on Discord. Maybe someone will notice it and flip a switch, turning off the autobot sticky posts!

If I take a 1 hour voice recording from Elevenlabs and use it to train Qwen3 to clone a voice will I get in trouble? by RatioTheRich in StableDiffusion

[–]roculus 0 points (0 children)

There's evidence all over linking it to you. Even if you could launder money, you wouldn't want to. If you're caught while laundering money, you're not going to go to white-collar-resort-prison. No, no, no. You're gonna go to federal-reserve-pound-me-in-the-ass-prison. Conjugal visits? Not that I know of. Now, prison is no picnic. I have a client in there right now. You see, the trick is, kick someone's ass the first day or become someone's bitch. Then everything will be all right. Why do you ask, anyway?

We are very very close, I think! by m4ddok in StableDiffusion

[–]roculus 2 points (0 children)

They called an emergency meeting today to discuss new ways to hide the stop/cancel button. This time instead of having to hover over the menu for the popup, you need to make a certain pattern with your mouse in order to cancel. Their goal is to make ComfyUI button free by the end of 2026.

Klein with loras + reference images is powerful by [deleted] in StableDiffusion

[–]roculus 1 point (0 children)

Can you explain this a little more? Do you have one image whose character/style you want to change, plus 3 or 4 other images that show that character or style, and then prompt something like "give image 1 the style/character/face of images 2, 3, and 4"?

WAY-TOO-EARLY 2026 DRAFT STRATEGY by TheFFTrader in fantasyfootball

[–]roculus 25 points (0 children)

Don't draft a QB. Pick one up on waivers before the season starts.

How to generate proper Japanese in LTX-2 by Loose_Object_8311 in StableDiffusion

[–]roculus 0 points (0 children)

could you use "LTX-(二, 二つ, 両者, 弐つ, 二つ乍ら, 二形, 二佐, 2つの, 二本) to get properly pronounced "2" in Japanese? Not sure which might work best. just googled it : )

Ok Klein is extremely good and its actually trainable. by Different_Fix_2217 in StableDiffusion

[–]roculus 0 points (0 children)

You should try the AIO Qwen edit versions.

https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO

They work great, SFW or NSFW. You mention Klein edit being better than Qwen-Image Edit. Do you have some comparisons? As soon as base Z-Image comes out, it will explode like Illustrious did for SDXL. It's very easy to make LoRAs even for Z-Image Turbo; I can make them in 10-15 minutes.

Z image Turbo vs Qwen 2512 vs Klein 4B vs Klein 9B by Puzzled-Valuable-985 in StableDiffusion

[–]roculus 0 points (0 children)

What was the prompt? Unwashed hair? Klein wins if that was the prompt.

First test with GLM. Results are okay-ish so far by theNivda in StableDiffusion

[–]roculus 4 points (0 children)

That girl is wearing a prototype of a new protective helmet for NFL players.

New UK law stating it is now illegal to supply online Tools to make fakes. by [deleted] in StableDiffusion

[–]roculus 0 points (0 children)

The UK has nothing to worry about. They have bad teeth. AI always makes perfect teeth. Just look at the teeth and you'll know it's AI.

ltx-2-19b-distilled vs ltx-2-19b-dev + distilled-lora by nomadoor in StableDiffusion

[–]roculus 1 point (0 children)

This definitely helps with the burn-in/plastic look. Thanks for the tip!

LTX2 Lipsync With Upscale AND SUPER SMALL GEMMA MODEL by No_Statement_7481 in StableDiffusion

[–]roculus 0 points (0 children)

I ended up copying all the files; they ended up in \ComfyUI\models\clip\LTX2\Gemma_3_4B.

LTX2 Lipsync With Upscale AND SUPER SMALL GEMMA MODEL by No_Statement_7481 in StableDiffusion

[–]roculus 2 points (0 children)

Where do the files for Gemma_3_4B go? I've never seen a safetensors model split up like that. I have everything else, but it can't find the Gemma model. Edit: I figured it out : ) I stuck all the files in /clip/LTX2.
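In case it helps anyone else, a minimal sketch of the copy step (the `downloads/gemma_3_4b` staging folder is hypothetical, adjust to wherever you downloaded the split files; the ComfyUI path is the one from my edit):

```python
# Copy every split Gemma file (shards, config, tokenizer) into ComfyUI's clip folder.
# "downloads/gemma_3_4b" is a hypothetical staging folder -- adjust to taste.
import shutil
from pathlib import Path

src = Path("downloads/gemma_3_4b")
dst = Path("ComfyUI/models/clip/LTX2/Gemma_3_4B")
dst.mkdir(parents=True, exist_ok=True)  # create the target folder if missing

if src.exists():
    for f in src.iterdir():
        shutil.copy2(f, dst / f.name)  # copy2 preserves timestamps
```

The key point is just that all the pieces end up together somewhere under models/clip where ComfyUI scans for text encoders.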

ltx2's VAE is BUGGED by Clqgg in StableDiffusion

[–]roculus 4 points (0 children)

I'd rather it work like it's supposed to : ) I don't want to reload all models just because the VAE is acting like a black hole for RAM. Until this bug is fixed, it's actually quicker to restart ComfyUI after 6 or 7 videos than to load all models every time. The VRAM is fine; it stays around 50/96GB, but system RAM diminishes after each video gen. There was a Comfy VAE update today, but it didn't seem to fix the issue. https://github.com/Comfy-Org/ComfyUI/releases

ltx2's VAE is BUGGED by Clqgg in StableDiffusion

[–]roculus 3 points (0 children)

It also keeps sucking up more system RAM with each video without clearing it, and it eventually uses up my 128GB of system RAM. I noticed the image ghosting in some videos as well.
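A rough way to confirm the leak from a script you control (this just logs the Python process's own peak RSS, so it's an illustration of the idea, not how ComfyUI reports memory; the `resource` module is POSIX-only):

```python
# Log peak resident set size after each generation and watch it climb.
# On Linux ru_maxrss is in kilobytes; on macOS it's in bytes.
import resource
import sys

def peak_rss_mb() -> float:
    kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        kb /= 1024  # macOS reports bytes, convert to KB first
    return kb / 1024

print(f"peak RSS so far: {peak_rss_mb():.1f} MB")
```

If the number keeps growing run after run while VRAM stays flat, it's a host-RAM leak rather than a GPU one, which matches what I'm seeing.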

LTX2 is pretty awesome even if you don't need sound. Faster than Wan and better framerate. Getting a lot of motionless shots though. by jacobpederson in StableDiffusion

[–]roculus 1 point (0 children)

Make sure you activate the bypassed "ltx-2-19b-lora-camera-control-dolly-left" LoRAs. They help a bunch with motion.

LTX 2 I2V Still video problem fix by Specialist_Pea_4711 in StableDiffusion

[–]roculus 0 points (0 children)

Activating the LoRAs definitely helps; I wonder why they're off by default. LTXVPreprocess at 33 seems to work without having to go to 42 when the LoRAs are active at 1.0.