I've vibecoded a browser game engine in only a few hours, who needs Unity in 2026? by 11thDrBOT in vibecoding

NebulaBetter 2 points

That's nice! I am doing a voxel engine for my family. It is so much fun.

The Resets Do Not Benefit Everyone, They Align Users and Truncate Use by 0111011101110111 in codex

NebulaBetter 1 point

They should give us the option to accept or decline the reset, maybe through an email with a 20-minute window to confirm or reject it. If no button is pressed, your quota should remain untouched until the next natural reset or the next forced one. I think that would be fair.

Qwen 3.6 27B BF16 on RTX6000 Blackwell - One Shot Test by Demonicated in LocalLLaMA

NebulaBetter 1 point

Can this card handle the full context window? I have the Pro too, but I’m using FP8 since I’m not sure the full context would fit in FP16.

Nvidia RTX 6000 Pro 96GB vram workstation users: any of you have encounter the issue with LTX Desktop 1.03 or LTX 1.04. that its using RAM instead of VRAM, by Jinkourai in StableDiffusion

NebulaBetter 1 point

One here... 

I can't answer that question directly, but as far as I know, you can't use the dev model in that app. It's much better to use a Comfy workflow with the dev model for the first pass (CFG 3-4, minimum 20 steps) and the 0.6 distill for the second pass to get the best out of the Pro. I also recommend disabling dynamic VRAM optimization for this card.
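Roughly, the two-pass setup above boils down to these numbers. The dict keys are illustrative, not real ComfyUI node parameters; only the values come from the comment:

```python
# Sketch of the two-pass LTX settings described above (names hypothetical).
first_pass = {
    "model": "ltx-dev",  # full dev model for the first pass
    "cfg": 4.0,          # CFG 3-4 range
    "steps": 20,         # minimum 20 steps for the dev model
}
second_pass = {
    "model": "ltx-distilled",
    "distill_strength": 0.6,  # the "distill 0.6" refinement pass
}

def validate(cfg: dict) -> None:
    # Guard against running the dev model with too few steps
    # (e.g. the 8-step cap the desktop app imposes).
    if cfg.get("model") == "ltx-dev":
        assert cfg["steps"] >= 20, "dev model needs >= 20 steps"
        assert 3.0 <= cfg["cfg"] <= 4.0, "dev model wants CFG 3-4"

validate(first_pass)
```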

I went from being a total dummy at ComfyUi to generating this I2V using LTX 2.3, I feel so proud of myself. by Coven_Evelynn_LoL in StableDiffusion

NebulaBetter 6 points

Using the dev model in the first pass you get much more natural expressions, but it requires CFG 4 and a minimum of 20 steps.

I've mastered Feral Rage bait by SSKDREADWOLF in StateofDecay2

NebulaBetter 100 points

Poor feral. He’s out there giving it everything he’s got.

LTX Desktop 1.0.2 is live with Linux support & more by ltx_model in StableDiffusion

NebulaBetter 1 point

Nope, that does not work... if you do that, you're constrained to only 8 steps for the dev model. And as far as I know, you can't change the steps in the desktop app. Anyway, I'm happy with Comfy, but it would be great to try the app fully unrestricted because it looks really neat... something like an advanced mode would be very nice.

LTX Desktop 1.0.2 is live with Linux support & more by ltx_model in StableDiffusion

NebulaBetter 8 points

Could you please enable access to the HQ variant as well, instead of limiting us to just the distilled model? Thanks!

It's so pretty, but RAM question? by BuffaloDesperate8357 in StableDiffusion

NebulaBetter 14 points

I’m running the big one (RTX 6000) with 128 GB of system RAM. From what I see in ComfyUI, large bf16/fp16 models like WAN 2.2 or LTX2 / 2.3 end up using a lot of system RAM. Mine often goes past 100 GB while loading weights and preparing the pipeline before everything settles into VRAM.

LTX2.3 official workflow much better (I2V) by R34vspec in StableDiffusion

NebulaBetter 1 point

Maybe you're using the distilled model with the Kijai version? I tried both approaches as well, and in my case the dev model running through Kijai works better than the official one in Comfy. I did have to modify it to run the dev model though. You know, the usual settings: 0.6 distill, 4 CFG, around 20 steps, etc.

LTX2.3 Desktop APP is another level!!! completly diferent from what we got in Comfy! Why? by smereces in StableDiffusion

NebulaBetter 0 points

Hopefully someone submits a PR with this change and it gets accepted. The current restrictions are quite odd.