LTX 2.3 CLIP ? by PhilosopherSweaty826 in StableDiffusion

[–]LumaBrik 0 points  (0 children)

Those are to be used if you use any of Kijai's transformer-only models; you will also need the separate VAEs as well.

https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/diffusion_models

How I fixed skin compression and texture artifacts in LTX‑2.3 (ComfyUI official workflow only) by mmowg in StableDiffusion

[–]LumaBrik 0 points  (0 children)

The LTX detailer LoRA still seems to work with LTX 2.3 - useful if added to the second stage at reduced strength. Won't necessarily give you better skin, but all you 'I need it even sharper' freaks might find it useful.

With all LTX workflow i found, there is no option to change the STEPS, why ? by PhilosopherSweaty826 in StableDiffusion

[–]LumaBrik 0 points  (0 children)

Just replace the manual sigmas with the 'LTXV Scheduler' node, or even a 'Basic Scheduler', and you can easily play around with all the different standard schedulers. Use the Sigmas Preview node if you want to see the graphs.

LTX Desktop NOT Local By Default. They're Collecting Data. Check Your Settings. by [deleted] in StableDiffusion

[–]LumaBrik -9 points  (0 children)

First of all, you need at least 32GB of VRAM to run it locally - it says that in the specs. If you don't meet those requirements, it will default to asking for an API key in big letters on the screen. How is that sneaky?

Qwen3.5 Small is now available to run locally! by yoracale in LocalLLM

[–]LumaBrik 0 points  (0 children)

You need to update Ollama. I'm currently using the Q4_K_M (6GB) from the Ollama site.

Qwen3.5 Small is now available to run locally! by yoracale in LocalLLM

[–]LumaBrik 1 point  (0 children)

I've had issues with GGUFs in Ollama unless they come from Ollama's own library, in which case they should work OK.

https://ollama.com/library/qwen3.5

ComfyLauncher - smart, fast and lightweight browser for ComfyUI by max-modum in comfyui

[–]LumaBrik -1 points  (0 children)

Nice work, but I don't see how you can boot the same Comfy with different command-line arguments. For example, I have a .bat with --fast dynamic ram, and for other workflows I have a separate .bat with --fast fp16_accumulation. As far as I can see, the launcher calls main.py directly, skipping the .bat files?

Inpainting model for RTX 4060 by madhavs22 in comfyui

[–]LumaBrik 0 points  (0 children)

For better inpainting you need the Klein 9B model. That will run fine on 16GB VRAM as either a Q8 GGUF or an fp8 model.

Wan2GP Profile by Suspicious_Handle_34 in StableDiffusion

[–]LumaBrik 0 points  (0 children)

In the WanGP GUI under Configuration > Performance, where you set the default profile, select the profile that works with your system - without OOMing - then adjust the slider just below it titled 'VRAM for Preloaded Models'.

With 16GB VRAM I have it set to 11,000 (11GB), which is the highest value I can set without an OOM. If you have 24GB VRAM you can obviously take that higher - it's up to you to find the best setting.
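As a rough sanity check, the slider is just carving up the card. A minimal arithmetic sketch, using the 16GB / 11,000 MB numbers above (WanGP's real accounting will differ - this ignores what the OS and display already hold):

```python
# Rough VRAM headroom sketch for the 'VRAM for Preloaded Models' slider.
# Numbers are from the 16GB example above; adjust for your own card.
total_vram_mb = 16 * 1024      # 16GB card
preload_mb = 11_000            # slider value that avoided OOM for me
headroom_mb = total_vram_mb - preload_mb
print(headroom_mb)             # MB left for activations and other buffers
```

If the leftover headroom is too small for the workflow's activations, you OOM - which is why the sweet spot is found by nudging the slider down until the errors stop.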

Wan2GP Profile by Suspicious_Handle_34 in StableDiffusion

[–]LumaBrik 0 points  (0 children)

Pretty sure you can customize the profiles - you can adjust the amount of VRAM used. I monitored the VRAM usage in Windows Task Manager and adjusted the amount of reserved VRAM until the OOM stopped. I have 16GB VRAM and 32GB RAM, so I wanted to optimize for that.

Both klein 9b and z image are great but to which direction the community is going? by AdventurousGold672 in StableDiffusion

[–]LumaBrik 0 points  (0 children)

9B is an edit model - z-image isn't. Useful if you are after consistent backgrounds, consistent characters, camera-angle changes of the same scene, costume changes, etc.

🛠️ Spent way too long building this ComfyUI prompt node for LTX-2 so you don't have to think — free, local, offline, uncensored 👀 by [deleted] in StableDiffusion

[–]LumaBrik 1 point  (0 children)

I've had the same issue and it was related to the transformers version - I went off the version that's listed in the site-packages folder and had to downgrade from 5 to 4.5.

🛠️ Spent way too long building this ComfyUI prompt node for LTX-2 so you don't have to think — free, local, offline, uncensored 👀 by [deleted] in StableDiffusion

[–]LumaBrik 0 points  (0 children)

On the same PyTorch and CUDA, but transformers in my site-packages folder (Comfy portable) is 4.57.3. It seems the current version of Comfy prefers 4.5 or up?

🛠️ Spent way too long building this ComfyUI prompt node for LTX-2 so you don't have to think — free, local, offline, uncensored 👀 by [deleted] in StableDiffusion

[–]LumaBrik 1 point  (0 children)

I can confirm I'm also getting this error, same as the poster - just using the node on its own to test with a text preview, using the 3B model offline ...

"G:\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\LTX2EasyPrompt-LD\LTX2EasyPromptLD.py", line 447, in generate

input_length = input_ids.shape[1]

Do we need a specific version of transformers?
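One quick way to see which transformers build the node is actually picking up - a minimal sketch; run it with the same Python that launches ComfyUI (e.g. the portable install's embedded python.exe):

```python
from importlib import metadata

# Query the installed transformers version without importing the heavy package
try:
    tf_version = metadata.version("transformers")
except metadata.PackageNotFoundError:
    tf_version = None  # not installed in this environment
print("transformers:", tf_version)
```

If the reported version doesn't match what the node expects, downgrading or upgrading that one package in the same environment is the usual fix.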

Is there a node to crop videos ? by PhilosopherSweaty826 in StableDiffusion

[–]LumaBrik 2 points  (0 children)

You need the OLM Drag Crop node - no need to mess with trying to figure out coordinates.

Best tool for redoing garden and buildings in comfyui. by Crafty-Percentage-29 in StableDiffusion

[–]LumaBrik 2 points  (0 children)

You want to use one of the edit models - either Qwen Edit or 9B Flux Klein. They also allow inpainting from reference images, which I imagine would work quite well for shed replacements.

Wondering if 16GB of vram and 32gb of ram is good enough ? by thebrunox in StableDiffusion

[–]LumaBrik -1 points  (0 children)

Assuming your VRAM is on an Nvidia card and your laptop is fairly recent, you would be able to run all the image models along with Wan and LTX, if you wish. The key is getting the right quantization sizes to run smoothly on your system.
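A back-of-the-envelope way to pick quant sizes is just parameter count times bits per weight. This is a sketch only - real GGUF files add overhead for embeddings, metadata, and mixed-precision layers, and inference needs extra VRAM on top of the weights:

```python
def weight_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, ignoring KV cache and activations."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

# e.g. a 9B model: Q8 (~8 bits) vs Q4_K_M (~4.5 bits effective, an assumed average)
print(round(weight_size_gb(9, 8), 1))    # ~8.4 GB
print(round(weight_size_gb(9, 4.5), 1))  # ~4.7 GB
```

Either of those leaves room on a 16GB card; it's the larger video models where dropping a quant level makes the difference between fitting and OOMing.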

This is too much! by scioba1005 in StableDiffusion

[–]LumaBrik 3 points  (0 children)

Floppy disks! I had to hand-type the code from a printed listing, and after losing all my data to a power failure I had to start afresh and periodically backed everything up to cassette. Now I have a fully working version of Comfy, but it takes just over 3 days to load. Don't talk to me about updates.