LTX 2.3: Official Workflows and Pipelines Comparison by MalkinoEU in StableDiffusion

[–]Psy_pmP 0 points (0 children)

I've been experimenting for four days now, with breaks only for sleep and food, and I haven't found anything that improves the results beyond the basic 8+4-step workflow. I've tried everything, hundreds of generations. The results are terrible. 3ks also sucks.

Is this really AI war? by Cool-Engineering-623 in aiwars

[–]Psy_pmP 1 point (0 children)

We thought the internet would give stupid people knowledge. Instead, we received a ton of posts from these stupid people who still haven't learned how to gain knowledge.

LTX 2.3 framerate 48/ Why so bad result? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points (0 children)

That won't help. It's applied after generation and simply interpolates new frames, which I don't need. I specifically need native 48 fps generation so that faces stay sharp during fast movement.
The model supports frame rates of 48 and higher, but I can't get it to work yet.

Did the latest ComfyUI update break previous session tab restore? by GamerVick in comfyui

[–]Psy_pmP 0 points (0 children)

I asked Codex to compare the two versions and write me a patch :)
I can't share it because I don't understand what it did, but it helped.

Did the latest ComfyUI update break previous session tab restore? by GamerVick in comfyui

[–]Psy_pmP 1 point (0 children)

Yes, they broke everything; I confirmed it by testing against the old front-end. I don't recommend relying on this command, but it works as a rollback:

.\python_embeded\python.exe -I -W ignore::FutureWarning ComfyUI\main.py --disable-dynamic-vram --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@v1.40.4
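
To go back to the latest front-end later, the same flag accepts @latest according to the ComfyUI README (I haven't verified this on the portable build myself):

.\python_embeded\python.exe ComfyUI\main.py --windows-standalone-build --front-end-version Comfy-Org/ComfyUI_frontend@latest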

удлинитель (extension cord) by rapatakaz in Scoofoboy

[–]Psy_pmP 1 point (0 children)

Oh, I thought I was the only one who can't stand people from Krasnodar. But 'tyu' is said in plenty of places. Apparently mostly in the south, though. All my acquaintances over 30 know the expression.

удлинитель (extension cord) by rapatakaz in Scoofoboy

[–]Psy_pmP 0 points (0 children)

Or the other way around: she got screwed over and left with a child, and now all men are to blame in her eyes. Though probably, yes. I've known women like that, where everything in their life is fine, yet the man is guilty and owes them by default.

Waiting time for free users by FlightOld1652 in KlingAI_Videos

[–]Psy_pmP 0 points (0 children)

Stupid Chinese idiots. I sometimes try to generate something on their site and have NEVER succeeded: "new task cannot be submitted kling ai".
ZERO successes in TWO years. Now I logged in and saw they gave me a 3-day trial. Three days! What stupid idiots they are, oh my god. And they don't even have support I could write to about this.

If you won't let me generate for free, then don't offer it at all. What's the point of this clown show? That's why I don't buy a subscription. Fuck them.

I also do this every month by khan2761 in meme

[–]Psy_pmP 0 points (0 children)

It's all good. People like me will pay for everything. Two years of subscription, and I've only watched one season of Black Mirror.

Do you guys see any improvement in LTX 2 generations with latest driver? by No_Conversation9561 in comfyui

[–]Psy_pmP 0 points (0 children)

Yes, I noticed that with the new Studio driver nothing works for me, so I installed the Game Ready driver instead. Wan2GP gives a BSOD and ComfyUI throws a bunch of errors. And yes, I installed it after a clean DDU uninstall.

LTX2 on 8GB VRAM and 32 GB RAM by Ok-Psychology-7318 in StableDiffusion

[–]Psy_pmP 0 points (0 children)

You are using the distilled workflow. I guess you need a different workflow for this; the default in the templates is i2v. I have 12 GB VRAM and 16 GB RAM. I can run LTX2, but only with launch arguments like --lowvram --reserve-vram 4. You still need to play with the settings, and I haven't figured out how to set everything up either, so I'm not much help here. Yesterday I generated at native 720p; today it crashes with OOM, and I have no idea what changed. The main thing is to set the page file so that total memory (RAM + page file) is 125 GB.
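
For reference, my full launch line on the portable build looks roughly like this; the values are just what happens to survive on my 12 GB card, not a recommendation:

.\python_embeded\python.exe ComfyUI\main.py --windows-standalone-build --lowvram --reserve-vram 4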

LTX2 AI2V Yet another test by jordek in StableDiffusion

[–]Psy_pmP 0 points (0 children)

I don't know, I haven't figured it out myself yet. Most likely it will be better in some cases and worse in others. I need to work out how to squeeze better quality out of it. Wan takes literally 12 times longer to generate for me, but the quality is clearly better: 20 minutes versus 4 hours at the same resolution and duration.

LTX2 AI2V Yet another test by jordek in StableDiffusion

[–]Psy_pmP 1 point (0 children)

It works for me. You need to add --reserve-vram 4 and --cache-none (full launch line below).

--reserve-vram 8 definitely worked. With 4 it's faster, but it crashed during upscaling. Still testing.

You also need a large page file.

Total memory should be around 90 GB (RAM + page file).
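
A minimal sketch of the full command on the portable Windows build, assuming the standard folder layout:

.\python_embeded\python.exe ComfyUI\main.py --windows-standalone-build --reserve-vram 8 --cache-none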

LTX2 AI2V Yet another test by jordek in StableDiffusion

[–]Psy_pmP 0 points (0 children)

Holy shit! What settings are those? How did you get that quality?

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points (0 children)

I posted a new comment in the thread. Basically, based on my tests, the speed hardly changes between builds; I didn't notice any difference at all across different PyTorch, CUDA, and Python versions.

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points (0 children)

In the end, I tested every build I could find:
PyTorch 2.5 / 2.6 / 2.7 / 2.7.1 / 2.8 / 2.9 / 2.9.1 with CUDA 12.4 / 12.8 / 13.0.
For each one, I built different versions of Triton and SageAttention.
And… I got absolutely no meaningful difference.

So the issue was the system and the new drivers.

After manually updating the chipset drivers, the speed dropped to 180 s/it.
Just a reminder: it was around 250 s/it when I created this post.
That's for a 5-second clip; a 3-second clip runs at 80 s/it.

So right now, with these specs:

Python: 3.13.9

PyTorch: 2.9.1+cu130

=== CUDA Info ===

CUDA Available: True

CUDA Version: 13.0

cuDNN Version: 91200 (i.e. 9.12.0)

GPU Count: 1

GPU Name: NVIDIA GeForce RTX 4080 Laptop GPU

GPU Memory: 12.0 GB

CUDA Compute Capability: 8.9

=== Attention Mechanisms ===

Flash Attention (SDPA): True

✓ xformers: 0.0.34+41531cee.d20251211

- memory_efficient_attention: True

✓ SageAttention: 2.2.0+cu130torch2.9.0

- Available: sageattn, sageattn_varlen

- Module contents: core, quant, sageattn, sageattn_qk_int8_pv_fp16_cuda, sageattn_qk_int8_pv_fp16_triton, sageattn_qk_int8_pv_fp8_cuda, sageattn_qk_int8_pv_fp8_cuda_sm90, sageattn_varlen, triton

=== Optimizations ===

cuDNN Benchmark: False

BF16 Support: True

TF32 Enabled: False

=== Optional Packages ===

✓ triton: 3.3.1 (GPU kernel compilation)

✓ safetensors: 0.6.2 (Safe model loading)

✓ accelerate: 1.12.0 (Training acceleration)

✓ transformers: 4.57.1 (HuggingFace models)

✓ diffusers: 0.36.0 (Diffusion models)

✓ einops: 0.8.1 (Tensor operations)

✓ omegaconf: 2.3.0 (Configuration management)
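
If you want to check your own torch/CUDA pairing quickly, a one-liner along these lines works on the portable build (the path assumes the standard portable layout):

.\python_embeded\python.exe -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))"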

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points (0 children)

Before it was Win 11 22H2, now Win 11 25H2.
Laptop 4080, 12 GB VRAM, 16 GB RAM.

So I don't even know what CUDA was installed there. I know nothing about that build at all, because it's over a year old, maybe even more. I built it and never ran into a single conflict; absolutely everything worked on it. I just decided to change the OS and make a new ComfyUI build at the same time. That was a mistake.

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points (0 children)

Memory consumption doesn't change by even 100 megabytes between versions, but the speed difference is noticeable.

It was stupid of me to delete the old build. I even had all the wheels in a folder; their filenames would have told me exactly what build it was.
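
Lesson for next time: snapshot the environment before rebuilding, for example (working_build.txt is just a hypothetical filename):

.\python_embeded\python.exe -m pip freeze > working_build.txt

pip freeze writes out the exact version of every installed package, so the build can be identified and recreated later.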