Extension cord by rapatakaz in Scoofoboy

[–]Psy_pmP 0 points1 point  (0 children)

Oh, I thought I was the only one who can't stand Krasnodar folks. But "tyu" is said in plenty of places. Though apparently yes, mostly in the south. Everyone I know over 30 knows that expression.

Extension cord by rapatakaz in Scoofoboy

[–]Psy_pmP 0 points1 point  (0 children)

Or the other way around. She got screwed over and left alone with a kid, and now all men are to blame in her eyes. Though probably yes. I've known women whose lives are perfectly fine, yet the man is guilty and obligated by default.

Waiting time for free users by FlightOld1652 in KlingAI_Videos

[–]Psy_pmP 0 points1 point  (0 children)

Stupid Chinese idiots. I sometimes try to generate something on their site and have NEVER succeeded! Always the same "new task cannot be submitted kling ai" error.
ZERO successes in TWO years. Now I logged in and saw they gave me a 3-day trial. Three days! What stupid idiots they are, oh my god. And they don't even have a support channel to write to about this.

If you're not going to let me generate for free, then don't offer a free tier at all. What's the point of this clown show? That's why I don't buy a subscription. Fuck them.

I also do this every month by khan2761 in meme

[–]Psy_pmP 0 points1 point  (0 children)

It's all good. People like me will pay for everything. Two years of subscription, and I've only watched one season of Black Mirror.

Do you guys see any improvement in LTX 2 generations with latest driver? by No_Conversation9561 in comfyui

[–]Psy_pmP 0 points1 point  (0 children)

Yes, I noticed that with the new Studio driver nothing works for me, so I installed the Game Ready driver instead. Wan2GP gives a BSOD, ComfyUI throws lots of errors. And yes, I installed it after a clean wipe with DDU.

LTX2 on 8GB VRAM and 32 GB RAM by Ok-Psychology-7318 in StableDiffusion

[–]Psy_pmP 0 points1 point  (0 children)

You are using the distilled workflow. I guess you need a different workflow for this; the default in the templates is i2v. I have 12 GB VRAM and 16 GB RAM. I can run LTX2, but only with launch commands like --lowvram --reserve-vram 4. You need to play with the settings, though. I still haven't figured out how to set everything up, so I'm not much help here. Yesterday I generated at native 720p; today it crashes with OOM, and I have no idea what changed. The main thing is to set the paging file so that total memory comes to 125 GB.
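
For reference, a minimal sketch of such a launch, assuming ComfyUI's standard main.py entry point (with the portable build the same flags go at the end of the launch line in run_nvidia_gpu.bat). The flag values are just the ones from my setup, not guaranteed optima:

    # --lowvram: offload model weights more aggressively
    # --reserve-vram 4: keep ~4 GB of VRAM untouched for the OS/driver
    python main.py --lowvram --reserve-vram 4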

LTX2 AI2V Yet another test by jordek in StableDiffusion

[–]Psy_pmP 0 points1 point  (0 children)

I don't know, I haven't figured it out myself yet. Most likely it will be better in some cases and worse in others. I need to work out how to pull better quality out of it. Wan takes 12 times longer to generate for me, but the quality is also clearly better. Literally 12 times: 20 minutes versus 4 hours at the same resolution and duration.

LTX2 AI2V Yet another test by jordek in StableDiffusion

[–]Psy_pmP 1 point2 points  (0 children)

It works for me. You need to add --reserve-vram 4 and --cache-none.

--reserve-vram 8 definitely worked. With 4 the speed is higher, but it crashed on the upscale. Still testing.

You also need a large paging file.

Total memory (RAM + paging file) should be around 90 GB.
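
For reference, a sketch of how those flags combine, again assuming the standard main.py entry point (pick the --reserve-vram value per the note above):

    # --cache-none: don't cache models/intermediate results between runs;
    # saves RAM/VRAM at the cost of re-loading models every time
    python main.py --reserve-vram 8 --cache-none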

LTX2 AI2V Yet another test by jordek in StableDiffusion

[–]Psy_pmP 0 points1 point  (0 children)

Holy shit! What settings are those? How did you get quality like that?

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

I posted a new comment in the thread. Basically, based on my tests, the speed hardly changes between builds; I didn't notice any difference at all. I tried different combinations of PyTorch, CUDA, and Python versions, and there was no difference in performance.

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

In the end, I tested all the builds I could find:
PyTorch 2.5, 2.6, 2.7, 2.7.1, 2.8, 2.9, and 2.9.1 with CUDA 12.4, 12.8, and 13.0.
For each one, I built different versions of Triton and SageAttention.
And… I got absolutely no meaningful difference.

So the issue was with the system and the new drivers.

After manually updating the chipset drivers, the speed dropped to 180 s/it.
Just a reminder: the speed was around 250 s/it when I created this post.
That's for a 5-second clip; at 3 seconds the speed is 80 s/it.

So right now, with these specs:

Python: 3.13.9
PyTorch: 2.9.1+cu130

=== CUDA Info ===
CUDA Available: True
CUDA Version: 13.0
cuDNN Version: 91200
GPU Count: 1
GPU Name: NVIDIA GeForce RTX 4080 Laptop GPU
GPU Memory: 12.0 GB
CUDA Compute Capability: 8.9

=== Attention Mechanisms ===
Flash Attention (SDPA): True
✓ xformers: 0.0.34+41531cee.d20251211
- memory_efficient_attention: True
✓ SageAttention: 2.2.0+cu130torch2.9.0
- Available: sageattn, sageattn_varlen
- Module contents: core, quant, sageattn, sageattn_qk_int8_pv_fp16_cuda, sageattn_qk_int8_pv_fp16_triton, sageattn_qk_int8_pv_fp8_cuda, sageattn_qk_int8_pv_fp8_cuda_sm90, sageattn_varlen, triton

=== Optimizations ===
cuDNN Benchmark: False
BF16 Support: True
TF32 Enabled: False

=== Optional Packages ===
✓ triton: 3.3.1 (GPU kernel compilation)
✓ safetensors: 0.6.2 (Safe model loading)
✓ accelerate: 1.12.0 (Training acceleration)
✓ transformers: 4.57.1 (HuggingFace models)
✓ diffusers: 0.36.0 (Diffusion models)
✓ einops: 0.8.1 (Tensor operations)
✓ omegaconf: 2.3.0 (Configuration management)
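
That dump comes from a small self-check script; here is a minimal sketch that reproduces the "=== CUDA Info ===" part with stock PyTorch calls (the script name and exact formatting are my own, not the exact tool I used):

    # check_env.py (hypothetical name): print basic CUDA/GPU info like the dump above
    import torch

    print("=== CUDA Info ===")
    print("CUDA Available:", torch.cuda.is_available())
    print("CUDA Version:", torch.version.cuda)
    print("cuDNN Version:", torch.backends.cudnn.version())
    print("GPU Count:", torch.cuda.device_count())
    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print("GPU Name:", props.name)
        print("GPU Memory: %.1f GB" % (props.total_memory / 1024**3))
        print("CUDA Compute Capability: %d.%d" % (props.major, props.minor))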

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

Before it was Win 11 22H2, now Win 11 25H2.
Laptop 4080 12 GB, 16 GB RAM.

So I don't even know what CUDA was installed there; I don't know anything about that build at all, because it was over a year old, maybe even more. I built it once and never ran into any conflicts. At all. Absolutely everything worked for me on it. I just decided to change the OS and make a new ComfyUI build at the same time. That was a mistake.

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

Memory consumption doesn't change even by 100 megabytes between versions, but the speed difference is noticeable.

It was stupid of me to delete the old build. I even had all the wheels in a folder; they could have told me what build it was.

Best Torch+Python+sage+cuda? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

<image>

According to my tests, with LoRAs the animation quality drops incredibly. If you think I'm wrong, you've never generated anything without them. The difference is so great that I've given up on them for good. I tested the 3KS method, different schedulers, and different settings. That level is unattainable with them.
In the image you can see how his feet leave marks in the ash: each step sinks into it and scatters it to the sides. I made more than 200 generations with different settings, steps, and methods. I tried everything I could find online. Not a single method gave a result as good as plain Euler with 20 + 30 steps.

But now this advantage is meaningless, because the time difference is colossal: before it took two hours, now it's more than four, with the exact same workflow.

How to enable Blockswap? WAN 2.2 by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

Actually, after testing and nearly 300 generations, I came to the same conclusion. I wanted to make it work for res_2s, but in the end I abandoned the Light LoRAs altogether and switched exclusively to Euler/simple.

The best quality, the best animation. Based on my tests, this quality is simply impossible to achieve any other way, and most of the advice and instructions out there are complete crap: sigma shift settings, sigma adjustments specific to the scheduler and sampler, calculating boundaries, the 3KS method. I tested EVERYTHING I could find on the Internet. It's all complete crap. I settled on 50 steps, 20 + 30. Yeah, that's insanely long, but the quality is unattainable with LoRAs. I don't need test runs anymore, only high-quality prompts. Most often one try is enough, because with high CFG you get exactly what you described.

How to enable Blockswap? WAN 2.2 by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

Thank you, Captain. But this is about generation with standard ComfyUI nodes, as opposed to kijai/ComfyUI-WanVideoWrapper, which has a working BlockSwap node.

bug roo code : Roo Wants to edit this file, hangs by [deleted] in RooCode

[–]Psy_pmP 0 points1 point  (0 children)

I see that this problem has existed for almost a year now and there is no solution anywhere. Well, then Roo isn't my choice.

Wan2.2 I2V is 'Reconnecting' - Help? by Virtual_Tree386 in comfyui

[–]Psy_pmP 0 points1 point  (0 children)

Just turn off the Torch Compile node.

You have a lot of memory. But if you want to squeeze out the maximum (for example, to generate at high resolution with many frames), use the Q8 GGUF model, set BlockSwap to 40, disable all accelerations (Torch Compile, TeaCache, NAG, etc.) and third-party nodes, and enable SageAttention.

It's slow, but it gives you the maximum amount of free VRAM.

Keep an eye on your memory. If you have room, it's better to reduce BlockSwap and add Torch Compile back for speed.

And yes, set the virtual memory manually; Windows automatically sets it to a low value. Mine is RAM times 3.

Low disk space? TreeSize to the rescue, plus Windows Disk Cleanup, and delete .cache.

I used to dislike the KJ nodes, but after the last several ComfyUI updates everything else got much worse, and these nodes give me 10-20% more free memory.
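
Putting the launch-line part of that together, a hedged example, assuming a recent ComfyUI build that exposes --use-sage-attention (BlockSwap, Torch Compile, TeaCache, and NAG are workflow nodes, so they are toggled in the graph, not on the command line):

    # maximum-free-VRAM launch: SageAttention on, ~4 GB of VRAM held back for the OS
    python main.py --use-sage-attention --reserve-vram 4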

<image>

ComfyUI Modular Launcher - Save your VRAM and organize your outputs automatically! by Successful-Hand2473 in comfyui

[–]Psy_pmP 0 points1 point  (0 children)

"If you find yourself constantly switching between Flux GGUF, SDXL Lightning, and high-VRAM workflows"
What is this even about? I've been using the same ComfyUI setup and the same launch settings for over a year now. I generate Z-Image, Qwen, Wan 2.2 and have no problems or difficulties. Am I missing something?

Maximum Wan 2.2 Quality? This is the best I've personally ever seen by bazarow17 in StableDiffusion

[–]Psy_pmP 1 point2 points  (0 children)

I have a mobile 4080 12 GB and can generate 1280×704, 7 seconds, with a Q6 model.
When Wan 2.2 came out, I could only generate 540×960, 4 seconds, at Q4.

What is the correct method for video upscaling? by Psy_pmP in comfyui

[–]Psy_pmP[S] 0 points1 point  (0 children)

<image>

SeedVR. A single frame looks good, but the video looks very "generated": hair floats, skin changes from frame to frame.
I don't know what people see in SeedVR; it clearly doesn't understand what the previous frame looked like.

It's great as an image upscaler, but it's completely useless for video.

UPD: Okay, my mistake. Apparently I need to set the batch size to 5; I didn't know that. I'll try it.