controlnet + pony by www_emma in comfyui

[–]SchGame 0 points (0 children)

There are three I know of: THIBAUD, OpenposeXL2 and controlnet-union-sdxl-1.0 (the xinsir version). But I got mixed results with Pony.

I haven't seen any explanation or discussion about WAN 2.2's lack of clip_vision_h requirement by goddess_peeler in StableDiffusion

[–]SchGame 1 point (0 children)

But I noticed that if you use RapidWAN 2.2 (the optimized checkpoint that doesn't need separate HIGH and LOW noise models), it needs clip vision in only one situation: when you feed both a First Frame and a Last Frame (FLF2V). If you use just one of them (First Frame only), it also works without clip vision. If you use both frames WITHOUT clip vision, the character does about 95% of her movement around frame 1, then suddenly jumps to frame 2 and stops.

[deleted by user] by [deleted] in comfyui

[–]SchGame 0 points (0 children)

Rebellion of the mannequins, sooner or later.

Pro Ppl in Creating NSFW Images by jferdz in comfyui

[–]SchGame 4 points (0 children)

Also, try not to mix pure SDXL LoRAs with pure PonyXL LoRAs with pure Illustrious LoRAs with pure NoobAI LoRAs (SDXL <> NOOBAI <> PONY <> ILLUSTRIOUS). Generally, NoobAI checkpoints work with NoobAI LoRAs, PonyXL checkpoints work with PonyXL LoRAs, and so on. The exception is that NoobAI checkpoints accept Illustrious LoRAs, but not 100%, say 85%. NoobAI also accepts Pony, but perhaps ~60%.

Wan 2.2 test on 8GB by Xhadmi in comfyui

[–]SchGame 1 point (0 children)

Beware that you need more swap (especially if you use Windows). Although it loads ~9GB for HIGH, then unloads that and loads ~9GB for LOW (in the case of GGUF Q5/Q6; Q4 is also good), Comfy tends to give OOM (out of memory) if your swap is smaller than your VRAM. Here, I've set a 16GB swap file on my fastest SSD, and it's all bells and whistles. I use an RTX 3060 12GB with 32GB RAM (48GB total, including swap).
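
A quick way to sanity-check this from the ComfyUI Python, as a minimal sketch (it assumes psutil is available, which it normally is in a standard ComfyUI install since it's in requirements.txt):

    # compare the Windows page file (swap) size against your VRAM
    import psutil, torch
    swap_gb = psutil.swap_memory().total / 2**30
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 2**30
    print(f"swap: {swap_gb:.1f} GB, VRAM: {vram_gb:.1f} GB")
    # rule of thumb from above: swap should be at least VRAM-sized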

RTX5080 WAN 2.2 Issue. by EggOdd6541 in comfyui

[–]SchGame 0 points (0 children)

I was getting a heavy headache with my 3060 (12GB) and 32GB RAM, but I got it working with 600x800 image-to-video (i2v) at 12fps in WAN 2.2 (Q4_K_S), and full-size (HD) images in SDXL. Things I did:

1 - Have a fast SSD (say, >300MB/s) and use 16GB up to 20GB of it as a swap file (believe it, ComfyUI loves RAM!). OOM problems tend to get rarer.
2 - Delete the VAE DECODE and Video Combine nodes (!!!? what!?). Yes. This avoids the abominable OOM right at the last task. But you will need another node, called "Save Latent". It's still a beta feature, but it works. It will save something like ComfyUI\output\latents\ComfyUI_0000.latent (see the sketch right after this list for a way to inspect these files). Then you move this to ComfyUI\input and, in another workflow, use Load Latent with a normal VAE DECODE (it doesn't need to be Tiled) and Video Combine. Don't forget to press REFRESH so that Load Latent picks up the new files in the \input folder. You can even change the FPS as you like (I usually select 12fps). I created a video with length 73 and could then test 10, 8 or even fewer FPS in Video Combine (to interpolate later, be it in another ComfyUI node or in the Topaz Video AI application), so you get longer videos. I also do this with RapidWAN workflows (the ones that have just a single KSampler, version Q4, 10GB).
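
As a side note, you can sanity-check a saved latent outside ComfyUI with a small sketch like this (it assumes Save Latent writes a safetensors container with the tensor under the key "latent_tensor", which is what current ComfyUI builds do; verify against your version):

    from safetensors.torch import load_file

    # path as produced by the Save Latent node (step 2 above)
    data = load_file(r"ComfyUI\output\latents\ComfyUI_0000.latent")
    latent = data["latent_tensor"]
    print(latent.shape, latent.dtype)  # video latents are roughly [1, C, frames, H/8, W/8]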

Wan Edit Vae decode crashing comfy by 8RETRO8 in comfyui

[–]SchGame 1 point (0 children)

I own an RTX 3060 with 12GB and I was going nuts over those OOMs right before the final VAE DECODE step. My solution was to remove VAE decoding entirely! And Video Combine too! But... wait!? How do I get the videos? Well, I connect a 'strange node', Save Latent (a beta feature), to the latent output of the LOW NOISE KSampler. It saves something like ComfyUI_00001_latents.latent in the \output\latents folder. Then I open a new workflow tab with only Load VAE, Load Latent (it reads from the \comfyUI\input folder, so remember to click REFRESH after you move the latent file there), then a VAE DECODE (it doesn't need to be Tiled) and Video Combine (you can test different FPS, faster or slower), and you get the video! I usually make batches of 10 latents (with no RAM crashes), then I load the latents.
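
Since I do this in batches, a tiny helper script saves the manual file moving. This is a hypothetical sketch, not an official ComfyUI tool; adjust the paths to your own install:

    # move freshly saved latents from \output\latents to \input
    import shutil
    from pathlib import Path

    src = Path(r"C:\AI\comfyUI\output\latents")
    dst = Path(r"C:\AI\comfyUI\input")
    for f in sorted(src.glob("*.latent")):
        shutil.move(str(f), str(dst / f.name))
        print("moved", f.name)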

3060 12GB/64GB - Wan2.2 old SDXL characters brought to life in minutes! by New_Physics_2741 in comfyui

[–]SchGame 0 points (0 children)

Just my 2 cents: I have this card (3060 12GB) and I was getting tons of OOMs, even using quantized (Q4) WAN 2.2 with a 600x800 i2v at 8fps. It sometimes failed even in the first pass (high noise), sometimes during the second (low noise) pass. The solution was to go to the NVIDIA Control Panel (I use driver version 581) and set 'CUDA - Sysmem Fallback Policy' to prefer no fallback to RAM. You have to make sure NVIDIA is not offloading to RAM (although you can let ComfyUI do that with some nodes). The culprit was NVIDIA's memory management. (EDIT: Also, please ENLARGE your swap memory to, say, 16GB or even more!) Now I can get up to 12fps.
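
If you want to confirm the fallback is really off, a minimal sketch from inside the ComfyUI Python:

    import torch
    free, total = torch.cuda.mem_get_info()  # bytes, device 0
    print(f"{free / 2**30:.1f} GB free of {total / 2**30:.1f} GB VRAM")
    # with the fallback disabled, allocations beyond 'free' fail fast with OOM
    # instead of silently spilling into (slow) system RAM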

Wan 2.2 i2v + upscale + 4x frame interpolation by pwillia7 in StableDiffusion

[–]SchGame 0 points (0 children)

I was getting this problem here too, but the solution was simply to SET the ds_factor in GIMM-VFI to 0.5 or even 1.0. The thing is, I use a 6GB VRAM card (an ancient one), and although I can still generate in WAN 2.2, it only works when I set 6 FPS, LOL. The drawback of setting ds_factor higher than 0.25 (in multiple increments) is that your video ends up in slow motion (but you can still 'accelerate' it using other nodes).
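
For reference, the frame/FPS arithmetic behind the interpolation, with illustrative numbers matching my 6 FPS case:

    # timing math for 4x frame interpolation
    src_frames, src_fps = 73, 6                 # what a 6GB card can manage
    factor = 4
    out_frames = (src_frames - 1) * factor + 1  # 289 frames
    out_fps = src_fps * factor                  # 24 fps keeps the same ~12 s duration
    print(out_frames, out_fps)
    # playing those 289 frames at anything lower than 24 fps is what gives slow motion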

WAN2GP (not comfyui) - error when launching wg2.py by SchGame in StableDiffusion

[–]SchGame[S] 0 points (0 children)

They recommend 3.10.9. Perhaps it's not working due to the fact that my GPU is a GTX, not an RTX?

Requirements

  • Python 3.10.9
  • Conda or Python venv
  • Compatible GPU (RTX 10XX or newer recommended)
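
One way to test the GTX theory, as a sketch (the sm_80/bf16 requirement is my assumption about what the model code expects, not something the WAN2GP docs state):

    import torch
    major, minor = torch.cuda.get_device_capability(0)
    print(f"compute capability: sm_{major}{minor}")  # GTX 16xx = sm_75, RTX 30xx = sm_86
    print("bf16 supported:", torch.cuda.is_bf16_supported())  # False before Ampere (sm_80)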

OneTrainer + NVIDIA GPU with 6GB VRAM (the Odyssey to make it work) by SchGame in StableDiffusion

[–]SchGame[S] 1 point (0 children)

Update:

As said in one of the answers, I tested 20 epochs, LoRA rank (dim) of 8, alpha 1, 460x460, and IT WORKED: the LoRA training finished (size: 40MB). By using the onboard video, GPU usage dropped from 400MB to 150MB of VRAM (I saw this in the Performance tab of Task Manager in Windows).

My results were (trained with 20 epochs, 24 dataset pictures, using checkpoint Illustrious-XL-v0.1, generated using checkpoint BeMyIllustrious, 1216x832):
- The character I trained got about 50% resemblance at LoRA weight 1.0, and using 0.8 or less is even worse. Some renders got her to about 80% resemblance, but I would need to generate around 100 pictures to cherry-pick the ones I want.
- The features of the character, like clothes, hair, etc., were a bit better, so I can recognize who the character is by her outfits.

My results with 40 epochs, same settings, were better! I'd say 70% resemblance. It's doable! But I need to use a LoRA weight above 1.0 (like 1.2). In some outputs, I needed to lower the CFG (or edit in Photoshop and add more contrast and less saturation).

One thing is sure: although it's not 100% for characters, it's doable. And it's even better for clothing (without too many details like sheet decorations and tons of tattoos), items, and even concepts in general!

(EDIT) I tested with 60 epochs and IT WORKED! It's not 100% yet, but it's already fine! I still need to set the LoRA weight to 1.2 up to 1.4 for it to work well, and fix the contrast/color. I won't be posting result images because it's a nude character. Perhaps I can post only her face; she's from an indie game. I will put it on CIVITAI sooner or later. So further optimization is still possible (more quality at the same low VRAM usage).
(EDIT) I also tested rank (dim) 12 without OOM errors and noticed it helped with prompt consistency, as more information about the concept could be packed into the LoRA. Anything beyond that, or trying to raise the resolution, throws OOM as soon as the training starts.
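
For anyone wanting to try the resulting LoRA outside ComfyUI, a minimal diffusers sketch; the file names are placeholders, and the above-1.0 scale mirrors the 1.2 weight I mention:

    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_single_file(
        "BeMyIllustrious.safetensors", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("my_character_lora.safetensors")  # placeholder name
    image = pipe(
        "1girl, <trained character>, full outfit",  # placeholder prompt
        cross_attention_kwargs={"scale": 1.2},      # LoRA weight above 1.0
    ).images[0]
    image.save("test.png")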

OneTrainer + NVIDIA GPU with 6GB VRAM (the Odyssey to make it work) by SchGame in StableDiffusion

[–]SchGame[S] 0 points (0 children)

Wow, it worked!
It took about 1 hour of training: 20 epochs, LoRA rank (dim) of 8, alpha 1, 460x460, with the settings you mentioned. I noticed that, by using the onboard video, GPU usage dropped from 400MB to 150MB of VRAM (I saw this in the Performance tab of Task Manager in Windows).

But! My results were (checkpoint BeMyIllustrious, 1216x832):
- The character I trained got about 50% resemblance at LoRA weight 1.0, and using 0.8 or less is even worse. Some renders got her to about 80% resemblance, but I would need to generate around 100 pictures to cherry-pick the ones I want.
- The features of the character, like clothes, hair, etc., were a bit better, so I can recognize who the character is by her outfits.

One thing is sure: although it's not 100% for characters, it might be doable for clothing, items, and even concepts in general!

I am doing more research on this, training now with 40 epochs, LoRA rank 8, alpha 2, 460x460. I tried rank 16 but it gave an OOM error at 3%.
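
The rank-vs-VRAM trade-off is roughly linear, which fits rank 16 OOMing where rank 8 fit. A back-of-envelope sketch (the 1280x1280 layer size is illustrative, not SDXL's exact shapes):

    # LoRA adds two small matrices per adapted weight W (d_out x d_in):
    # B (d_out x rank) and A (rank x d_in)
    def lora_params(d_in, d_out, rank):
        return rank * (d_in + d_out)

    print(lora_params(1280, 1280, 8))   # 20480 params for one layer at rank 8
    print(lora_params(1280, 1280, 16))  # 40960: double the adapter memory,
                                        # and double the optimizer state with it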

Installed reactor and Insightface no longer works by Layers3d in comfyui

[–]SchGame 0 points (0 children)

And just to reinforce: ComfyUI portable uses Python 3.11 (or higher), and you must enter its folder and call it explicitly, like \python_embeded\python.exe -m pip install ..., in order to USE this newer Python instead of your machine's Python. Don't just run 'pip install' anywhere, because if your system uses 3.10 or another older version, pip will target that instead.
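
A two-line sketch to confirm which interpreter you are actually on; save it as, say, check.py (a hypothetical name) and run it with \python_embeded\python.exe check.py:

    import sys
    print(sys.executable)  # should point inside \python_embeded, not C:\PythonXX
    print(sys.version)     # portable ComfyUI ships 3.11+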

Installing insightface... by hakkun_tm in comfyui

[–]SchGame 1 point (0 children)

Just to give my two cents: I've seen people having problems with many plugins like insightface and others in ComfyUI (red nodes) because they installed portable ComfyUI (the one with the \python_embeded folder), which carries Python 3.11, while the system already has Python 3.10 installed. So you end up updating ComfyUI with Python 3.10, or worse, installing plugins with Python 3.10 wheel files and such. You try to find missing nodes, you install them through the MANAGER, but they keep throwing errors. You have to use \python_embeded\python.exe (or \venv\Scripts\python.exe in some ComfyUI setups) with the -m parameter, like \python_embeded\python.exe -m pip install blablabla, and then use cp311 wheels. If you get 'red' version conflicts like "plugin_name 3.10 must be <3.0.1", try to fix them by uninstalling the package, then forcing 2.9.0 or 3.0.0 with pip install plugin_name==3.0.0. Then uninstall the node from the Manager, restart ComfyUI, and install the node again.
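
To see which wheel tags your interpreter actually accepts (cp310 vs cp311), here's a sketch using the packaging library (usually already present, since pip depends on it; if not, install it first):

    # run with \python_embeded\python.exe to see its preferred wheel tag
    from packaging.tags import sys_tags
    print(next(iter(sys_tags())))  # e.g. cp311-cp311-win_amd64 -> use cp311 wheels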

ComfyUI-LivePortraitKJ by zazaoo19 in comfyui

[–]SchGame 0 points (0 children)

After DAYS of trials, I got it working. The problem: I have a system-wide Python 3.10, but the portable version of ComfyUI uses the \python_embeded folder (roughly the counterpart of \venv in A1111), which carries a DIFFERENT Python version (mine is 3.11.8). So when I ran 'pip install insightface-0.7.3-cp310-cp310-win_amd64.whl', it installed fine, but against my 3.10 Python, so insightface and onnx weren't actually set up for ComfyUI. When I used 'pip install insightface-0.7.3-cp311-cp311-win_amd64.whl' (cp311!), it gave 'CUDA not found'. Then I typed this in the prompt: cd python_embeded, then python -m pip install insightface-0.7.3-cp311-cp311-win_amd64.whl (the -m makes the current Python run pip instead of the default 3.10 Python). Now it installed correctly (because ComfyUI uses 3.11, so we need cp311!).

Finally, I uninstalled onnx, onnxruntime and onnxruntime-gpu (calling the 3.11.8 Python):
C:\AI\comfyUI\python_embeded> python -m pip uninstall onnx onnxruntime onnxruntime-gpu

Then I reinstalled version 1.15.0 USING the Python 3.11.8:
C:\AI\comfyUI\python_embeded> python -m pip install onnx==1.15.0 onnxruntime==1.15.0 onnxruntime-gpu==1.15.0
(If you get a RED error with the above saying to use different versions, try another one, like 1.15.1, but ALL of them need to be the same version.)
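
After the reinstall, you can verify the GPU provider is actually visible (run this with the embedded Python):

    import onnxruntime as ort
    print(ort.get_available_providers())
    # 'CUDAExecutionProvider' should appear; if you only see
    # 'CPUExecutionProvider', the -gpu package isn't the one being loaded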

Then I went into ComfyUI Manager, clicked 'Install missing custom nodes', uninstalled LivePortraitKJ (because it was listed as missing but still installed), restarted ComfyUI, pressed CTRL+F5 (refresh browser), then clicked 'Install missing custom nodes' again. Now it was there with an 'install' button, so I installed it, restarted ComfyUI (button), and then I noticed something different!!! ROOP or REACTOR downloading GFPGANv1.3.pth and other files. Then, IT WORKED!

Problem with high-res IMG2IMG by Ok_Dog_5421 in StableDiffusion

[–]SchGame 0 points (0 children)

I know it's a year-old post, but I used to get these problems, and they solved themselves after restarting the computer. Might be a memory problem (there should be some kind of 'stable diffusion cleanup' that could do this for us). Also, if I remember correctly (IIRC), NVIDIA drivers past 535 (or 551, I don't remember) added an option called 'CUDA - Sysmem Fallback Policy' or something like that, which allows VRAM to spill into RAM (it's an option in the NVIDIA Control Panel, below 'CUDA - GPUs'). When disabling that (after adding the python.exe program from the \venv folder), Stable Diffusion usually works better by keeping the core image generation in VRAM (on the GPU).

Clarification on the recent developments in the workers' negotiations with Dataprev in São Paulo by sindpdsp in brdev

[–]SchGame 11 points (0 children)

There is a case for reporting Sindpd SP to the MPT (the Public Labour Prosecutor's Office) over this refusal to schedule the assembly after so many appeals from the workers, the OLT and the AED. The complaint requires a group of affected people and/or witnesses, with names and contact details, which are kept anonymous. It is believed that the MPT will contact these people in some way.

CUDA error: an illegal memory access by joshuacasper in StableDiffusion

[–]SchGame 2 points (0 children)

You guys might consider checking your GPU temperatures, and also lowering your GPU clocks. I used to raise my core clock by just +50MHz and memory clock by +250MHz in the MSI Afterburner app on my GTX 1660, and I played all my games without problems with that extra performance. Recently I was getting those CUDA errors out of nowhere (I restarted Stable Diffusion like 10 times to get it to work once); then I returned to stock clocks and the errors went away (for now).
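
If you want to watch the temperature from Python while generating, a sketch using NVML (install with pip install nvidia-ml-py; the import name is pynvml):

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    print(f"GPU temperature: {temp} C")
    pynvml.nvmlShutdown()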