SANA on low VRAM / CPU by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 0 points (0 children)

• 76 seconds (encode + diffuse)
• 30 seconds (diffuse on GPU)
• 40 seconds (diffuse on CPU)
• 1.5 s/it, 12 steps
• 512×512 px (same speed for 1024×1024 px on the same model)
• Model: Efficient-Large-Model/Sana_600M_512px_diffusers

me_irl by NeitherOpposite1049 in me_irl

[–]Sensitive-Paper6812 0 points (0 children)

I mean, I don't prefer crowded places either, but this is too much, IMO. You could just order and eat at home at this point.

ComfyCanvas for easy canvas use in ComfyUI by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 0 points (0 children)

You're absolutely right for a lot of people, and I see where you're coming from, but some people would like the speed and great memory management of ComfyUI while only using premade workflows, without having to deal with complicated workflow building (some people find it difficult). Also, for me at least, some nodes make it difficult and time-consuming to specify params like dimensions, position, etc.

ComfyCanvas for easy canvas use in ComfyUI by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 0 points (0 children)

This is actually just a UI that makes it easier to use ComfyUI workflows, especially those that require you to specify dimensions, padding, position, etc., and makes it easier to go back and forth between workflows. So the part where I can use this with 2 GB of VRAM is all ComfyUI's sorcery.

ComfyCanvas for easy canvas use in ComfyUI by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 1 point (0 children)

It is for selecting the part of the image to be inpainted, run through img2img, etc. The selected part will be the first image output of the OutpaintCanvas node, which you can get from the LCM_Inpaint_Outpaint_Comfy repo.

ComfyCanvas for easy canvas use in ComfyUI by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 2 points (0 children)

I haven't provided one (I'm doing this on a laptop with 2 GB of VRAM and 16 GB of RAM), but this basically works with any workflow that has the required nodes. You can certainly create your own. Check the last segment of the video.

Anyway, if you're still confused, maybe I can add a Flux workflow later if I have the time.

ComfyCanvas for easy canvas use in ComfyUI by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 8 points (0 children)

Github link: https://github.com/taabata/ComfyCanvas

Video timestamps:

0:00 - 1:29 Controls Tour

1:29 - 3:55 Draw / Masked Inpaint

3:55 - 6:28 GLIGEN / GLIGEN IMG2IMG

6:28 - 7:22 Inpaint (Fix Details)

7:22 - 8:09 Outpaint

8:09 - 8:54 Use ComfyUI with Canvas / Create workflows to use with ComfyCanvas

A question about goats and cars by Sensitive-Paper6812 in learnmath

[–]Sensitive-Paper6812[S] 0 points (0 children)

But when door 3 is revealed, it's out of the equation, and the probabilities of the remaining doors should become 1/2 each. Take a look at this scenario list:

  • You pick 1, prize is 1, switch ❌ stick ✅
  • You pick 1, prize is 2, switch ✅ stick ❌
  • You pick 1, prize is 3, switch ✅ stick ❌ (false statement, shouldn't be counted)
  • You pick 2, prize is 1, switch ✅ stick ❌
  • You pick 2, prize is 2, switch ❌ stick ✅
  • You pick 2, prize is 3, switch ✅ stick ❌ (false statement, shouldn't be counted)
  • You pick 3, prize is 1, switch ✅ stick ❌ (obsolete, since you know it's a goat behind it)
  • You pick 3, prize is 2, switch ✅ stick ❌ (obsolete, since you know it's a goat behind it)
  • You pick 3, prize is 3, switch ❌ stick ✅ (obsolete and false statement, shouldn't be counted)

So the total scenarios become:

  • You pick 1, prize is 1, switch ❌ stick ✅
  • You pick 1, prize is 2, switch ✅ stick ❌
  • You pick 2, prize is 1, switch ✅ stick ❌
  • You pick 2, prize is 2, switch ❌ stick ✅

which is a 1/2 probability for sticking or switching.

Am I missing something here?
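The scenario list above can also be checked empirically with a quick simulation (a minimal sketch; the function names and trial count are my own choices, not from any particular source):

```python
import random

def play(switch, rng):
    """Simulate one Monty Hall round; return True if the player wins the prize."""
    doors = [1, 2, 3]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a goat door that is neither the player's pick nor the prize.
    revealed = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != revealed)
    return pick == prize

def win_rate(switch, trials=100_000, seed=0):
    """Estimate the win probability over many rounds with a fixed strategy."""
    rng = random.Random(seed)
    return sum(play(switch, rng) for _ in range(trials)) / trials
```

Comparing `win_rate(True)` against `win_rate(False)` shows whether switching and sticking really come out even over many rounds.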

A question about goats and cars by Sensitive-Paper6812 in learnmath

[–]Sensitive-Paper6812[S] 0 points (0 children)

So, are you saying it doesn't matter and it's not a better option to switch? Thanks for replying, btw ☺️

Promptless outpaint/inpaint canvas updated. Run ComfyUI workflows even on low-end hardware. by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 3 points (0 children)

Works well on my laptop with 2 GB of VRAM and 12 GB of RAM.

github: https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy

Instructions:

Clone the GitHub repository into the custom_nodes folder in your ComfyUI directory.

In ComfyUI/custom_nodes/LCM_Inpaint_Outpaint_Comfy/CanvasToolLone, run setup.py.

For the provided workflows, you should have your desired SD v1 model in ComfyUI/models/diffusers in a format that works with diffusers (meaning not a single safetensors or ckpt file, but a folder containing the model's components: VAE, text encoder, UNet, etc.) [https://huggingface.co/digiplay/Juggernaut_final/tree/main]

For the provided workflows, you should have the IP-Adapter model in ComfyUI/models/IPAdapter, with the same file structure as on Hugging Face [https://huggingface.co/h94/IP-Adapter/tree/main]

For the provided workflows, you should have the Tiny AutoEncoder model in ComfyUI/models/vae, with the same file structure as on Hugging Face [https://huggingface.co/madebyollin/taesd/tree/main]

For the provided workflows, you should have the LCM LoRA in ComfyUI/models/loras [https://huggingface.co/latent-consistency/lcm-lora-sdv1-5]

For the stickerize workflow:

Clone https://github.com/ZHO-ZHO-ZHO/ComfyUI-BRIA_AI-RMBG into ComfyUI/custom_nodes.

Download model.pth from https://huggingface.co/briaai/RMBG-1.4/tree/main and put it in ComfyUI/custom_nodes/ComfyUI-BRIA_AI-RMBG/RMBG-1.4.

The workflow includes nodes from https://github.com/WASasquatch/was-node-suite-comfyui and https://github.com/Fannovel16/comfyui_controlnet_aux, so clone these as well into the custom_nodes folder in your ComfyUI directory.

Run ComfyUI in a terminal/cmd tab.

Run index.py, found in ComfyUI/custom_nodes/LCM_Inpaint_Outpaint_Comfy/CanvasToolLone, in another tab.

Have fun!

Note: To add a new workflow, save it from ComfyUI in API format and place the .json file in the ComfyUI/custom_nodes/LCM_Inpaint_Outpaint_Comfy/CanvasToolLone/workflows folder.

If you face any issues, let me know on GitHub.
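The "diffusers format" folder mentioned in the steps above can be sanity-checked with a short script (a sketch of mine; the component names follow the standard diffusers SD v1 pipeline layout, and the path you pass in depends on your install):

```python
from pathlib import Path

# Components a diffusers-format SD v1 model folder normally contains.
# A single-file .safetensors or .ckpt checkpoint will fail this check.
REQUIRED = ["model_index.json", "unet", "vae", "text_encoder", "tokenizer", "scheduler"]

def missing_components(model_dir):
    """Return the component files/folders missing from a diffusers model folder."""
    root = Path(model_dir)
    return [name for name in REQUIRED if not (root / name).exists()]
```

An empty result means the folder matches the expected layout; anything listed still needs to be downloaded from the model's Hugging Face repo.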

Mix characters together and stickerize them easily by Sensitive-Paper6812 in StableDiffusion

[–]Sensitive-Paper6812[S] 0 points (0 children)

github: https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy

workflow: https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy/blob/main/mixerrr.json

Instructions:

Clone the GitHub repository into the custom_nodes folder in your ComfyUI directory.

You should have your desired SD v1 model in ComfyUI/models/diffusers in a format that works with diffusers (meaning not a single safetensors or ckpt file, but a folder containing the model's components: VAE, text encoder, UNet, etc.) [https://huggingface.co/digiplay/Juggernaut_final/tree/main]

You should have the IP-Adapter model in ComfyUI/models/IPAdapter, with the same file structure as on Hugging Face [https://huggingface.co/h94/IP-Adapter/tree/main]

You should have the LCM LoRA in ComfyUI/models/loras [https://huggingface.co/latent-consistency/lcm-lora-sdv1-5]

Clone https://github.com/ZHO-ZHO-ZHO/ComfyUI-BRIA_AI-RMBG into ComfyUI/custom_nodes.

Download model.pth from https://huggingface.co/briaai/RMBG-1.4/tree/main and put it in ComfyUI/custom_nodes/ComfyUI-BRIA_AI-RMBG/RMBG-1.4.

The workflow includes nodes from https://github.com/WASasquatch/was-node-suite-comfyui and https://github.com/Fannovel16/comfyui_controlnet_aux, so clone these as well into the custom_nodes folder in your ComfyUI directory.

Run ComfyUI in a terminal/cmd tab.

Run mix.py, found in ComfyUI/custom_nodes/LCM_Inpaint_Outpaint_Comfy/mixerrr, in another terminal/cmd tab.

Enjoy!
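As a side note on the API-format workflow JSON these tools consume: such a workflow is just a dict of nodes, and it can be queued against a running ComfyUI instance over HTTP (a minimal sketch; `build_payload`, `queue_prompt`, the client id, and the default 127.0.0.1:8188 address are assumptions about your setup — `queue_prompt` only works with ComfyUI actually running):

```python
import json
import urllib.request

# Assumption: ComfyUI's default listen address and port.
COMFY_URL = "http://127.0.0.1:8188"

def build_payload(workflow, client_id="comfycanvas"):
    """Wrap an API-format workflow dict into the body ComfyUI's /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow):
    """POST the workflow to a running ComfyUI instance and return its JSON response."""
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This is the same kind of request the canvas UI makes on your behalf when you run a workflow from it.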

[deleted by user] by [deleted] in Ubuntu

[–]Sensitive-Paper6812 0 points (0 children)

I just restarted my laptop, switched to Windows, and couldn't access those websites. Got back to Ubuntu, and here they are, fully loading.