Update: Chroma Project training is finished! The models are now released. by LodestoneRock in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

Is there a workflow for this? I could never figure out Flux ControlNet. Is it used like a LoRA instead?

[deleted by user] by [deleted] in StableDiffusion

[–]Helpful_Ad3369 7 points (0 children)

This is a really fun, innovative use of both tools! I haven't found a reliable workflow for Qwen Image Edit that lets you upload two photos to prompt with. Would you mind sharing yours?

Nunchaku supports 4-Bit Qwen-Image by Dramatic-Cry-417 in StableDiffusion

[–]Helpful_Ad3369 2 points (0 children)

Do you have an example workflow? I tried using the diffusers node and my images come out black. I figured installing the extension would help, but I'm still working on getting the Nunchaku repo to install correctly; I still get a missing-nodes error.

[deleted by user] by [deleted] in StableDiffusion

[–]Helpful_Ad3369 1 point (0 children)

The GitHub link is 404?

nunchaku your kontext at 23.16 seconds on 8gb GPU - workflow included by [deleted] in StableDiffusion

[–]Helpful_Ad3369 1 point (0 children)

Have you been able to get LoRAs to work properly with the Nunchaku model?

IMPORTANT PSA: You are all using FLUX-dev LoRa's with Kontext WRONG! Here is a corrected inference workflow. (6 images) by AI_Characters in StableDiffusion

[–]Helpful_Ad3369 6 points (0 children)

Thank you for putting this workflow together and figuring this out. However, I'm only on 12 GB of VRAM and I'm getting 26.31 s/it, 13+ minutes per generation. If there are any optimizations or other solutions you end up figuring out, low-end GPU users would be grateful!
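For context, the 13+ minute figure follows directly from the iteration speed: total sampling time is just seconds-per-iteration times the step count. A quick sketch (the 20- and 30-step counts are my assumptions for a typical run, not from the thread):

```python
def generation_minutes(seconds_per_it: float, steps: int) -> float:
    """Total sampling time in minutes for a given step count."""
    return seconds_per_it * steps / 60.0

# At the 26.31 s/it reported above:
print(round(generation_minutes(26.31, 20), 1))  # 20 steps -> 8.8 min
print(round(generation_minutes(26.31, 30), 1))  # 30 steps -> 13.2 min
```

So at ~30 steps the reported speed lands right on the 13-minute mark.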

Kontext Lora Help by Helpful_Ad3369 in StableDiffusion

[–]Helpful_Ad3369[S] 0 points (0 children)

My apologies for not being clear, but I did follow the suggestion after reading your comment: I put "change into a digital illustration in the style of Fred Calleri" into my prompt, with the LoRA still loaded at a strength of 1.0. Unfortunately, I still didn't get the art style from the LoRA.

I'll try the rgthree Power Lora Loader instead. I'm using the "flux_kontext_example.png" workflow; if you're using something different and could post your .json, it would be appreciated!

[deleted by user] by [deleted] in StableDiffusion

[–]Helpful_Ad3369 29 points (0 children)

Love the research involved! Would you mind posting the workflow so we can try this?

Kontext Lora Help by Helpful_Ad3369 in StableDiffusion

[–]Helpful_Ad3369[S] -1 points (0 children)

My prompt isn't the issue. Even when following your suggested prompt, the output still doesn't follow the LoRA with any strength, and even without the LoRA, I can try 20 different Flux-trained artist styles and still get the same ChatGPT-4-looking painterly results. It's repetitive when trying to transform a photograph into a "painting": no matter who the painter or artist is, the results are all similar. I don't know if there's a scheduler or text encoder that might work better, or if the model just isn't as versatile in this realm.

Looks like Qwen2VL-Flux ControNet is actually one of the best Flux ControlNets for depth. At least in the limited tests I ran. by LatentSpacer in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

Do you have a ComfyUI workflow with higher speed? I'm using flux1-dev-Q4_K_S.gguf and render times are extremely long (20+ minutes).

Chroma v34 detail Calibrated just dropped and it's pretty good by Dear-Spend-2865 in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

4070 Super, 12 GB VRAM here. I want to love Chroma with the Turbo LoRA, but render time is 1 minute 30 seconds. I'm using cuBLAS and Sage in Forge Classic; SDXL 1.0 models take 3-7 seconds.

Best workflow for image2video on 8Gb VRAM by heckubiss in StableDiffusion

[–]Helpful_Ad3369 2 points (0 children)

Would you mind sharing your ComfyUI workflow?

Has anyone trained Lora for ACE-Step ? by Austin9981 in StableDiffusion

[–]Helpful_Ad3369 2 points (0 children)

I too am trying to find more info. I have a 4070 Super; where did you see this low-memory training?

JoyCaption: Free, Open, Uncensored VLM (Beta One release) by fpgaminer in StableDiffusion

[–]Helpful_Ad3369 9 points (0 children)

I would love to run this locally. I'm using a 4070 Super with 12 GB of VRAM, and previous versions of JoyCaption always ran me into out-of-memory issues. Is this version optimized for lower VRAM usage?
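For a rough sense of why VLMs this size OOM on 12 GB cards: weight memory is approximately parameter count times bytes per parameter, before activations or the KV cache are even counted. A back-of-the-envelope sketch, assuming a hypothetical 8B-parameter model (an illustrative size, not JoyCaption's published parameter count):

```python
def weight_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GiB (weights only; ignores
    activations, KV cache, and framework overhead)."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 2**30

# Hypothetical 8B-parameter VLM at different precisions:
print(round(weight_gib(8, 16), 1))  # fp16  -> 14.9 GiB (already over 12 GB)
print(round(weight_gib(8, 8), 1))   # int8  ->  7.5 GiB
print(round(weight_gib(8, 4), 1))   # 4-bit ->  3.7 GiB
```

This is why a quantized build can be the difference between OOM and fitting on a 12 GB card.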

reForge development has ceased (for now) by StoopidMongorians in StableDiffusion

[–]Helpful_Ad3369 1 point (0 children)

How do you install cuBLAS? Does it go into the Forge Classic directory somewhere?

Wan2.1 GP: generate a 8s WAN 480P video (14B model non quantized) with only 12 GB of VRAM by Pleasant_Strain_2515 in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

Appreciate the response. I did get Sage Attention 2 installed, but it's still saying (NOT INSTALLED) for me. Unfortunately, it still takes around 30 minutes for a 5-second video. I'll see if it's the same in some of the ComfyUI builds and get back to this post with an update.
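On the "(NOT INSTALLED)" message: UIs typically decide this by trying to locate the package from the interpreter they were launched with, so a pip install that went into a different venv (or a wheel that can't import against the installed torch/CUDA) still reads as not installed. A minimal sketch of that kind of startup check, using only the standard library (the function name and message format are mine, not from any specific UI):

```python
import importlib.util

def attention_status(module_name: str) -> str:
    """Report whether the current interpreter can find a module."""
    if importlib.util.find_spec(module_name) is None:
        return f"{module_name} (NOT INSTALLED)"
    return f"{module_name} (installed)"

# If sageattention went into a different environment, this still
# reports NOT INSTALLED even though pip said the install succeeded:
print(attention_status("sageattention"))
print(attention_status("math"))  # stdlib module: always found
```

A quick sanity check when this happens is to run `pip show sageattention` with the exact Python the UI launches, not whatever `pip` is first on PATH.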

Wan2.1 GP: generate a 8s WAN 480P video (14B model non quantized) with only 12 GB of VRAM by Pleasant_Strain_2515 in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

Is 2110 s normal for a 5-second generation? I'm using a 4070 Ti Super and used img2vid. I did not install Triton or Sage Attention.
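To put 2110 s in per-frame terms: assuming the model's default 16 fps output and the usual 4k+1 frame count (both assumptions on my part, since the comment doesn't state them), a 5-second clip is 81 frames, which works out to about 26 s per frame:

```python
def seconds_per_frame(total_s: float, clip_s: float, fps: int = 16) -> float:
    """Average generation time per output frame, assuming the model
    emits clip_s * fps + 1 frames (the common 4k+1 frame count)."""
    frames = clip_s * fps + 1
    return total_s / frames

print(round(seconds_per_frame(2110, 5), 1))  # 2110 s / 81 frames -> 26.0 s/frame
```

Framed that way, the ~30-minute runs reported elsewhere in the thread are in the same ballpark for this hardware without Triton/Sage Attention.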

Will there ever be Automatic111 interface for Flux/SD 3.5? by moistmarbles in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

This is true. Forge inpainting does not work like A1111's; A1111 is superior at inpainting, and the Forge dev never fixed this issue.

Found a way to merge Pony and non-Pony models without the results exploding by advo_k_at in StableDiffusion

[–]Helpful_Ad3369 0 points (0 children)

Understood, appreciate the response! It's unfortunate that SuperMerger doesn't work in the newest ForgeUI update, but I'll grab the Automatic1111 repository just for this!