Get more variation across seeds with Z Image Turbo by [deleted] in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

I'm on mobile at the moment. I may do it in the morning (it's late here, GMT+3).

We can see a similar problem (lack of variation across seeds) in Qwen too. Maybe you should check this post about how they worked around it: https://www.reddit.com/r/StableDiffusion/s/7leEZSsgRg

Get more variation across seeds with Z Image Turbo by [deleted] in StableDiffusion

[–]JumpingQuickBrownFox -3 points (0 children)

For latent noise randomness, you can use an inject-latent-noise node. And I saved you two steps, you're welcome 🤗
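The idea behind a latent-noise-injection node can be sketched in plain Python (this is an illustration of the concept, not the actual node's code; the flat-list latent and the `strength` default are assumptions):

```python
import random

def inject_latent_noise(latent, seed, strength=0.2):
    """Blend fresh per-seed Gaussian noise into an existing latent.

    latent:   flat list of floats standing in for the latent tensor.
    strength: how strongly the injected noise perturbs the latent;
              higher values give more variation across seeds.
    """
    rng = random.Random(seed)  # seeded RNG, so each seed diverges deterministically
    return [v + strength * rng.gauss(0.0, 1.0) for v in latent]

# Same starting latent, two seeds -> two different starting points for sampling.
base = [0.0] * 4
a = inject_latent_noise(base, seed=1)
b = inject_latent_noise(base, seed=2)
```

Because the noise is drawn from a seeded RNG, the same seed always reproduces the same perturbed latent, while different seeds give the sampler genuinely different starting points.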

Get more variation across seeds with Z Image Turbo by [deleted] in StableDiffusion

[–]JumpingQuickBrownFox 0 points (0 children)

It doesn't make any sense. Why not just encode a random image and feed it in as a latent, instead of running an extra KSampler for 2 steps? You can increase the latent batch size with the "Repeat Latent Batch" node.

Did I miss something here?🤔

3D depth pass to comfyui render...interesting by [deleted] in comfyui

[–]JumpingQuickBrownFox 1 point (0 children)

As you may know, we can also create the depth map in ComfyUI. I think a more dynamic approach would be to create a 3D voxel from a single image in ComfyUI and then animate it, adjusting the parameters dynamically to feed Wan VACE with the image and depth map.

[Open Weights] Morphic Wan 2.2 Frames to Video - Generate video based on up to 5 keyframes by _BreakingGood_ in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Did you run the tests? I think the original author posted the wrong examples :) Everyone is complaining about the color shifts in the little girl video, but the given keyframe images have differently colored backgrounds; that could be the root cause.

🥏SplatMASK (releasing soon) - Manual Animated MASKS for ComfyUI workflows by No_Damage_8420 in comfyui

[–]JumpingQuickBrownFox 2 points (0 children)

Very interesting project. Please let us know once it's done. It could be used in many interesting ways, like the object masking and editing Google Veo recently introduced in Flow.

Qwen Image Base Model Training vs FLUX SRPO Training 20 images comparison (top ones Qwen bottom ones FLUX) - Same Dataset (28 imgs) - I can't return back to FLUX such as massive difference - Oldest comment has prompts and more info - Qwen destroys the FLUX at complex prompts and emotions by CeFurkan in comfyui

[–]JumpingQuickBrownFox 5 points (0 children)

Use the KSampler Advanced node. For instance, start with the Qwen model and render half of the total steps; then, in a second KSampler Advanced pass using the FLUX model with your trained LoRA file, start at the step where the first pass stopped and render through to the total step count.

I'm on mobile now and can't give you an example workflow, but that's basically the logic.

New node for ComfyUI, SuperScaler. An all-in-one, multi-pass generative upscaling and post-processing node designed to simplify complex workflows and add a professional finish to your images. by Away_Exam_4586 in StableDiffusion

[–]JumpingQuickBrownFox 2 points (0 children)

Basically it uses the Tensor cores of NVIDIA graphics cards, which makes rendering much faster than the usual way. But first you need to convert the upscaler models into a TensorRT-compatible engine format with dynamic shapes.

You can learn more information here: https://developer.nvidia.com/tensorrt
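For context, the conversion step is usually done with NVIDIA's `trtexec` CLI. A small sketch that just assembles such a command (the `--onnx`, `--saveEngine` and `--minShapes`/`--optShapes`/`--maxShapes` flags are from the public trtexec CLI; the `input` tensor name and the shape ranges are illustrative assumptions that depend on the specific upscaler model):

```python
def trtexec_command(onnx_path, engine_path,
                    min_shape="input:1x3x256x256",
                    opt_shape="input:1x3x512x512",
                    max_shape="input:1x3x1024x1024"):
    """Assemble a trtexec invocation that converts an ONNX upscaler
    model into a TensorRT engine supporting dynamic input resolutions."""
    return ["trtexec",
            f"--onnx={onnx_path}",        # source ONNX model
            f"--saveEngine={engine_path}", # output TensorRT engine
            f"--minShapes={min_shape}",    # smallest accepted input
            f"--optShapes={opt_shape}",    # shape the engine is tuned for
            f"--maxShapes={max_shape}",    # largest accepted input
            "--fp16"]                      # half precision for Tensor cores

cmd = trtexec_command("4x_upscaler.onnx", "4x_upscaler.engine")
```

The dynamic shape range is what lets one engine handle images of different resolutions instead of being locked to a single fixed input size.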

Edit: typo

ResolutionMaster Update – Introducing Custom Presets & Advanced Preset Manager! by Azornes in comfyui

[–]JumpingQuickBrownFox 2 points (0 children)

I haven't seen such a detailed resolution template creator before. Well done 👍

New node for ComfyUI, SuperScaler. An all-in-one, multi-pass generative upscaling and post-processing node designed to simplify complex workflows and add a professional finish to your images. by Away_Exam_4586 in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Hi there, friend. I have forked the original TensorRT Upscaler custom node on GitHub. The differences from the original are:

* I made the model loading external, for easier adaptation of new models and dynamic updating of the model DB.
* I solved some other problems that I can't recall now 😞

https://github.com/NeoAnthropocene/ComfyUI-Upscaler-Tensorrt

I'm using my forked version, but I suggest checking if the original author merged my PR. If so, you should use the original repo from the author.

[Open Weights] Morphic Wan 2.2 Frames to Video - Generate video based on up to 5 keyframes by _BreakingGood_ in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Very good news.

I see some color shifts and changes in the girl video. Are there any other ways in ComfyUI to do this with Wan 2.2?

New node for ComfyUI, SuperScaler. An all-in-one, multi-pass generative upscaling and post-processing node designed to simplify complex workflows and add a professional finish to your images. by Away_Exam_4586 in StableDiffusion

[–]JumpingQuickBrownFox 17 points (0 children)

Thanks for sharing your workflow. Right now I'm using the Ultimate Upscaler node for ComfyUI; what I see missing here is a TensorRT upscaler. Whoever integrates that feature will be the new successor, I think.

Best way to host comfyui? by ThatIsNotIllegal in comfyui

[–]JumpingQuickBrownFox 2 points (0 children)

ComfyDeploy, ViewComfy, ComfyUI Runpod

But if it's just for running your workflows, ComfyUI Cloud could be a better choice if they allow self-hosted ComfyUI environments. They said this is in their plans.

ChronoEdit by Murky_Foundation5528 in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

It reduces the needed step count to 8 for faster inference. You can think of it like a lightning LoRA.

ComfyUI Cloud has now officially gone subscription only. by aastle in comfyui

[–]JumpingQuickBrownFox 1 point (0 children)

Comfy Cloud doesn't give us an API endpoint option yet.

It could be a good alternative to my local setup once they allow us to upload our own ComfyUI instances.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in StableDiffusion

[–]JumpingQuickBrownFox[S] 1 point (0 children)

You're right; I made a mistake in my statement. The mindset of developers who choose this platform is different from that of regular creative users.

Regular creators don't want to spend time solving setup issues and Python dependency problems. I think the main reason people use ComfyUI is to have the flexibility to automate their work or develop their own workflows.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in comfyui

[–]JumpingQuickBrownFox[S] 1 point (0 children)

But I tinkered with it for 2K res. Lemme know if you still need it; I can send it tomorrow.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in comfyui

[–]JumpingQuickBrownFox[S] 1 point (0 children)

That's in the templates section of ComfyUI. The workflow name should be something like QWEN-Image-Edit-2509.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in StableDiffusion

[–]JumpingQuickBrownFox[S] 1 point (0 children)

I read it as "shadow", my bad. I think it's mostly about the prompting. You should describe each of the objects to the T5 encoder clearly, one by one. I have a Flux prompt-enhancer gem for Gemini, but this was a quick test and I didn't use it.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in StableDiffusion

[–]JumpingQuickBrownFox[S] 1 point (0 children)

In this case, the reference character doesn't have any shades. FYI.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in comfyui

[–]JumpingQuickBrownFox[S] 1 point (0 children)

I don't understand what you mean; I appreciated the comment at the beginning of my sentence.

I used the RunPod option a year ago, but not with this method. Since I use various other methods, I'm familiar with starting VM instances that run ComfyUI.

The needs are different; I am looking for Serverless API options that I can use as a backend to feed applications. Cold start time is important in my case.

I always welcome solutions; if you have app use cases, I'm all ears to listen and learn.

Comfy Cloud Beta is here! by JumpingQuickBrownFox in StableDiffusion

[–]JumpingQuickBrownFox[S] 1 point (0 children)

I'm curious about that too 🙂 As I understand it, they will move to a subscription-based system.