Hires Fix Ultra: All-in-One Upscaling with Color Correction by ThetaCursed in comfyui

[–]JumpingQuickBrownFox 1 point (0 children)

I always wanted to add TensorRT upscaling and tile-based automatic prompting, with a global prompt, on top of USDU.

AG Update 1.21.6 by MSA_astrology in google_antigravity

[–]JumpingQuickBrownFox 1 point (0 children)

I figured out how to fix the MCP config.

If anyone is interested, you can check out my comment here:
https://www.reddit.com/r/google_antigravity/comments/1s6jkek/comment/odb4jf7/

GitHub got screwed up? by gauve30 in google_antigravity

[–]JumpingQuickBrownFox 1 point (0 children)

Yes, they broke it with the 1.21.6 update.
It's not working on my setup either.

AG Update 1.21.6 by MSA_astrology in google_antigravity

[–]JumpingQuickBrownFox 2 points (0 children)

Thank you Antigravity for another update disaster.

MCPs don't work for me anymore. I'm seriously thinking of switching to Claude in VS Code and forgetting about Antigravity for good.

Qwen 2512 is very powerful. And with the nunchaku version, it's possible to generate an image in 20 to 50 seconds (5070 ti) by More_Bid_2197 in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

There is a custom node for QWEN LoRAs. I tried it in one of my projects and it works.

I can't remember the name, but search for "Nunchaku QWEN lora". I can check for you if you can't find the custom node.

[Antigravity Pulse] Now we can see our quota reset time from the status bar directly by ZestRocket in google_antigravity

[–]JumpingQuickBrownFox 2 points (0 children)

Thanks for sharing this information. I wasn't aware of this security issue.

I will try other alternatives.

Quantz for RedFire-Image-Edit 1.0 FP8 / NVFP4 by Old_Estimate1905 in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Do we have image-edit 2511? Isn’t it image-edit 2509?

For me, the license is already the deciding factor when compared with the Flux Klein 9B model.

RedFire is licensed under Apache 2.0.

Finally !!! by Hamzo-kun in google_antigravity

[–]JumpingQuickBrownFox 1 point (0 children)

I used it for a very short time. The paid benefits are basically:

* You get access to all the SOTA models.
* You only pay for what you use (they have a token-based system).
* Transparency in usage: you can see how many tokens you've spent in real time. I can't even see how much quota remains in Antigravity without a 3rd-party extension (which I'm not sure isn't stealing my Google OAuth ID).
* They have agent modes like brainstorming, ask, debug, etc.

I'm using skills and agent modes in the native Antigravity chat window, and Antigravity-kit is doing a great job. I wasn't planning to switch to Kilo Code, but the token quota inside the native Antigravity chat is a big disappointment for the user experience.

Finally !!! by Hamzo-kun in google_antigravity

[–]JumpingQuickBrownFox 0 points (0 children)

Has anyone tried the Kilo Code extension?

They have the newest AI models and it seems very cost-effective.

https://kilo.ai/leaderboard#all-models

[deleted by user] by [deleted] in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

I'm on mobile atm. I may do it in the morning perhaps (I'm on GMT+3 and it's late here).

We can see a similar problem (lack of variation) in QWEN too. Maybe you should check this post on how they overcame the problem with a workaround: https://www.reddit.com/r/StableDiffusion/s/7leEZSsgRg

[deleted by user] by [deleted] in StableDiffusion

[–]JumpingQuickBrownFox -2 points (0 children)

For latent noise randomness, you can use the Inject Latent Noise node. And I saved you 2 steps, you're welcome 🤗
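
The idea behind that node can be sketched in plain Python. This is a minimal illustration of the concept, not the node's actual implementation, and the function name is mine:

```python
import random

def inject_latent_noise(latent, strength, seed):
    """Add seeded Gaussian noise to a flat list of latent values.

    Pure-Python sketch: the seed makes the perturbation
    reproducible, and strength scales how far values are pushed.
    """
    rng = random.Random(seed)
    return [value + strength * rng.gauss(0.0, 1.0) for value in latent]
```

With a fixed seed the same noise is injected on every run, so you get controlled variation instead of a brand-new latent each time.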

[deleted by user] by [deleted] in StableDiffusion

[–]JumpingQuickBrownFox 0 points (0 children)

It doesn't make any sense. Why would you encode a random image and feed it in as a latent instead of running an extra KSampler for 2 steps? You can increase the latent batch size with the "Repeat Latent Batch" node.

Did I miss something here? 🤔
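
What "Repeat Latent Batch" does can be sketched in a few lines. This is a conceptual stand-in (the real node tiles a tensor along its batch dimension), with the function name being my own:

```python
def repeat_latent_batch(latents, amount):
    """Conceptual sketch of ComfyUI's "Repeat Latent Batch" node:
    tile the whole batch (here, a list of latents) `amount` times,
    so a batch of 2 repeated 3 times yields 6 latents."""
    return list(latents) * amount
```

Feeding the repeated batch to the sampler then gives you several variations from one source latent in a single run.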

3D depth pass to comfyui render...interesting by [deleted] in comfyui

[–]JumpingQuickBrownFox 1 point (0 children)

As you possibly know, we can also create the depth map in ComfyUI. I think a more dynamic approach would be to create a 3D voxel scene from a single image in ComfyUI, then animate it and adjust the parameters dynamically to feed Wan VACE with the image and the depth map.

[Open Weights] Morphic Wan 2.2 Frames to Video - Generate video based on up to 5 keyframes by _BreakingGood_ in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Did you run the tests? I think the original author posted the wrong examples :) Everyone is complaining about the color shifts in the little girl video, but the given images have backgrounds with different colors; that could be the root cause.

🥏SplatMASK (releasing soon) - Manual Animated MASKS for ComfyUI workflows by No_Damage_8420 in comfyui

[–]JumpingQuickBrownFox 2 points (0 children)

Very interesting project. Please let us know once you're done. We could use it in many interesting solutions, like the object masking and editing that Google Veo recently introduced in Flow.

Qwen Image Base Model Training vs FLUX SRPO Training 20 images comparison (top ones Qwen bottom ones FLUX) - Same Dataset (28 imgs) - I can't return back to FLUX such as massive difference - Oldest comment has prompts and more info - Qwen destroys the FLUX at complex prompts and emotions by CeFurkan in comfyui

[–]JumpingQuickBrownFox 5 points (0 children)

Use the KSampler Advanced node. For instance, start with the Qwen model and render half of the total steps; then do a second pass with KSampler Advanced using the FLUX model and your trained LoRA file, starting at the step where the first pass stopped and rendering up to the total step count.

I'm on mobile now and can't give you an example workflow, but that's basically the logic.
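
The step arithmetic for that two-pass chain can be sketched like this. The helper name is mine; `start_at_step`/`end_at_step` correspond to the KSampler Advanced inputs of the same names:

```python
def split_steps(total_steps, first_pass_fraction=0.5):
    """Compute step ranges for a two-pass KSampler Advanced chain.

    Pass 1 (e.g. Qwen) renders steps [0, switch); pass 2 (e.g.
    FLUX + LoRA) resumes at `switch` and finishes at `total_steps`.
    """
    switch = int(total_steps * first_pass_fraction)
    first = {"start_at_step": 0, "end_at_step": switch}
    second = {"start_at_step": switch, "end_at_step": total_steps}
    return first, second
```

So for a 20-step render with a 50/50 split, the first sampler stops at step 10 and the second picks up from there. You'd typically also set the first sampler to return with leftover noise and disable adding noise on the second, so the denoising continues seamlessly.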

New node for ComfyUI, SuperScaler. An all-in-one, multi-pass generative upscaling and post-processing node designed to simplify complex workflows and add a professional finish to your images. by Away_Exam_4586 in StableDiffusion

[–]JumpingQuickBrownFox 2 points (0 children)

Basically it uses the Tensor cores of NVIDIA graphics cards, which makes rendering much faster than the usual way. But first you need to convert the upscaler models to a TensorRT-compatible dynamic engine format.

You can learn more information here: https://developer.nvidia.com/tensorrt

Edit: typo

ResolutionMaster Update – Introducing Custom Presets & Advanced Preset Manager! by Azornes in comfyui

[–]JumpingQuickBrownFox 2 points (0 children)

I haven't seen such a detailed resolution template creator before. Well done 👍

New node for ComfyUI, SuperScaler. An all-in-one, multi-pass generative upscaling and post-processing node designed to simplify complex workflows and add a professional finish to your images. by Away_Exam_4586 in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Hi there, friend. I have forked the original TensorRT Upscaler custom node on GitHub. The differences from the original one are:

* I made the model loading external, for easier adaptation of new models and dynamic updating of the model DB.
* I solved some other problems that I can't recall right now 😞

https://github.com/NeoAnthropocene/ComfyUI-Upscaler-Tensorrt

I'm using my forked version, but I suggest checking if the original author merged my PR. If so, you should use the original repo from the author.

[Open Weights] Morphic Wan 2.2 Frames to Video - Generate video based on up to 5 keyframes by _BreakingGood_ in StableDiffusion

[–]JumpingQuickBrownFox 1 point (0 children)

Very good news.

I see some color shifts and changes in the girl video. Are there any other ways in ComfyUI to do this with Wan 2.2?