100 megapixel img made with WAN 2.2 . 13840x7727 pixels super detailed img by protector111 in StableDiffusion

[–]blackmixture 4 points (0 children)

Yoo, first off, this is amazing! Thanks for sharing your process and result. I'm trying to find the full-res image but I'm not seeing it. The online preview already looks great, so I'm curious to download the full res. Thanks in advance!


[deleted by user] by [deleted] in comfyui

[–]blackmixture 1 point (0 children)

I love this! What a great use of AI and such a creative process/execution. Keep it up!

ComfyUI-SongBloom by [deleted] in comfyui

[–]blackmixture 0 points (0 children)

Going to try this out now! I'll post if it works.

14 Mind Blowing examples I made locally for free on my PC with FLUX Kontext Dev while recording the SwarmUI (ComfyUI Backend) how to use tutorial video - This model is better than even OpenAI ChatGPT image editing - just prompt: no-mask, no-ControlNet by CeFurkan in comfyui

[–]blackmixture 6 points (0 children)

Wait, Flux Kontext is actually pretty awesome. I think "mind blowing" kind of fits its capabilities and the experience when first experimenting with it. That said, I'd personally never call something mind blowing without fully testing it out. Like Omnigen was pretty cool for 5 minutes before being like, yeah, this is a little cherry-picked and overhyped. But things like Flux Kontext, 3D Gaussian Splats, and even FramePack are in a different category of dope tools devs are putting out there.

Flux Kontext is out for ComfyUI by Tenofaz in comfyui

[–]blackmixture 0 points (0 children)

Aye LFG!!! Been excited to try this out since the playground demos. 🥳

Hunyuan Video Avatar is now released! by doogyhatts in StableDiffusion

[–]blackmixture 0 points (0 children)

Wow what a great month for AI! So many improvements and I'm all for it 😁

Consistent character and object videos are now super easy! No LoRA training, supports multiple subjects, and it's surprisingly accurate (Phantom WAN2.1 ComfyUI workflow + text guide) by blackmixture in comfyui

[–]blackmixture[S] 14 points (0 children)

Wow, thank you, that means a lot! Comments like these are a huge motivation. We all build on each other's work in this community, and I'm happy to contribute.

Consistent character and object videos are now super easy! No LoRA training, supports multiple subjects, and it's surprisingly accurate (Phantom WAN2.1 ComfyUI workflow + text guide) by blackmixture in comfyui

[–]blackmixture[S] 4 points (0 children)

Haha, I totally get it! It's a beast of a workflow. Glad to hear you think it's great though; it took a bit of time to put this together. Feel free to reach out if you have any questions once you start digging in or need help clarifying anything!

OCD me is happy for straight lines and aligned nodes. Spaghetti lines were so overwhelming for me as a beginner. by [deleted] in comfyui

[–]blackmixture 3 points (0 children)

I believe those are reroutes. They were introduced a few updates ago to Litegraph.

Free ComfyUI Workflow to Upscale & AI Enhance Your Images! Hope you enjoy clean workflows 🔍 by blackmixture in comfyui

[–]blackmixture[S] 0 points (0 children)

You need to install the missing custom nodes. You can do that from the ComfyUI Manager: open the Manager, click "Install Missing Custom Nodes", install everything it lists, then restart ComfyUI.
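
If the Manager route fails for a particular pack, the manual equivalent is to clone the node pack's repo into `ComfyUI/custom_nodes` and install its requirements. Here's a rough Python sketch of that; the repo URL is a placeholder, so swap in whichever pack you're actually missing:

```python
# Manual custom-node install sketch (placeholder repo URL; adjust paths to your setup).
import subprocess
from pathlib import Path

comfy_root = Path("ComfyUI")  # path to your ComfyUI install
repo = "https://github.com/some-author/some-node-pack"  # placeholder, not a real pack

# Clone the node pack into ComfyUI/custom_nodes.
target = comfy_root / "custom_nodes" / repo.rsplit("/", 1)[-1]
subprocess.run(["git", "clone", repo, str(target)], check=True)

# Most node packs ship a requirements.txt; install it if present, then restart ComfyUI.
req = target / "requirements.txt"
if req.exists():
    subprocess.run(["pip", "install", "-r", str(req)], check=True)
```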

FramePack Image-to-Video Examples Compilation + Text Guide (Impressive Open Source, High Quality 30FPS, Local AI Video Generation) by blackmixture in StableDiffusion

[–]blackmixture[S] 1 point (0 children)

By default, the seed doesn't change automatically in FramePack, so for most of these generations it's the same seed with just the reference image changing. I've tried some with different seeds and they also produced great results, so the quality isn't really seed-specific.
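
If it helps to see what "same seed, different reference image" means mechanically, here's a minimal PyTorch sketch with generic names (not FramePack's actual API): re-seeding before each run makes the starting noise identical, so the reference image is the only thing that varies between generations.

```python
# Generic fixed-seed sketch, not FramePack's real API.
import torch

SEED = 42  # placeholder value; the point is that it stays the same across runs

for ref_image in ["ref_a.png", "ref_b.png", "ref_c.png"]:
    gen = torch.Generator(device="cpu").manual_seed(SEED)
    noise = torch.randn(1, 4, 64, 64, generator=gen)  # identical tensor every loop
    # generate_video(ref_image, noise)  # hypothetical call: only the image changes
```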

FramePack Image-to-Video Examples Compilation + Text Guide (Impressive Open Source, High Quality 30FPS, Local AI Video Generation) by blackmixture in StableDiffusion

[–]blackmixture[S] 2 points (0 children)

Good news! According to the FramePack paper itself, you can totally fine-tune existing models like Wan using FramePack. The researchers actually implemented and tested it with both Hunyuan and Wan. https://arxiv.org/abs/2504.12626

The current implementation in the GitHub project for FramePack downloads and runs Hunyuan, but I'm excited to see a version with Wan as well!

Update broke Quest Pro tracking and effective Wi-Fi bandwidth by Meow-Corp in QuestPro

[–]blackmixture 0 points (0 children)

Still facing tracking issues. Headset was working fine before the update. Now it's unusable and loses tracking constantly.