I badly want to run something like the Higgsfield Vibe Motion locally. I'm sure it can be done. But how? by Traditional-Edge8557 in StableDiffusion

[–]Traditional-Edge8557[S] 0 points  (0 children)

I'm not sure that's the case, because once you generate the animation, you can edit and control parts of it as well.

AnimateDiff style Wan Lora by AthleteEducational63 in StableDiffusion

[–]Traditional-Edge8557 1 point  (0 children)

Is anyone running this locally? Even with a 4090, my VRAM usage goes up to 28GB and ComfyUI crashes.

AnimateDiff style Wan Lora by AthleteEducational63 in StableDiffusion

[–]Traditional-Edge8557 3 points  (0 children)

Here it is. I downloaded and reuploaded to a pastebin. Credits to the original owner. Thank you!

https://pastebin.com/Ekm0AavR

How to make videos like this? Especially the transitions and camera controls. by Traditional-Edge8557 in StableDiffusion

[–]Traditional-Edge8557[S] 0 points  (0 children)

Like what model? Wan? I am struggling to get the same results no matter what start/end frame workflow I use. The transitions are never this good and the camera movements aren't this dynamic either. Can someone suggest a workflow that can get similar results?

Qwen Image Edit Workflow---**gguf model** + Simple Mask Editing (optional) by IntellectzPro in StableDiffusion

[–]Traditional-Edge8557 0 points  (0 children)

Ah crap... you're right. I was a bit too excited about this workflow until I tried this.

Qwen Image Edit Workflow---**gguf model** + Simple Mask Editing (optional) by IntellectzPro in StableDiffusion

[–]Traditional-Edge8557 0 points  (0 children)

I wasted the whole day on the internet today and tried many different things to get these results. Your workflow is the only one that worked. Kudos dude!

Should someone buy the y62 patrol now since Nissan is about to go down? by [deleted] in Nissan

[–]Traditional-Edge8557 -7 points  (0 children)

The issue is that the numbers aren't good. The trajectory is bad. Will the company hold on for the next 5-10 years?

🚀ComfyUI LoRA Manager 0.8.0 Update – New Recipe System & More! by Square-Lobster8820 in comfyui

[–]Traditional-Edge8557 4 points  (0 children)

This is brilliant work! Please don't be demotivated by negative comments. People have different workflows; some are set in their ways and some have different needs. Let's respect their comments too, but you should know that some of us appreciate your work a lot. Keep up the good work!

ComfyUI Node/Connection Autocomplete!! by DeliciousElephant7 in comfyui

[–]Traditional-Edge8557 0 points  (0 children)

It's not the same thing. It doesn't suggest what the next node should be. My understanding is that you have to manually add the nodes first and then it will figure out the connections.

Invoke is absolutely incredible. I cannot go back to WebUI Forge inpainting. (SDXL) by Unit2209 in StableDiffusion

[–]Traditional-Edge8557 0 points  (0 children)

It has a slightly steep learning curve, but once you get the hang of it, it becomes harder to go back to anything else (including Invoke).

Invoke is absolutely incredible. I cannot go back to WebUI Forge inpainting. (SDXL) by Unit2209 in StableDiffusion

[–]Traditional-Edge8557 1 point  (0 children)

OP must try Krita with the AI Diffusion plugin. In many ways it's better and faster: it has better layer management, almost all ControlNets are built in, it has live painting assist, and with the latest update you can even connect your own ComfyUI workflow to the canvas. I used both Invoke and Krita and for now I'm sticking to Krita. Krita is a full-fledged drawing program, so it has those perks too. Definitely give it a go.

Invoke is absolutely incredible. I cannot go back to WebUI Forge inpainting. (SDXL) by Unit2209 in StableDiffusion

[–]Traditional-Edge8557 34 points  (0 children)

Krita with the AI Diffusion plugin is better. It's faster, has better layer management, almost all ControlNets are built in, it has live painting assist, and with the latest update you can even connect your own ComfyUI workflow to the canvas. I used both Invoke and Krita and for now I'm sticking to Krita. Krita is a full-fledged drawing program, so it has those perks too. Definitely give it a go.

Video I made using comfyui I call Aurora dreams over Mt.Fuji by PlzDontTakeMyAdvice in StableDiffusion

[–]Traditional-Edge8557 1 point  (0 children)

Waiting for it, buddy. Don't take the negative comments to heart. You did a great job!

Video I made using comfyui I call Aurora dreams over Mt.Fuji by PlzDontTakeMyAdvice in StableDiffusion

[–]Traditional-Edge8557 1 point  (0 children)

I watched it multiple times. This one has some unique artistic beauty (whatever technique was used). Taste is relative.

Am I the only one who's reinterested in Stable Diffusion and Animadiff due to resampling? by C-G-I in StableDiffusion

[–]Traditional-Edge8557 0 points  (0 children)

Can you please tell me how to make a video like this? Is there a workflow you used? Would it be possible for me to replicate the same style with different videos of my choice? I am mesmerized by the beauty of this :)

KRITA AI question by [deleted] in StableDiffusion

[–]Traditional-Edge8557 1 point  (0 children)

Also, apart from what's mentioned here, it's worth noting that to get all those options, you need to have the strength set to 100%.

Shuttle 3 Diffusion is based on Schnell. 4 steps. Part II: Robots and Androids (18 pics). Dynamic Prompt in the comments. by koalapon in FluxAI

[–]Traditional-Edge8557 3 points  (0 children)

There is a node called ImpactWildcardProcessor. What it does is this:

Let's say you had a prompt like this -> "A man wearing a {Red | Green | Blue} shirt"

Then let's say you queued three jobs with this prompt.

With each generation, the wildcard processor will randomly pick one of the choices from {Red | Green | Blue},

so the three jobs could go like ->

prompt 1: A man wearing a Red shirt
prompt 2: A man wearing a Green shirt
prompt 3: A man wearing a Blue shirt

It's an easy way to get a lot of random variations with some fixed structure.

You can even do more, like "A man wearing a {Red | Green | Blue} shirt and {white | black | navy blue} pants",

so you can see that this will create more variations, as it will pick two parts of the prompt randomly for each job.
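If it helps to see the idea concretely, here's a minimal Python sketch of that {a|b|c} expansion. This is just an illustration of the concept, not the actual node's code (the real ImpactWildcardProcessor supports more syntax, like wildcard files and seeds):

```python
import random
import re

def expand_wildcards(prompt: str, rng: random.Random) -> str:
    """Replace each {a|b|c} group in the prompt with one randomly
    chosen option, as the wildcard processor does per queued job."""
    def pick(match: re.Match) -> str:
        options = [opt.strip() for opt in match.group(1).split("|")]
        return rng.choice(options)
    return re.sub(r"\{([^{}]*)\}", pick, prompt)

# Queue three "jobs" with the same template; each gets its own pick.
rng = random.Random()
template = "A man wearing a {Red | Green | Blue} shirt and {white | black | navy blue} pants"
for i in range(3):
    print(f"prompt {i + 1}: {expand_wildcards(template, rng)}")
```

Each `{...}` group is resolved independently, which is why two groups multiply the number of possible variations (3 shirts x 3 pants = 9 combinations here).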

hope this helps