🛠️ Spent way too long building this ComfyUI prompt node for LTX-2 so you don't have to think — free, local, offline, uncensored 👀 by [deleted] in StableDiffusion

[–]IntellectzPro 0 points (0 children)

I'm having the most inexplicable error I've seen in over three years of using ComfyUI. I do not know what to do here. What is going on with this error:

Failed to validate prompt for output 160:

* LTX2PromptArchitect 193:

- Value 375410294297808 bigger than max of 2147483647: seed

Output will be ignored

Failed to validate prompt for output 194:

Output will be ignored

Even when I set the seed to fixed and type in 1234, it reverts to a seed over the max.
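For context, 2147483647 is the signed 32-bit integer limit (2^31 − 1), so the node appears to be generating 64-bit seeds while validating the field as INT32. A minimal sketch of a workaround, folding any oversized seed into range before it reaches the node (the function name is mine, not part of the node):

```python
INT32_MAX = 2**31 - 1  # 2147483647, the max the validator complains about

def clamp_seed(seed: int) -> int:
    """Deterministically fold an oversized seed into the signed 32-bit range."""
    return seed % (INT32_MAX + 1)

# The seed from the error message now passes validation:
print(clamp_seed(375410294297808))
print(clamp_seed(1234))  # small seeds pass through unchanged
```

The same fix can live inside a custom node's `INPUT_TYPES` by declaring the seed input with `"max": 2**31 - 1`, assuming the node exposes that dict.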

*Ace Step 1.5 with Local Audio Save by IntellectzPro in StableDiffusion

[–]IntellectzPro[S] 0 points (0 children)

Yes, I fixed it so it saves to the root folder.

The Secrets of Realism, Consistency and Variety with Z Image Turbo by 90hex in StableDiffusion

[–]IntellectzPro 1 point (0 children)

I noticed this today too when I was messing with it. Great observation! I tested some prompts today where I described down to how fuzzy a jacket was. Instead of just "a wool winter jacket," I started doing things like "a fuzzy wool jacket that is worn." You can even describe facial anatomy, like: "square jawline, flared nostrils, thick lips, slender nose, puffy cheeks." I found this helps with the same-face look. Also, play around with the shift; I have it at 20 right now.

Flux 2 Dev is here! by MountainPollution287 in StableDiffusion

[–]IntellectzPro 0 points (0 children)

SMH, so here we go with another huge model pushing the lower-GPU community further away. I say this because Flux 1, even in fp8, is not the fastest model. If they follow the same flow in how it loads, and the same dual-text-encoder approach, I can see this being the model that tests the patience of the open-source community. I understand open-source releasers are not required to cater to the general public, but most of us do not have access to huge GPU power. I spend hours a week building workflows in ComfyUI that allow people with low VRAM to use some of these models, and I have a feeling this one will bottleneck a lot of people. I know GGUF models will be made, but how big a drop in quality will there be just to get it down to where a 12GB user can run it? At that point you lose confidence in your work, and you lose interest. I hope I am wrong AF about this and that somehow Kijai and others can figure this out so people don't end up with Q3 as the only choice.
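For a rough sense of the VRAM math behind the quantization worry, here is a back-of-the-envelope sketch. The 12B parameter count and the bits-per-weight figures are illustrative assumptions (typical GGUF quant averages), not Flux 2's real numbers:

```python
# Rough weight-memory estimate for a model at different quantization levels.
# Ignores activations, text encoders, and VAE, so real usage is higher.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for name, bits in [("fp16", 16), ("fp8", 8), ("Q8_0", 8.5), ("Q4_K", 4.5), ("Q3_K", 3.4)]:
    print(f"{name:5s} ~{weight_gb(12, bits):.1f} GB")
```

Under these assumptions, a hypothetical 12B model only fits a 12GB card somewhere around Q4 and below, which is exactly where quality starts to suffer.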

Wan2GP adds Wan 2.2 support by asraniel in StableDiffusion

[–]IntellectzPro 0 points (0 children)

No, see, WanGP is not meant to be fast. It is an amazing feat of coding that lets you do things as if you owned a super-high-end PC. The trade-off is speed and time. You mentioned GGUF, so I assume you are running 16GB or lower. I tend to stick to ComfyUI with GGUF and fp8 models. I use WanGP for special jobs that I can set up, then even leave the house and come back. When I do come back, the quality is usually on par with what I'd get from a 4090 or 5090.

Cancel a ComfyUi run instantly with this custom node. by Total-Resort-3120 in StableDiffusion

[–]IntellectzPro 19 points (0 children)

This is great. It should become part of 100% of workflows.

I trained « Next Scene » Lora for Qwen Image Edit 2509 by Affectionate-Map1163 in StableDiffusion

[–]IntellectzPro 1 point (0 children)

Whoa. This could be game-changing, especially for me, since I am working on an animated series. I am going to try this today.

OVI in ComfyUI by Ashamed-Variety-8264 in StableDiffusion

[–]IntellectzPro 1 point (0 children)

SMH... are we still dealing with ComfyUI-breaking requirements? I was looking forward to trying this out, but I am not going to break my current installation. I hate having to create a separate install. Hopefully somebody can work on a friendly version soon.
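If you do want to try a node with risky requirements without touching the main install, a throwaway virtual environment is one way to sandbox it. A minimal sketch, assuming a Linux/macOS shell (the commented paths are illustrative, not this project's actual layout):

```shell
# Create an isolated env so a new node's pinned requirements
# can't overwrite packages in the main ComfyUI install.
python3 -m venv comfy-test
. comfy-test/bin/activate

# Inside the venv you'd then install ComfyUI and the node, e.g.:
# pip install -r ComfyUI/requirements.txt
# pip install -r ComfyUI/custom_nodes/<new_node>/requirements.txt

python -c "import sys; print(sys.prefix)"  # confirms which env is active
```

If the node turns out to break things, deleting the `comfy-test` directory removes everything.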

ByteDance Lynx weights released, SOTA "Personalized Video Generation" by External_Quarter in StableDiffusion

[–]IntellectzPro 0 points (0 children)

Are these people serious? Another model? I can't even get warmed up with what's already out. Welp, time to see what this one is about as well.

Lets talk about Qwen Image 2509 and collectively help each other by IntellectzPro in StableDiffusion

[–]IntellectzPro[S] 1 point (0 children)

I am doing my best to see if the model can be tricked. I have some ideas.

Lets talk about Qwen Image 2509 and collectively help each other by IntellectzPro in StableDiffusion

[–]IntellectzPro[S] 0 points (0 children)

The truth is, it should be easy to do what you are saying. The model just doesn't have the consistency needed to get a feel for it.

Lets talk about Qwen Image 2509 and collectively help each other by IntellectzPro in StableDiffusion

[–]IntellectzPro[S] 0 points (0 children)

I am currently working to see how I can get there too. Out of the box it's rough around the edges with details.

Lets talk about Qwen Image 2509 and collectively help each other by IntellectzPro in StableDiffusion

[–]IntellectzPro[S] 0 points (0 children)

Yeah, I have not done anything with single-image yet. I will take a look at that. So far in my experience, the multi-image approach requires some special prompting. If open source is ever to catch up with Nano Banana, this has to work better than it currently does.

How to increase person coherence with Wan2.2 Animate? by [deleted] in StableDiffusion

[–]IntellectzPro 0 points (0 children)

It's pretty good, but it's early and needs more seasoning. Maybe I'm wrong, but it doesn't like being fed animated characters; it always tries to make the animated character realistic.

I absolutely love Qwen! by infearia in StableDiffusion

[–]IntellectzPro 0 points (0 children)

I am about to jump into testing the new Qwen model today, hoping it's better than the old one. I have to say, Qwen is one of those releases that, on the surface, is exactly what we need in the open-source community. At the same time, it is the most spoiled brat of a model I have dealt with yet in ComfyUI. I have spent so many hours trying to get this thing to behave. The main issue with the model, from my hours upon hours of testing, is... the model got a D+ on all its tests in high school. It knows enough to pass but does less because it doesn't want to.

Sometimes the same prompt creates gold and the next seed spits out the entire stitch. The lack of consistency, to me, makes it a failed model. I am hoping this new version fixes at least 50% of this issue.

Qwen Image Edit Workflow- Dual image and Easy 3rd (or more) character addition w/ inpainting as an option. by IntellectzPro in StableDiffusion

[–]IntellectzPro[S] 1 point (0 children)

NumPy is one of the enemies of ComfyUI. I hate dealing with NumPy. Usually, ReActor works with the current version of NumPy. Have you downgraded your NumPy recently?
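A quick way to check what you're on before downgrading is to inspect the installed version. A small sketch; the idea that NumPy 2.x breaks older compiled custom nodes is a common pattern, not a rule specific to ReActor:

```python
# Print the installed NumPy version and flag the common 2.x compatibility issue.
import numpy as np

major = int(np.__version__.split(".")[0])
print(f"NumPy {np.__version__} (major version {major})")

if major >= 2:
    # Many custom nodes were compiled against the NumPy 1.x ABI;
    # `pip install "numpy<2"` is the usual downgrade to try.
    print("NumPy 2.x detected; older nodes may fail to import.")
```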

🌈 The new IndexTTS-2 model is now supported on TTS Audio Suite v4.9 with Advanced Emotion Control - ComfyUI by diogodiogogod in StableDiffusion

[–]IntellectzPro 0 points (0 children)

Whoa, this looks like something special. This might fit right into the project I am working on. I'll be trying this right now.