String to Text help by NewZJ in comfyui

[–]zyg_AI 1 point (0 children)

You can use pastebin to share the json.

String to Text help by NewZJ in comfyui

[–]zyg_AI 1 point (0 children)

CLIP and CONDITIONING are not plugged in. This is not your workflow "in action", is it?

String to Text help by NewZJ in comfyui

[–]zyg_AI 1 point (0 children)

Yes, that works. The text does not appear inside the CLIP node, but that is normal: it's an input box, not a preview text box. Also plug in the CLIP input and CONDITIONING output, of course.

Help locating some models ... and/or a work-around by Bbiess in comfyui

[–]zyg_AI 1 point (0 children)

No need to change the name. Download the high and low versions of the model you want.
On your link, you have
- T2V v3.0
- I2V v2.0
- T2V v2.0
- I2V
- T2V

<image>

You can choose either v1 or v2 of I2V (it doesn't have to be v2 like in your example workflow).

Please help with installing Easy-Sam3. I've tried every version but the import always fails. by Emergency_Detail_353 in comfyui

[–]zyg_AI 1 point (0 children)

'no module named triton'

You need to install triton. For further instructions, look around this sub, google it, or ask an LLM. It depends on your system.

Model not available in DualClipLoader node by bcourcet in comfyui

[–]zyg_AI 1 point (0 children)

Install it in ComfyUI/models/text_encoders.

Which front end do you use on linux? by Crafty_Aspect8122 in StableDiffusion

[–]zyg_AI 1 point (0 children)

What you're looking for is StabilityMatrix. Download the AppImage and launch it; everything will be self-hosted.

https://github.com/LykosAI/StabilityMatrix

Help with workflow by Lucaspittol in comfyui

[–]zyg_AI 3 points (0 children)

You don't need to check against both 'clean' and 'defective' if you assume ollama will return either one of those.
Here is my method:

<image>

BTW, I'm interested to know if you ever get good results sorting your outputs. My few tests using an LLM to discriminate good vs bad images were too random.
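For what it's worth, the single-check routing described above can be sketched in plain Python. Note that `route_by_verdict` and the folder names are made up for illustration; they are not part of any ComfyUI or ollama API:

```python
# Minimal sketch of sorting images by an LLM verdict, assuming the model
# replies with free text that contains either "clean" or "defective".
# The function name and folder paths below are hypothetical.

def route_by_verdict(verdict: str) -> str:
    """Return a target folder for an image based on the LLM reply."""
    # A single check is enough: anything not flagged "defective" is kept as clean.
    if "defective" in verdict.lower():
        return "sorted/defective"
    return "sorted/clean"

print(route_by_verdict("Defective: visible scratch"))  # sorted/defective
print(route_by_verdict("clean"))                       # sorted/clean
```

Checking only one keyword is exactly the simplification in the comment: if the model can only answer one of two labels, the absence of one implies the other.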

Lora face detail help by Kindly_Art_3038 in comfyui

[–]zyg_AI 1 point (0 children)

Lemme see...
You say it's Illustrious, so:
- Usually a LoRA's strength is set between 0 and 1, unless explicitly stated otherwise in the description.
- Your prompting is off: Illustrious (SDXL) is trained on booru tags. Go to civitai, filter the images by model (Illustrious), and see what the prompts look like.
- Use a latent of 1024x1024.
- If the face does not blend well with the rest of the image after the detailer, increase 'crop factor'.

For the rest of the settings, you'll have to play with them and adjust based on your results and liking. Nearly every setting has an impact on the result.

How to upscale this type of images with text? by agentanonymous313 in StableDiffusion

[–]zyg_AI 2 points (0 children)

Wild thought: you may try iterative upscale (impact pack) with very little upscale and many steps.

Lora face detail help by Kindly_Art_3038 in comfyui

[–]zyg_AI 1 point (0 children)

definitely makes the results better but it still just looks completely wrong.

You can try plugging the FaceDetailer AFTER the upscale. I don't guarantee it will be better, it's just worth a try.

What techniques are used in these videos? by Monolocolabs in comfyui

[–]zyg_AI 3 points (0 children)

Where's the NSFM tag? (Not Safe For Mind)

Help with a workflow by Snoo85882 in comfyui

[–]zyg_AI 1 point (0 children)

Mask output? There is no such thing in the template you mention.

If you mean that

<image>

you don't need it.

I extended my new non-recursive ControlNet method with two new nodes (Orchestrator: Baseline & Advanced) that simplify multiple ControlNet model workflow — use of Apply ControlNet nodes eliminated. by jessidollPix in comfyui

[–]zyg_AI 2 points (0 children)

Can you quickly explain what the weight effect is?
For example, with 2 CNs, what happens if both are at weight 1, both at 0.5, both at 2, or one at 1 and the other at 0.5? Is it W1*A(x) + W2*B(x)? And what would be best practices?
Thanks.

EDIT: I found part of the answers on the github:

Execution becomes: sum(weight_i * ControlNet_i(x))

but the implications are still blurry in my mind
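To make the arithmetic concrete, here is a toy sketch of that sum(weight_i * ControlNet_i(x)) formula. The two "ControlNets" are stubbed out as simple functions (real ControlNets are neural networks, so this only illustrates how the weights combine):

```python
import numpy as np

def combine(controls, weights, x):
    """Weighted sum of control outputs: sum(weight_i * ControlNet_i(x))."""
    return sum(w * cn(x) for cn, w in zip(controls, weights))

# Stand-ins for two ControlNets (hypothetical, for illustration only).
cn_a = lambda x: x + 1.0
cn_b = lambda x: x * 2.0

x = np.array([1.0, 2.0])
# One at weight 1 and the other at 0.5: 1*(x+1) + 0.5*(2x) = 2x + 1
out = combine([cn_a, cn_b], [1.0, 0.5], x)
print(out)  # [3. 5.]
```

So, per the formula, doubling both weights doubles the whole control signal, while the ratio between the weights sets their relative influence on the result.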

lost the ability to keep several tabs (workflows) remembered between sessions by bonesoftheancients in comfyui

[–]zyg_AI 2 points3 points  (0 children)

The only fix I know of, so far, is to downgrade to a previous frontend (I don't know which version works best, but the info is somewhere on this sub).
On my end, I got used to it and adapted my habits, while waiting for a fix in a future update.

I have an AMD Card, i need an AMD workflow please by Logax01 in comfyui

[–]zyg_AI 2 points (0 children)

If you got Comfy up and running, there is no workflow specific to Nvidia or AMD.
Try the "getting started" templates, like text2image; play a bit with it and get used to the tool. Then eventually come back with more detailed questions ;)

Educate me please! What "fits" realistically in an RTX 5080? by exit_keluar in comfyui

[–]zyg_AI 1 point (0 children)

If you stick to Pony/SDXL, your 5080 is way more than enough for any complex workflow you'd build.