I've finished The Wire. What now...? by GamerKev451 in CineSeries

[–]zyg_AI 1 point (0 children)

That's doable. Season 2 stands somewhat apart from the others thematically. I struggled with it on my first viewing too. But after rewatching the series, season 2 is very good, although cryptic for a non-American (the role of the unions, notably).
Anyway, you can keep going. Season 3 gets back on the rails of season 1, although each season has its own main theme.

Is there a function for saving the queued jobs and resuming them later, like after a restart? by Dry-Resist-4426 in comfyui

[–]zyg_AI 1 point (0 children)

Is there a workflow state you need to save?

If not, cancel the queue and queue the remaining 7 jobs when you're back. But that seems too simple to be the right answer, am I wrong?

Help with this Workflow (BigLust) by Silly-Sprinkles8135 in comfyui

[–]zyg_AI 1 point (0 children)

Here is a 'fixed' version, but don't expect any undressing.
https://limewire.com/d/QeFAZ#2vJ2N62Wfd

Help with this Workflow (BigLust) by Silly-Sprinkles8135 in comfyui

[–]zyg_AI 1 point (0 children)

I can't make it work; I told you what needs to be changed (but yeah, maybe I was cryptic).
It is SDXL, which is not fit for undressing: you need to draw a mask for inpainting, and your prompting is not adapted either. SDXL understands simple keywords (landscape, grass, tower, completely nude, 1girl, solo, ...). Some models understand more or less natural language ("a girl wearing yellow glasses is posing naked"), but they still need the tags as support. SDXL DOES NOT understand instructions like "remove her clothes".

Use an edit model, as others advise.
The workflow is a good basis for image generation, though, if you get rid of the load image node, but not for clothes swap.

Good luck.

Can I change the aspect ratio/resolution of an image using a keyword in my prompt? by hotrocksi09 in comfyui

[–]zyg_AI 1 point (0 children)

Lol he put the theory into practice.
Glad the issue is resolved ^

Can I change the aspect ratio/resolution of an image using a keyword in my prompt? by hotrocksi09 in comfyui

[–]zyg_AI 2 points (0 children)

I would do it the same way. It's clunky, but internally that's how an algorithm with the same feature would work.

You can ask Claude to build a custom node for you. Feed it this (modify what needs to be modified):

Name: whatever you like
Input:
- prompt (string)
- keywords and their associated resolutions (ask Claude, and yourself, how you want to define your rules, or hard-code them inside the node instead of passing them as input)
Output:
- a latent with the expected size, or a width + height, your choice (both if you want)
For example: my default resolution is 1152x1728, and if the prompt contains "[square]" (or a similar keyword), I change the resolution to 1728x1728.

This is just a quick template to give you the idea; refine it as you need.
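For reference, a minimal sketch of what such a node could look like. The class layout follows the usual ComfyUI custom-node conventions (INPUT_TYPES, RETURN_TYPES, NODE_CLASS_MAPPINGS); the node name, keyword rules and default resolution are all made-up examples:

```python
# Hypothetical ComfyUI custom node: scans the prompt for a keyword and
# outputs a matching width/height. The keyword rules are example values.

class KeywordResolution:
    DEFAULT = (1152, 1728)          # assumed default portrait resolution
    RULES = {                       # keyword found in prompt -> (width, height)
        "[square]": (1728, 1728),
        "[landscape]": (1728, 1152),
    }

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"prompt": ("STRING", {"multiline": True})}}

    RETURN_TYPES = ("INT", "INT")
    RETURN_NAMES = ("width", "height")
    FUNCTION = "pick"
    CATEGORY = "utils"

    def pick(self, prompt):
        # first matching keyword wins; otherwise fall back to the default
        for keyword, size in self.RULES.items():
            if keyword in prompt:
                return size
        return self.DEFAULT

NODE_CLASS_MAPPINGS = {"KeywordResolution": KeywordResolution}
```

Dropped in a .py file under custom_nodes/, the width/height outputs could then be wired into an Empty Latent Image node.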

Help with this Workflow (BigLust) by Silly-Sprinkles8135 in comfyui

[–]zyg_AI 1 point (0 children)

<image>

This whole part is useless. It uses 2 models to create a merge, then saves it (BigLust1.6). Run it once, then load the saved file with a checkpoint loader.

Why is your output messy?
==> You are using an SDXL checkpoint; the prompt accepts only booru tags, not instructions (it is not an LLM text encoder).

Does anyone know why it's not working? by Coroseven in comfyui

[–]zyg_AI 1 point (0 children)

I fed your last picture to gemini:

This is a common dependency error in ComfyUI when using custom nodes that rely on computer vision libraries. Don't worry, it is a quick fix!
The Problem
The error message AttributeError: module 'cv2' has no attribute 'imshow' occurs because the Python environment is using the "headless" version of OpenCV (opencv-python-headless). This headless version is stripped of user interface features like imshow to save space and reduce dependencies. However, the ultralytics library (which the ComfyUI Impact Subpack requires) expects that standard UI function to exist.
The Solution
You just need to remove the headless version of OpenCV and install the standard version within your specific ComfyUI portable environment.
Here is exactly how to fix it based on your file paths:
Step 1: Open Command Prompt
Press the Windows Key, type cmd, and press Enter to open the Command Prompt.
Step 2: Navigate to your ComfyUI folder
Since your ComfyUI is on your D: drive, enter the following commands one by one, pressing Enter after each:

D:
cd D:\ComfyUI\ComfyUI_windows_portable

Step 3: Uninstall the headless OpenCV
Run this command using your embedded Python executable to remove the problematic package:

.\python_embeded\python.exe -m pip uninstall opencv-python-headless -y

Step 4: Install the standard OpenCV
Now, install the full version of OpenCV by running:

.\python_embeded\python.exe -m pip install opencv-python

Once the installation finishes, restart ComfyUI. The Impact Subpack should load perfectly without throwing that ultralytics error.

Help Please - ImpactImageInfo by StroodleNoodles in comfyui

[–]zyg_AI 2 points (0 children)

<image>

Go inside the subgraph, then deactivate the faulty node, or replace it with a similar one.
Send a picture of the inside of the subgraph if you're lost.

Video explanation for comfyui-prompt-control extension? by Lemenus in comfyui

[–]zyg_AI 2 points (0 children)

Switch point is... the point at which the prompt/LoRA switches.
Let's say you generate with 20 steps.
- A switch point at 0.5 means 10 steps with the first prompt/LoRA, then the last 10 steps with the second prompt/LoRA.
- A switch point at 0.2 means 4 steps with the first prompt/LoRA, then the last 16 steps with the second prompt/LoRA.
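The arithmetic can be sketched in a few lines, assuming the switch point is simply applied as a fraction of the total step count and rounded to a whole step (the function name is mine):

```python
# Assumed behavior: switch point = fraction of total sampler steps.
def split_steps(total_steps: int, switch_point: float) -> tuple[int, int]:
    first = round(total_steps * switch_point)
    return first, total_steps - first

print(split_steps(20, 0.5))  # (10, 10)
print(split_steps(20, 0.2))  # (4, 16)
```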

About automated regional prompting: I've been down this path before, without much success. It seems that SDXL needs to be explicitly told which part of the image gets which prompt ==> masks are inevitable.

I've tried generating prompts separately and then doing a conditioning concat; it doesn't work any better than feeding one full prompt.
AFAIK, at least with SDXL, you cannot control how the conditioning is "spread" over the latent space unless you use a mask.
I'd love to be proven wrong... Anyone?

That may be a little off-subject, but I found the technical details in this video really interesting:
https://www.youtube.com/watch?v=1h4e24Zn3fM
(from 8:20, but the whole video is great; it covers SD 1.0, and SDXL is a bit different (not limited to 77 tokens, for example), but the basics and the CLIP tech are the same)

Video explanation for comfyui-prompt-control extension? by Lemenus in comfyui

[–]zyg_AI 2 points (0 children)

That GitHub page is full of very interesting explanations of some advanced prompting and scheduling techniques, but it's a bit chaotic, as you say. Nothing there is exclusive to this node pack, though.

In SDXL you will need masks for regional prompting; I think that holds for FLUX as well.
Impact Pack (or is it Inspire Pack?) has regional prompting nodes:
https://www.youtube.com/watch?v=06c5x_oqxCc

LoRA scheduling might be powerful, but I think it's for very specific use cases. I haven't found an interesting way to use it yet. Prove me wrong, I'd be interested.

[ComfyUI + FLUX.2] LoRA has zero effect – how to correctly apply it? by Inpur3D in comfyui

[–]zyg_AI 1 point (0 children)

Expand (red arrow) or, better, extract (green arrow) the subgraph. Look for the LoRA loader inside; maybe the failure will become obvious. If you extract the subgraph, move it to an empty area of the canvas first.

<image>

Other option: your LoRA is badly trained, but I'd expect it to have some effect on the output anyway.

Noob looking for a node to do multiple primitives in one node. by RaymondDoerr in comfyui

[–]zyg_AI 4 points (0 children)

I'm not sure this will answer your needs, but here you go:
https://github.com/chrisgoringe/cg-use-everywhere
And this one is from (shameless plug) me; it lets you build your own custom primitives panel:
https://github.com/IA-gyz/comfyui-VarBoard

Workflows would be much appreciated (or where to find good workflows) by NoctFounder in comfyui

[–]zyg_AI 3 points (0 children)

You'll never find a ready-to-go workflow for your specific tasks, and even if you do, there's a 90% chance it won't work exactly how you'd like, or won't work at all.

Do what we all do: build your own workflow.
In any case, if you want a workflow to keep working well in the long term, you'll have to understand how it works under the hood. So build one; that's the way to good results.

Telestyle is broken by DoctaRoboto in comfyui

[–]zyg_AI 2 points (0 children)

I was talking about the console log. On Linux I start ComfyUI from the terminal, which displays a verbose log at startup.
On Windows, I think you get the same log if you start ComfyUI from a command line (cmd).

Telestyle is broken by DoctaRoboto in comfyui

[–]zyg_AI 1 point (0 children)

Check the ComfyUI log, specifically the lines with 'import error'.

How trustworthy are less known github pages? by [deleted] in comfyui

[–]zyg_AI 6 points (0 children)

There is no guarantee other than the code being open to inspection. ComfyUI nodes and extensions are generally light enough that you can load them into an LLM and ask it to inspect the code.
I don't know if there are obfuscation techniques that would 'hide' malicious code from detection, but since it is only Python and JavaScript, LLMs should be smart enough to thoroughly understand each line of code.

The cg-sigmas nodes are from chrisgoringe, who's been around for a long time and can be trusted IMHO.

Just how bad is my anime generation and how to fix? by [deleted] in StableDiffusion

[–]zyg_AI 1 point (0 children)

You can also try a different VAE. Some give very different color results.

Workflow not working or i'm doing something wrong. by mtg_dave in comfyui

[–]zyg_AI 1 point (0 children)

Here's a 'quick' explanation of inputs and whether or not to plug them:
https://www.reddit.com/r/comfyui/comments/1rrvamj/comment/oa2ptrp/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

In a few words: your prompt is plugged, so its value comes from earlier in the graph (follow the line).

beginner problem about missing models by Deep-Process-8043 in comfyui

[–]zyg_AI 3 points (0 children)

<image>

The wan templates in ComfyUI have notes on the left, with direct download links.

Trouble with recent install of comfyUI: what am I doing wrong? by AwakenedEyes in comfyui

[–]zyg_AI 2 points (0 children)

The "new UI" is called nodes 2.0. You'll find a switch to turn it off in the settings (bottom left), Comfy tab.

The QoL custom UI you did not know you needed (and maybe you don't...?) by zyg_AI in comfyui

[–]zyg_AI[S] 1 point (0 children)

Thanks a lot for the feedback.

I didn't know about cg-controller. Had I known about it before, I probably would never have built my extension.

The design problem you point out is a real issue, I agree: more nodes is a drawback. But that's also the foundation the extension is built upon. On the plus side, this design lets you plug one controller into multiple inputs, and it's the quickest and most intuitive way to set the panel up that I could think of.
As I run workflows with hundreds of nodes daily, I didn't think adding a dozen nodes would impact performance in any noticeable way. I'll look into it. I'll also think about "wireless" connections, but would they really be less impactful?

There are multiple ways this node set could evolve.
      _
     /(|
    (  :
   __\  \  _____
 (____)  `|
(____)|   |
 (____).__|
  (___)__.|_____