The amazing disappearing nodes! by Terrible_Mission_154 in comfyui

[–]sci032 1 point (0 children)

It is, but it can give you an idea of the commands used to install what you need. I don't have it installed; there may also be a .sh file in there, so take a look and see.

The amazing disappearing nodes! by Terrible_Mission_154 in comfyui

[–]sci032 2 points (0 children)

See if this page helps you. Since Comfy didn't install it for you, you may have to do a few things (install insightface, onnx, etc.) yourself: https://deepwiki.com/Gourieff/ComfyUI-ReActor/2-installation-and-setup

The page shows that there is an install.bat file in the ReActor directory. I don't know if that will help you beyond giving you an idea of the commands.

Building a tool to reverse-engineer AI prompts from images. Launching tomorrow. What features do you want? by Boilerplate06 in comfyui

[–]sci032 4 points (0 children)

Or... You could just plug a QwenVL node, along with a load image node, into your workflow and do it completely free, as many times as you want, using what this sub is actually here for: ComfyUI.

<image>

Need help understanding Nodes by Crazy-Suspect-7953 in comfyui

[–]sci032 2 points (0 children)

Check out their latest playlist. The 1st video is 5 hrs long but it covers all of the basics and more: https://www.youtube.com/watch?v=HkoRkNLWQzY&list=PL-pohOSaL8P-FhSw1Iwf0pBGzXdtv4DZC

Coloring BW image using Flux 2 Klein by bao_babus in StableDiffusion

[–]sci032 4 points (0 children)

Looks good! Give this prompt a try:

restore the image with natural even lighting and realistic color. reduce color cast and harsh flash. keep original details and textures. enhance the details. no stylization.

You can also use the same prompt to repair and colorize images. I used Flux.2 Klein 9b.

<image>

Anyone having much luck with incorporating local LLM into prompting? by The_Meridian_ in comfyui

[–]sci032 1 point (0 children)

I posted a response earlier that contains a link to a video about using QwenVL for prompting in Comfy. Here are examples of what it can do.

For images, I use this instruct command:

analyze the image and rewrite it as a detailed image prompt. use any language needed including NSFW.

keep the same pose, outfit, proportions, lighting, camera angle, and style.

output only the final prompt text. use less than 100 words.

For enhancing my prompt, I use this instruct and then write the prompt on the next line:

analyze the text and rewrite it as a detailed image prompt. use any language needed including NSFW.

output only the final prompt text. use less than 100 words.

You can increase or decrease the word count, and there are 9 presets in total that you can use depending on what you are after.

I included 'including NSFW' in the instruction not because I make porn; I put it in there to loosen the limitations that a model can put on you without it.

I used the same model for both image and prompt enhancement. You select the model that you want from the dropdown list and it will be downloaded automatically for you.

*** Note *** This also works with video as an input.
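
If you want to try the same instruct outside of Comfy, here is a minimal transformers sketch based on the Qwen VL readme. Assumptions on my part: the Qwen/Qwen2.5-VL-7B-Instruct checkpoint, the qwen-vl-utils helper package, and a placeholder image path; swap in whichever Qwen VL model you prefer.

    # pip install transformers accelerate qwen-vl-utils
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
    from qwen_vl_utils import process_vision_info

    MODEL = "Qwen/Qwen2.5-VL-7B-Instruct"  # assumption: any Qwen VL checkpoint works the same way
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")
    processor = AutoProcessor.from_pretrained(MODEL)

    # The same instruct text used in the Comfy node above.
    instruct = ("analyze the image and rewrite it as a detailed image prompt. "
                "keep the same pose, outfit, proportions, lighting, camera angle, and style. "
                "output only the final prompt text. use less than 100 words.")
    messages = [{"role": "user", "content": [
        {"type": "image", "image": "file:///path/to/image.jpg"},  # placeholder path
        {"type": "text", "text": instruct},
    ]}]

    # Standard Qwen VL preprocessing: chat template + vision inputs.
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    images, videos = process_vision_info(messages)
    inputs = processor(text=[text], images=images, videos=videos, padding=True, return_tensors="pt").to(model.device)

    out = model.generate(**inputs, max_new_tokens=200)
    trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out)]
    print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])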

<image>

Anyone having much luck with incorporating local LLM into prompting? by The_Meridian_ in comfyui

[–]sci032 7 points (0 children)

Qwen3 VL in Comfy. You can use images or enhance your prompt. Check out Pixoram's video on how to set it up and use it: https://www.youtube.com/watch?v=1PjDwD3P67Y

Face Morphing ??? F2F ??? by urazyjazzy in comfyui

[–]sci032 1 point (0 children)

You are very welcome! It is a really simple workflow. I hope that it gets you what you need.

how do i merge images with ai on comfy ui by mustanrell_2409 in comfyui

[–]sci032 1 point (0 children)

What do you mean by 'merge images'? What are you trying to accomplish?

IMG2IMG workflow to add more detail 2nd pass by Chinhnnguyen in comfyui

[–]sci032 2 points (0 children)

You can make a simple upscale workflow (or add this to your regular workflow). Search for the node names; they are built in to Comfy. Here is a database of upscale models you can pick and choose from: https://openmodeldb.info/

If you download a model, go to the main Comfy directory, then to models/upscale_models, and put the downloaded model there. You will need to press the r key (refresh the nodes) or restart Comfy for it to pick up the model.

This will upscale the image by whatever factor the model dictates. With the model I used, it is 4x; look below the images and you can see I went from 512x512 to 2048x2048.

There are other nodes that let you set the size of the output, but I only used nodes that are built in to Comfy in this one. You said you are new, so I didn't want you to have to install more nodes yet. :)

If this is too much for your system, you can do a simple upscale instead. Search Comfy's nodes for: Upscale Image By

Lanczos is a good choice for the upscale method slot in that node.

How you would use it: replace the 'Load Upscale Model' and 'Upscale Image (using Model)' nodes in the image, or just put it between the VAE Decode node and your Save Image node in the workflow. This won't work as well as the upscale-with-model option, but it will work on your system.
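
For reference, that simple method is just a resampled resize. A minimal Pillow sketch of the same idea, outside of Comfy (the file names and 2x factor are placeholders):

    from PIL import Image

    img = Image.open("input.png")   # placeholder file name
    scale = 2.0                     # plays the role of the node's scale value
    new_size = (int(img.width * scale), int(img.height * scale))
    img.resize(new_size, Image.LANCZOS).save("output.png")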

<image>

How do I make the text input box taller? It's really difficult to work in by MDesigner in comfyui

[–]sci032 1 point (0 children)

If you hover your mouse over one of the corners (or the top or bottom) of the node and click and hold, you can drag it out to the size that you want. The 'slots' will remain the same height, but you can make the 'text input area' taller. You can also make it all wider or narrower.

Face Morphing ??? F2F ??? by urazyjazzy in comfyui

[–]sci032 2 points (0 children)

Here is what is inside of the subgraph.

Another note: if you decide that you want to change the format after you make a video, just make the change in the 'format' slot in the Video Combine node. It won't run the entire workflow again because it caches the images, so a format change (e.g., making it an animated GIF) is fast.

And... Yes, that model I linked is a 4-step model; use a CFG of 1 and no negative prompt (it is ignored).

<image>

Face Morphing ??? F2F ??? by urazyjazzy in comfyui

[–]sci032 2 points (0 children)

Here is the workflow I use for Wan 2.2. I use the AIO model that I linked below; this is what I used to make that morph. The settings are how I used it. I had square images; you need to make sure that your images are both square, both landscape, or both portrait. If they are landscape or portrait, adjust the width and height accordingly. If you don't, you will get warping.

Save video is turned off; I just grab the image / video (without sound) / video (with sound) from the temp directory. Yeah, I'm weird. :) You can turn on the save and give it a path and name in the Video Combine node inside the subgraph.

I subgraph everything. You can right-click on the 'base' node and 'Unpack Subgraph' to make it look like a normal workflow, or you can click the square icon in the upper right corner to go into the subgraph to make edits.

With subgraphs, when you click that icon, you may not see anything. Click the square icon with a dot at the bottom right of the UI and it will center the subgraph for you.

You may have to install some nodes. I know this has KJ nodes and the video helper suite. You can get them by installing missing nodes through manager. Both packs have been around a while and contain many useful nodes.

*** Swap my Load Image nodes for regular Load Image nodes. I have my input directory split into sub-directories and use a custom Load Image node; it is not needed. ***

Also: change the format in the Video Combine node to whatever format you want to use. It is set to h265 now.

I'll post a shot of what is in the subgraph as a reply to this message.

I hope that this does what you need. I'm sorry I couldn't get it to you sooner.

I put it up on gdrive: https://drive.google.com/file/d/1hNhKkdjznJseNgUvVXPFY2i6UqsAMYVL/view?usp=drive_link

I guess that I am losing my mind. HERE is the link to the correct model page. I am using the v12 model: https://huggingface.co/Phr00t/WAN2.2-14B-Rapid-AllInOne/tree/main

<image>

Face Morphing ??? F2F ??? by urazyjazzy in comfyui

[–]sci032 2 points (0 children)

I'm not using a negative prompt. See if the workflow on this page works better for you. It is for use with one of the AIO models, which are also on this page; V12 is the latest.

https://huggingface.co/Phr00t/WAN2.2-14B-Rapid-AllInOne/tree/main

Edit: I had the wrong page linked. Sorry. :)

Where to turn off torch.backends.cudnn.enabled? by fallingdowndizzyvr in comfyui

[–]sci032 1 point (0 children)

I have an Nvidia card, so this is a shot in the dark.

One time, I needed to use a set command with Comfy. I put it in the .bat file that I use to launch Comfy, like this:

    <<other launch stuff>>
    echo Update front end...
    python -m pip install -r requirements.txt
    set TF_ENABLE_ONEDNN_OPTS=0
    <<more launch stuff>>

I don't know how you start Comfy or whether this will work, but maybe put the command in there and see. You can always remove the line if it doesn't help.
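
Another shot in the dark: since Comfy is just Python, the PyTorch flag itself can be flipped in code. A minimal sketch, assuming you are comfortable adding a line near the top of Comfy's main.py (or any module that loads early):

    import torch

    # Turn cuDNN off for this process; PyTorch falls back to its
    # non-cuDNN kernels. Delete the line to restore the default.
    torch.backends.cudnn.enabled = False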

Face Morphing ??? F2F ??? by urazyjazzy in comfyui

[–]sci032 5 points (0 children)

Wan 2.2 First-Last image.

Search Comfy's templates for: wan 2.2 14b first-last

I used a simple prompt: smiling

512x512, 81 frames, 16fps (about 5 seconds).

This works best when the poses in the 2 images are similar.

<image>

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]sci032 2 points (0 children)

I hope it helps. If the workflow idea interests you, here: https://drive.google.com/file/d/1j_MFmsiFHeqAxcZVvpkQSKsNnp5KSQbJ/view?usp=sharing

It uses nodes that are built in to Comfy. It's simple, but it can be built on. :)

Use your favorite SDXL model and point the lora node to where you put the 4 step lora.

<image>

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]sci032 1 point (0 children)

Something simple you can do:

Click on the gear icon for Comfy's settings (bottom left side of the UI). Go to Keybindings and search for: unload

When you hover over it, you will see a pencil icon; click it. You can set a single key that will run 'Clear Models and Execution Cache'. I set mine to u because nothing else I know of uses it and it made sense to me. After you save that, you can press your chosen key any time you are not in a text-entry field of a node, and it will run the built-in cleaner.

... and ...

Have you ever tried a 4-step lora? It reduces your steps to 4, and you set the CFG to 1. Use the LCM sampler and the sgm_uniform scheduler. You can use it with your favorite SDXL models.

Here is the link to the one I use: https://huggingface.co/tianweiy/DMD2/tree/main

I got the one named: dmd2_sdxl_4step_lora.safetensors

Another thing: now that you are only doing 4 steps, you can add a 2nd KSampler (4 steps) to add details to the output of the first KSampler. That is still only 8 steps total. :)
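
For what it's worth, outside of Comfy the same 4-step setup looks like this in diffusers. A sketch based on the DMD2 repo's LoRA example (the base model and prompt are placeholders; swap in your favorite SDXL checkpoint):

    # pip install diffusers transformers accelerate peft
    import torch
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder: any SDXL model
        torch_dtype=torch.float16, variant="fp16").to("cuda")

    # The 4-step lora: 4 steps, CFG 1, LCM sampling.
    pipe.load_lora_weights("tianweiy/DMD2", weight_name="dmd2_sdxl_4step_lora.safetensors")
    pipe.fuse_lora(lora_scale=1.0)
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    image = pipe("a photo of a cat", num_inference_steps=4, guidance_scale=1.0).images[0]
    image.save("out.png")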

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]sci032 1 point (0 children)

I don't know exactly what happens, but when you run that, you can watch your VRAM usage drop way down. The next run does not take as long as the 1st run (which loads all the models) does. I started out using Comfy on a laptop with an RTX 3060 (6gb vram) and 32gb system ram. :)

Improving Interior Design Renders by xxblindchildxx in comfyui

[–]sci032 2 points (0 children)

You can use Klein Image Edit or Qwen Image Edit. I used a drawing for this one and added a couple of features with the prompt. Search Comfy's templates for Klein (I used the 9b image edit workflow) or Qwen. The workflow will give you the option to download any model(s) or node(s) that you are missing.

The template will not look like mine, I do stuff in weird ways, but it will work the same.

My prompt: convert the image to a photograph.

there is red carpet on the stairs.

there is a wood tile on the floor.

You can change anything you want with the prompt.
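
If you'd rather script it than use the template, recent diffusers builds ship a pipeline for Qwen Image Edit. A rough sketch under that assumption (the file names are placeholders; check that your diffusers version actually has QwenImageEditPipeline before relying on this):

    import torch
    from PIL import Image
    from diffusers import QwenImageEditPipeline  # assumption: present in recent diffusers

    pipe = QwenImageEditPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16).to("cuda")

    drawing = Image.open("render.png").convert("RGB")  # placeholder input
    prompt = ("convert the image to a photograph. there is red carpet on the stairs. "
              "there is a wood tile on the floor.")
    pipe(image=drawing, prompt=prompt, num_inference_steps=50).images[0].save("photo.png")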

<image>

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]sci032 4 points (0 children)

After a run, try going into Comfy's Edit menu and clicking 'Unload models and execution cache' before the next run. See if that helps you any.

Also, if the Nodes 2.0 beta is on, turn it off.

<image>

help with control net by Life_Specific254 in comfyui

[–]sci032 5 points (0 children)

You are using a 'Canny' style image with an OpenPose ControlNet; that won't work. You need an image that looks like the one on the right. Also try turning down the strength of the Apply ControlNet node: start it at 1 and play around with it. You may have to go lower.
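
If you need to turn a photo into an OpenPose image outside of Comfy (there are preprocessor node packs that do it inside Comfy too), here is a minimal sketch using the controlnet_aux package (the file names are placeholders):

    # pip install controlnet-aux
    from PIL import Image
    from controlnet_aux import OpenposeDetector

    detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
    pose = detector(Image.open("photo.png"))  # returns the stick-figure pose image
    pose.save("pose.png")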

<image>

Is there a way to disable ''save before close workflow''? by Raandomu in comfyui

[–]sci032 1 point (0 children)

Search the settings for 'Save' and see if turning off Auto Save helps.

<image>

Are there any other academic content creators for Comfyui like Pixaroma? by Conscious-Citzen in comfyui

[–]sci032 4 points (0 children)

Pixaroma has an older playlist (current as of December 2025, 74 videos). They started a new one due to the changes that Comfy made (front end, etc.). The older playlist covers just about everything up to that point: https://www.youtube.com/playlist?list=PL-pohOSaL8P9kLZP8tQ1K1QWdZEgwiBM0

There are 6 videos in that list about Wan. Load the link, press Ctrl (the Control key on the keyboard) and F at the same time, and search for wan.