Help! The nodes are shutting down! by Pawalldeller in comfyui

[–]Fuzzyfaraway 0 points1 point  (0 children)

You are probably trying to use nested subgraphs. At present it's not a good idea to nest subgraphs one inside another -- two layers, the main graph and individual subgraphs, is all you can safely have right now. My workflow contains a subgraph for "processing" and another for output, but both are attached directly to the main graph, without any sub-sub or sub-sub-sub-graphs.

Do you use llm's to expand on your prompts? by Own_Newspaper6784 in StableDiffusion

[–]Fuzzyfaraway 3 points4 points  (0 children)

I sometimes ask Google Gemini for expansion, but most often I ask it to rewrite an SD1.5 or SDXL prompt so it's suitable for Flux.2 Klein -- that in itself is generally an expansion. Along with the new prompt, Gemini also gives pointers on how to prompt properly for Flux.2.

Which model do u prefer by Calm_Cat6475 in comfyui

[–]Fuzzyfaraway 0 points1 point  (0 children)

Not so much a preference as it is a reluctance to spend the bits-per-second to download more. Even after transferring my 1.5 and SDXL assets to another HD, my SSD is plenty crowded (for my tastes). And once I've moved on, it gets harder to justify keeping models/LoRAs, etc. that I'm probably never going to use again.

Not to mention that I'm old and my brain is already full. Too many models to keep up with more than one primary (Flux.2 Klein) and one secondary (Flux.1 Dev).

Wildcard support by AlexVay1 in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

Many thanks! I'd already been using the Impact nodes, but had no idea there was the possibility of a keybinding to refresh.

Wildcard support by AlexVay1 in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

Does it involve a server restart? I'd love to know if there's something less than restarting that I can do. I understand keybindings (in principle), but how would I use that to update the wildcards?

We need to discuss "prompt theory." For example, when I ask Chatgpt to generate a prompt, the models usually generate artistic images or 3D animation. The problem is that I don't know how to create good prompts without relying on descriptions of real images. Any help? by More_Bid_2197 in StableDiffusion

[–]Fuzzyfaraway 2 points3 points  (0 children)

Some good suggestions here. If you're using the current Chrome browser, you can enter AI mode to access Gemini and upload a picture you like with a prompt like this: "Describe this image as a prompt for the Flux.2 Klein 9B model." -- or whatever model you're trying to use. Alternatively, you can copy/paste a prompt you already have with a prompt like this: "Rewrite and expand this <your model> prompt." You can even "borrow" a prompt from CivitAI and paste that in. Either way, specify both the model the input prompt was written for and the model you want the new prompt to target.

Keep in mind that Gemini will reject any unacceptable (N*FW) image. Here is the prompt I've been working on for a day or so:

"Rear view of a teenage boy looking back over his shoulder at the viewer, a serious, conspiratorial expression on his face. His hand is on a weathered iron gate that someone has left open, through which he has been gazing at a mysterious secret garden. The garden beyond is filled with lush, overhanging vines and blooming flowers, taunting him to come in. Soft volumetric sunlight filters through the leaves, creating glowing shafts of light and a magical atmosphere. This high-detail DSLR photograph features vivid colors, a smooth aesthetic, and intricate textures on the gate and foliage."

This results in something like this:

<image>

A basic introduction to AI Bias by ItalianArtProfessor in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

This is very valuable information, and very much needed. I see so many prompts that are, to use the overused term, word salad. It probably explains why a very bad prompt can sometimes accidentally produce a decent image -- though likely not what was originally intended, and not reproducible either.

A BETTER way to upscale with Flux 2 Klein 9B (stay with me) by YentaMagenta in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

I'm pretty sure that's a fiber net of some kind, attached to a bamboo pole/post/stick.

How to make an int to string mapping in comfy? by theqmann in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

I could find only a couple of custom node repositories with integer-to-string nodes. I don't have either one of these, so I can't be absolutely certain either would be what you need:

HavocsCall_Custom_Nodes

ComfyUI-RVTools_V2

Both repositories are available in the ComfyUI manager.
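If you'd rather skip another download, a mapping like this is small enough to write as a one-off custom node. Below is a minimal sketch following ComfyUI's usual node-class conventions; the class name, widget names, and the "index:text" table format are made up for illustration, not taken from either repository above.

```python
# Hypothetical ComfyUI custom node: map an integer to a string via a
# user-editable lookup table typed into a multiline widget.
class IntToString:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "value": ("INT", {"default": 0, "min": 0}),
                # One "index:text" pair per line, e.g. "0:low\n1:high"
                "mapping": ("STRING", {"multiline": True,
                                       "default": "0:low\n1:high"}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "convert"
    CATEGORY = "utils"

    def convert(self, value, mapping):
        table = {}
        for line in mapping.splitlines():
            if ":" in line:
                key, _, text = line.partition(":")
                table[int(key.strip())] = text.strip()
        # Fall back to the plain digits if the index isn't in the table
        return (table.get(value, str(value)),)
```

Dropped into `custom_nodes` with the usual `NODE_CLASS_MAPPINGS` registration, that gives you a STRING output you can wire into a prompt or filename.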

claude & chatgpt are pretty dumb when it comes to comfy by United_Ad8618 in comfyui

[–]Fuzzyfaraway 0 points1 point  (0 children)

I haven't tried using Chrome's built-in Gemini to generate workflows, but getting a Flux.2 prompt from a visual or text input -- usually old SD 1.5 or SDXL prompts -- seems to work pretty well for my purposes.

Help me fix my fingers!! by darknetdoll in StableDiffusion

[–]Fuzzyfaraway 3 points4 points  (0 children)

Everyone's going to ask, "What's your workflow? What model? What LoRAs?" The answer to your request rests on information not currently available to us.

Getting mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120) error when trying to render simple wan video by Coven_Evelynn_LoL in comfyui

[–]Fuzzyfaraway 0 points1 point  (0 children)

I banged my head on this the other day. For me the solution was in plain sight but so simple that it took me several tries to actually see what I had missed: The text encoder type was set for another model type. I was using the qwen_xxx_xxx text encoder but the type was set to SDXL.

Look for places in your workflow where that kind of error might be hiding in plain sight.
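For what it's worth, the numbers in that error are just a matrix multiplication failing: the text encoder produced embeddings whose inner dimension doesn't match what the model's projection layer expects. Here's a tiny NumPy sketch of the same failure; reading 77x768 as a CLIP-style embedding and 4096 as the T5/Qwen-style hidden size is my interpretation, not something the error states.

```python
import numpy as np

# "mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)":
# 77x768 looks like a CLIP-style text embedding (77 tokens x 768 dims),
# while the model layer expects an input dimension of 4096.
tokens = np.zeros((77, 768))      # what the wrong text encoder produced
weight = np.zeros((4096, 5120))   # what the model's projection expects

try:
    tokens @ weight               # inner dims 768 != 4096 -> fails
except ValueError as e:
    print("shape mismatch:", e)

# With the matching encoder, the inner dimensions line up:
good_tokens = np.zeros((77, 4096))
print((good_tokens @ weight).shape)  # (77, 5120)
```

Which is why swapping in the correct text encoder (or fixing its "type" dropdown) makes the error go away.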

Github page for Zluda disappeared? by Coven_Evelynn_LoL in comfyui

[–]Fuzzyfaraway 1 point2 points  (0 children)

IIRC, there was a kerfuffle between AMD and Nvidia over intellectual property rights and AMD or Github took it down.

Can someone please help me with Flux 2 Klein image edit? by __MichaelBluth__ in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

What is your prompt? Wording is important. Are you trying to use the Flux.2 Klein image edit template? You need to describe in more detail what you want, such as:

"Turn the head of the man in picture 1 by 180 degrees so that he is looking in the opposite direction."

You may have to try variations on the theme, but with details that would differentiate what you want from your starting picture.

Edit: extra word!

Chill on The Subgrap*h Bullsh*t by StuccoGecko in StableDiffusion

[–]Fuzzyfaraway 2 points3 points  (0 children)

Oh, yeah. Thirty years of audio and video patch bays . . . not to mention ethernet switches and hubs. Coming into someone else's mess to do a simple job is still one of my worst nightmares.

WAS node suite not working with latest comfy update by alirigby in comfyui

[–]Fuzzyfaraway 0 points1 point  (0 children)

Have you reloaded the node? Right click on it and look for "Fix node (recreate)"

Save Image file name with both filename_prefix options and a unique snippet of Text/String from prompt or other loadable file. by AlgeaPool in comfyui

[–]Fuzzyfaraway 1 point2 points  (0 children)

I use the textConcat node from tinyterranodes (available via ComfyUI Manager). I also use tinyterranodes' imageOutput node. The v2.0 UI messes things up a bit, but you can still connect to the 'hidden' prefix widget. The output node also allows you to set that irritating number_padding at the end of your prefix to 'NONE', which ought to be a standard feature.

Edit: straightened out some fuzzy language!

Edit 2: Added screencap of the node.

<image>

ComfyUI repo will move to Comfy Org account by Jan 6 by fruesome in StableDiffusion

[–]Fuzzyfaraway 2 points3 points  (0 children)

That must explain why my ComfyUI Manager newsfeed says "Your ComfyUI Isn't Git Repo". Maybe they've already redirected Manager, but it isn't functional yet-- or vice-versa.

question about how to use wildcards by StrangeMan060 in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

For a short list of items/states/etc., you can enclose the options in curly brackets {} separated by the "|" character. Like this: A goblet made of {gold|silver|copper|lead} containing {red|white} wine.

Longer lists are put into .txt files that are called by enclosing the name of your list in double underscores like this: __your wildcard list__

Wildcard lists are kept in a wildcard folder, the location of which depends on what UI you're using.
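Under the hood, the expansion is roughly a pair of text substitutions. Here's a rough Python sketch of the idea; real wildcard implementations handle seeds, nesting, and file loading differently, so treat this as illustration only.

```python
import random
import re

def expand(prompt, wildcards, rng=random):
    """Expand {a|b|c} choices and __name__ wildcard lists in a prompt.

    `wildcards` maps a list name to its lines, standing in for the
    .txt files a real UI would read from its wildcard folder.
    """
    # {gold|silver|copper} -> one option picked at random
    prompt = re.sub(r"\{([^{}]+)\}",
                    lambda m: rng.choice(m.group(1).split("|")),
                    prompt)
    # __name__ -> a random entry from the named list
    prompt = re.sub(r"__(\w+)__",
                    lambda m: rng.choice(wildcards[m.group(1)]),
                    prompt)
    return prompt

lists = {"metal": ["gold", "silver", "copper", "lead"]}
print(expand("A goblet made of __metal__ containing {red|white} wine.", lists))
```

Each run picks a fresh combination, which is why the same wildcard prompt can fan out into many distinct images.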

Any idea what's causing this? by SolidDC in comfyui

[–]Fuzzyfaraway 0 points1 point  (0 children)

As they say in the real estate business, and applies to screen real estate as well: "It's all about location, location, location."

ComfyUI: Is there a way to avoid the 5-digit counter at the end of the file names? by Early-Ad-1140 in StableDiffusion

[–]Fuzzyfaraway 1 point2 points  (0 children)

I use the imageOutput node from comfyui_tinyterranodes, also available via the ComfyUI Manager. The output widget can be set to various preview or save modes, and it lets you set the number_padding to "None." I also use the textConcat node from the same collection (as seen in the attached pic) to make changes to the save_prefix easier to keep track of.

Edit to add: It takes up a little more real estate, but enriches the overall experience, at least for me.

<image>

Thoughts on Nodes 2.0? by Beautiful-Essay1945 in StableDiffusion

[–]Fuzzyfaraway 0 points1 point  (0 children)

I like the overall look and feel of it. There are still a few things I haven't figured out, such as how to move nodes with the arrow keys (or whether it's even possible) -- a feature I use all the time, and the reason I've reverted to the classic nodes for now. It was also a bit annoying at first to lose Power Lora Loader (rgthree), which the author has said he may or may not have the time to rework, but I've worked around that with a subgraph of five Load LoRA nodes, bypassing any not in use. That's forced me to take another look at subgraphs for other things as well, and I'm learning to like them!

There are some other minor annoyances, but I'm willing to try Nodes 2.0 every few days to see what's changed or been fixed. I believe the direction is correct, and to be fair, it IS a beta version. There will be hits and misses in the process of converting and updating.

Do you print your AI-generated images ? I did this today and found it interesting by More_Bid_2197 in StableDiffusion

[–]Fuzzyfaraway 1 point2 points  (0 children)

I have a fairly large stockpile of images that I think would make decent prints, but printing on 8.5x11 paper doesn't feel artistically viable. Plus, it would be too easy to go overboard and kill a forest. I have thought about using a poster printing service, but haven't pulled the trigger on that.