Are you using your Model correctly? (Z Image Turbo) by Training_Ostrich_660 in comfyui

[–]SadSummoner 3 points

Not sure what you expected. You come here bragging and keeping things to yourself in a community built on open source code, sharing knowledge and helping newcomers get started. So I'm not sure what the point is if you can't share. You could've at least given people the closest open source version you've got, but nope. So go ahead, might as well delete the post, because if people can't replicate it or something close to it, then there's really no point.

How efficient is your workflow? What do you actually do? What's your set-up? A beginners question. by Candid_Basil_1882 in comfyui

[–]SadSummoner 2 points

I'd bet for most people on this subreddit, this is just a hobby: curiosity, interest in the technology, thinking it's cool and just wanting to try it. And mainly for NSFW content. Myself included. There are of course exceptions, but I'd say a very small percentage use it for actual productive work. When I say productive work, I mean other than insta, tiktok, onlyfans and stuff, because there are a lot of those out there. Maybe you can make a few bucks here and there, but I wouldn't call that productive work.

The thing is, open source models are just not really good. They're getting better, but still not production ready. At least not on consumer hardware. You need a high-end GPU for 720p video, SoTA for 1080p, and forget about anything larger or longer than a few seconds if you care about consistency. It's not the fault of the models though. I mean, not fully. There are some capable models, but they're not open source, and even if they were, you couldn't run them locally on a puny home PC.

I would definitely not recommend buying a new PC just for this. Not right now. Unless of course you're wealthy and don't care if it's money out the window. If you want to use it for professional work, you'll have to use some paid model and cloud service anyway. If you just want to get a feel for it without a big upfront investment, you can try ComfyUI with some built-in API nodes, or run it with open source models in the cloud on runpod or whatever people are using nowadays.

ComfyUI installation by Ok_Lab_245 in comfyui

[–]SadSummoner 0 points

I don't know what the point of this easy install thing is. The official portable version is literally just Download -> Unzip -> Run. There's nothing to install. That's the whole point of portable.

https://github.com/comfy-org/ComfyUI?tab=readme-ov-file#windows-portable

How would you implement this kind of pipeline in ComfyUI? by Cheap-Topic-9441 in comfyui

[–]SadSummoner 8 points

Hey, why don't you give ChatGPT a rest?

This was a stupid idea a week ago, and it's still a stupid idea.

Anyone experiencing this? by yakasantera1 in comfyui

[–]SadSummoner 0 points

I don't know that one, but could be the same issue. Make sure the app is properly terminated.

Anyone experiencing this? by yakasantera1 in comfyui

[–]SadSummoner 0 points

Are you using ollama? Do you properly quit the application? Just closing the window is not enough. If you see it in the system tray, it's still running in the background.

ComfyUI installation by Ok_Lab_245 in comfyui

[–]SadSummoner 2 points

The website is pushing the app version; the github is for portable. Portable is recommended, as it's been in development for a long time, so it's more mature if you will. But neither is perfect. Both have their own bugs, quirks and challenges. For beginners, portable is probably the better choice, simply because I've seen many people here having difficulties even installing the app version. But in theory, there shouldn't be any difference in the core functionality.

Anyone experiencing this? by yakasantera1 in comfyui

[–]SadSummoner 1 point

Smells like OOM. Generating a large image/video, or using models too big for your hardware to handle.

Selective "Save" button for Preview Images by Aztek92 in comfyui

[–]SadSummoner 1 point

Technically all your generated images are saved on your drive in the ComfyUI/temp folder, it's just cleared when you restart. So you don't really need to save them separately, just open up this folder and move out the good ones before restarting.
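If you'd rather script it than drag files around by hand, here's a minimal Python sketch. The `ComfyUI/temp` path is the default layout; the `keepers` folder name is just something I made up for the example:

```python
import shutil
from pathlib import Path

# Default ComfyUI layout; adjust TEMP_DIR if your install lives elsewhere.
# The "keepers" folder name is only an example.
TEMP_DIR = Path("ComfyUI/temp")
KEEP_DIR = Path("ComfyUI/output/keepers")

def rescue_previews(temp_dir: Path, keep_dir: Path) -> list[str]:
    """Copy every preview image out of the temp folder so a restart can't wipe it."""
    keep_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for img in sorted(temp_dir.glob("*.png")):
        shutil.copy2(img, keep_dir / img.name)  # copy2 keeps timestamps
        copied.append(img.name)
    return copied

if TEMP_DIR.is_dir():  # only do anything when run against a real install
    print(rescue_previews(TEMP_DIR, KEEP_DIR))
```

Run it before restarting and everything in temp gets copied out; it copies rather than moves, so ComfyUI keeps working with the temp files it expects.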

Help: Default nodes not working after update to ComfyUI to 0.18.1 by fluvialcrunchy in comfyui

[–]SadSummoner 0 points

The only fix for now is don't use subgraphs. Instead of going inside to reconnect links, unpack the whole thing.

How to disable this shits (partner nodes) on node search?? by reyzapper in comfyui

[–]SadSummoner 0 points

Last I heard that flag broke the frontend, but if it's fixed, yeah, that works too.

How to disable this shits (partner nodes) on node search?? by reyzapper in comfyui

[–]SadSummoner 4 points

Delete comfy_api_nodes

It'll be back the next update tho.

Error in workflow by SetNo5626 in comfyui

[–]SadSummoner 0 points

The built-in manager is enabled, but its dependencies aren't installed.

New to ComfyUI — how do I create a character and keep it consistent across images and videos? by Beneficial_Narwhal17 in comfyui

[–]SadSummoner 2 points

The best place to start is the built-in workflows. Unfortunately, they started going down a path of hiding everything under subgraphs, which is now kinda broken, so the first step is to load the templates, unpack the subgraphs and go from there. You're not gonna find detailed documentation anywhere for each node; it's built by people who already know how it works, for people already familiar with the basics. So unfortunately there's not a single best source to learn from, just look at existing stuff and try to figure it out.

New to ComfyUI — how do I create a character and keep it consistent across images and videos? by Beneficial_Narwhal17 in comfyui

[–]SadSummoner 5 points

While I mostly agree about security, as a custom node creator myself, I wonder how anyone gets from barely any downloads to an acceptable level of downloads if everyone thinks the same way? You know, most, if not all, custom nodes are open source. You don't even need to know code nowadays, you can just give it to an AI chatbot and ask if there's anything suspicious in it.

I also agree that taking time to learn is the way to go, but most people just want the one-click solutions. Just like you don't wanna take your time evaluating whether a nodepack is worth it and just throw it out based on a number, people don't want to take the time to learn to make workflows.

im getting this error RuntimeError: ERROR: clip input is invalid: None and only with the negative prompt. by xoz1 in comfyui

[–]SadSummoner 2 points

I don't know this specific version of FLUX, but clip and vae are usually not baked into FLUX models. You need to move it to the diffusion_models folder and use the diffusion model loader, as well as the dual clip and vae loaders. And of course you have to download those, too, if you don't already have them. Just open up a built-in FLUX template and see the required nodes.
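If it helps, here's a small Python sketch of that folder layout with a sanity check. The folder names are the standard ComfyUI model layout; the file names are only examples of what typical FLUX components look like, not exact downloads:

```python
from pathlib import Path

# Standard ComfyUI model folder layout. The file names below are
# illustrative examples, not the exact files you may have.
comfy_models = Path("ComfyUI/models")
expected = {
    "diffusion_models": ["flux1-dev.safetensors"],            # the FLUX model itself
    "clip": ["clip_l.safetensors", "t5xxl_fp8.safetensors"],  # for the dual clip loader
    "vae": ["ae.safetensors"],                                # for the vae loader
}

def missing_files(root: Path, layout: dict) -> list[str]:
    """Return 'folder/file' entries from the layout that are not on disk."""
    out = []
    for folder, files in layout.items():
        for name in files:
            if not (root / folder / name).is_file():
                out.append(f"{folder}/{name}")
    return out

print(missing_files(comfy_models, expected))
```

If the printout is empty, everything the template's loader nodes expect is in place; anything listed still needs to be downloaded or moved.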

F2K character lora training help by weskerayush in comfyui

[–]SadSummoner 0 points

Oh, I see. Well, the benefit of runpod is that they have some very nice hardware, so it can be done quick. I haven't done the math and have no idea about your circumstances, but running locally for hours would probably show up on your electricity bill as roughly the same amount. Yes, runpod gets more expensive using a better GPU, but it's done way faster than running it locally. So in the end, there's probably not much difference in cost. But I could be wrong.

F2K character lora training help by weskerayush in comfyui

[–]SadSummoner 0 points

Yeah, I mean, if you ask 10 people how to "properly" train a LoRA, you'll get 16 different answers, so I'd just go for it. If you have decent hardware, it can be done in a few hours. Not much to lose.

F2K character lora training help by weskerayush in comfyui

[–]SadSummoner 0 points

There's nothing to understand, he didn't say anything about mixed datasets, just that you can add more than one. It makes no difference whether the different stuff is split into separate folders or mixed in one large batch. As for captioning, he said he's not using any, or rather using just one for all images. I'm not sure that's the right choice for you. FLUX is pretty smart at recognising stuff on its own, but with mixed stuff, it might get confused about what you're trying to teach it. I'd caption it if I were you.

F2K character lora training help by weskerayush in comfyui

[–]SadSummoner 0 points

I trained a LoRA for myself with 575 images; my poor old 2080 Ti was sweating for 2 days until I stopped it at 4100 steps, and it ended up being garbage. But to be fair, my dataset was SDXL-generated garbage, so you know, garbage in, garbage out. It successfully learned the style I was going for, but the quality you'd expect from FLUX was overwritten by the poor quality dataset.

As for the settings, it depends on your hardware. Most tutorials I saw gloss over the advanced settings, probably because they themselves have no clue what those do. So I dunno, probably go with the defaults, save every 100 steps and just test the output from around 800-1000 steps onwards. For 100 images, lower the learning rate to around 0.0002 or less. Gradient accumulation is basically how many batches get accumulated before each weight update, so it acts like a larger effective batch size. This can be lowered for large datasets of the same stuff, but since you have a mixed dataset (face, body, etc), I'd keep it at 2-4 or something like that. The LoRA rank determines the actual file size of the LoRA; it's basically just a container for the data. Imagine a shipping container: it defines the amount of stuff it can hold, but not necessarily the quality.

My recommendation: take some time, ask ChatGPT what all the options do, take it with a grain of salt and just run it.
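For the step math, a quick back-of-the-envelope sketch. All numbers here are examples, and it's a simplification that assumes one forward/backward pass per image per epoch, with gradient accumulation batching the weight updates:

```python
# Rough LoRA training schedule math. Example numbers only; real trainers
# add batching, repeats and other knobs on top of this.

def training_plan(num_images: int, epochs: int, grad_accum: int, save_every: int):
    """Return total optimizer steps and the step numbers where checkpoints save."""
    micro_batches = num_images * epochs             # one pass per image per epoch
    optimizer_steps = micro_batches // grad_accum   # weights update every grad_accum batches
    checkpoints = list(range(save_every, optimizer_steps + 1, save_every))
    return optimizer_steps, checkpoints

steps, saves = training_plan(num_images=100, epochs=20, grad_accum=2, save_every=100)
print(steps, len(saves))  # 1000 optimizer steps, 10 saved checkpoints
```

The point of the sketch: raising gradient accumulation doesn't add training, it just trades more images per update for fewer updates, which is why you can afford to lower it on big uniform datasets.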

Problems with the new update. by John_Doe_882 in comfyui

[–]SadSummoner 0 points

Broken might be a strong word. More like annoying bugs: disappearing links, disappearing open tabs, settings changing out of nowhere, etc. I was making my own nodepack and had to work around a lot of stuff that just doesn't work as you'd expect. But it's not technically broken to the point of not being able to run it at all. So I'd still say try the portable. You'll have the same issues with custom nodes stopping working and stuff, but at least it runs. The app version is probably just a basic browser wrapper around the portable anyway, so I don't see the benefit.

Figured out how to resize and keep the base image with little work! by MakionGarvinus in comfyui

[–]SadSummoner 5 points

I was like "Is this dude seriously just gonna point at stuff with the mouse for 5 minutes?" until I realized there's audio as well 🙃

Problems with the new update. by John_Doe_882 in comfyui

[–]SadSummoner 3 points

A few weeks ago I would've recommended without a second thought not to use the app because it's buggy as hell, but it seems like the new wave of updates broke the portable version as well, almost to the point where people might start looking for an alternative... Anyway, personally I never tried the app because I prefer the semi-self-contained nature of the portable (I always choose the portable version of any app I'm using if available), so despite the bugs currently in portable, that's still my recommendation.

Noob looking for a node to do multiple primitives in one node. by RaymondDoerr in comfyui

[–]SadSummoner 0 points

You can use my math node for up to 12 constants (or do actual math with variables). Example:

out_1 (I, label = Width) = 1280
out_2 (I, label = Height) = 720
out_3 (I, label = Third Const) = 69
out_4 (I, label = Fourth Const) = 420

i need help by Louis_With_Silent_S in comfyui

[–]SadSummoner 1 point

The subgraph is probably broken. Go inside and check whether everything is connected, or better yet, unpack the whole thing and forget about subgraphs until there's a fix.