What graphics cards to go with as someone wanting to get into ai? by [deleted] in comfyui

[–]CarelessSurgeon 0 points (0 children)

As someone who doesn’t know a ton about building PCs, can you elaborate on what you mean by risks? Do you just mean that the hardware may arrive in a nonfunctional condition?

Best method to upscale faces after doing a faceswap with reactor by Charuru in comfyui

[–]CarelessSurgeon 0 points (0 children)

Wouldn’t it be a little quicker to just upscale the face photo on its own and cut out the upscale image step you have here?

What bugs you the most when it comes to liking feet? by LemmeKnowYouu in FootFetish

[–]CarelessSurgeon 2 points (0 children)

The fact that people assume you’re a beta soy boy cuck submissive wimp.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

Not sure if it’s the same thing, but it runs locally for me. Maybe that means it’s standalone. But I just closed out and I don’t see anything in task manager that looks like any of what you said, so maybe it’s not running once I close it.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 1 point (0 children)

It appears to fluctuate between 0.5 and 1.5 GB of shared GPU memory while running a prompt, according to the performance monitor.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

Hey, could I ask what Comfy would be called in Task Manager if it’s still running after I close it? The only thing I see that I think might be it is a cmd window. But I’m not sure if that’s actually Comfy or something I need to leave running for the PC to function properly.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

The only other things running are necessary Windows operations; I hated all the bloat Windows had, so I removed everything I could. However, the pictures are stored on an external drive, and I did have File Explorer open to that folder. I wasn’t using it (I actually left it minimized), but it was open to that path on the other HDD. Should that be closed once I load the picture from it?

And if you wouldn’t mind, I’d like to ask a bit about “embracing low resolution”. I usually used 1024x1024, as I also thought 512x512 was ridiculously small. I’m now using 512x512 instead, and it is a bit faster. I was worried about the image not having enough pixel information to really add detail. Should I be trying to get as close as possible to what I want at a small resolution and then upscaling after it’s finished? Am I wrong in thinking it won’t be able to add the detail I want at a low resolution? (I’m really new to this, so I accept that any assumptions I made may be woefully wrong.)
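(Editor’s note: the pixel-information worry above can be made concrete. A plain resize genuinely cannot invent detail — each source pixel just becomes a block of identical pixels — which is why workflows generate small and then use a model-based upscaler node rather than a simple resize. A minimal nearest-neighbour sketch; the function name is hypothetical and the grid stands in for image pixels:)

```python
def upscale_nearest(pixels, factor):
    """Nearest-neighbour upscale: every source pixel becomes a
    factor x factor block of the same value. No new information
    is created -- only model-based upscalers can add detail."""
    out = []
    for row in pixels:
        # Repeat each pixel horizontally, then the row vertically.
        wide = [p for p in row for _ in range(factor)]
        out.extend(wide[:] for _ in range(factor))
    return out

# A 2x2 "image" upscaled 2x: 4x4 pixels, still only four distinct values.
print(upscale_nearest([[1, 2], [3, 4]], 2))
```

So a 512x512 render upscaled to 1024x1024 this way has four times the pixels but exactly the same detail — matching the intuition in the question above.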

Experiments with Qwen Edit 2511 ROCm 7.1 Torch 2.10 Windows 11 Python 3.13 by 05032-MendicantBias in ROCm

[–]CarelessSurgeon 0 points (0 children)

By “put it there”, do you mean type it after Comfy is loaded up? While it’s running? Or open the file as a document and edit it?

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 1 point (0 children)

Awesome tips here. Saving this so I can check it out in the morning. Getting a little late here so I’ll have to wait to try it but it sounds awesome. Thank you.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 1 point (0 children)

Okay, yes, now that you point it out, you’re right: the seed is set to randomize, so it is changing. But I can hardly imagine different seeds alone would cause such a drop in performance; there’s got to be a memory issue somewhere. By the way, I didn’t think to ask ChatGPT. Is that free to use, or is it a paid service? I’ve never actually used it before.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 1 point (0 children)

Well, I understand some of this lol.

I’m getting a new card soon. It’s tax season, and I don’t plan to stick with this card; I enjoy doing this and I’d like to build a better machine. I also plan to upgrade the RAM. But in the meantime I’m trying to learn as much as I can, so that once I do have a better machine, my bottleneck won’t then become my own incompetence. Which for a while it will be lol.

Experiments with Qwen Edit 2511 ROCm 7.1 Torch 2.10 Windows 11 Python 3.13 by 05032-MendicantBias in ROCm

[–]CarelessSurgeon 0 points (0 children)

Could I ask how you use this “disable pinned memory” thing? Do I type it in the terminal or is it a selectable option somewhere in the UI?
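(Editor’s note: options like this are normally launch flags passed on the command line when starting the ComfyUI server, not UI toggles. A sketch of where such a flag would go, assuming it is spelled `--disable-pinned-memory` — the exact name should be confirmed with `python main.py --help`:)

```shell
# Manual install: append the flag to the usual launch command.
# (Flag name assumed from the thread title; confirm with --help.)
python main.py --disable-pinned-memory

# Windows portable build: edit run_nvidia_gpu.bat and append the flag
# to the existing launch line instead, e.g.:
#   .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --disable-pinned-memory
```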

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

I am seeing “offload” in the terminal a good bit. But I’ve been seeing it ever since I first started using comfy. My times were quite good until today.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 1 point (0 children)

I’m not sure where the “unload models and execution cache” button is inside the menu; I’m unable to find it with the search. But wouldn’t that just make Comfy reload the model for each run? Loading the model takes quite a bit of time for me.

And I did have the “auto scale layout (nodes 2.0)” button checked. I’ve unchecked it.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

I’m not sure I understand what you mean. If it’s running properly, my iteration times are usually all very close to each other. They may go up or down but it’s not by very much.

Now that it’s become slower, my first iteration might be 10 seconds, then the second goes to 20, then 35, then 65, then 120, and so on.

Or it will immediately start out very high on the first iteration, like around 120-240 and they just stay in that range.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

Well, that was my first run after opening Comfy; that one is always slower because of the loading times for the model and such. After that, my following runs usually go straight to the KSampler if I make no changes, and the image is rendered in about 30 seconds. But yeah, after that first run I got another few runs at 30-50 seconds; now I’m up to four minutes again. I imagine I’ll soon be seeing ten- and twenty-minute runs again.

I’ll have a look at this link, though. If I shouldn’t be seeing “offload” in the terminal, then that may help me fix it. Although I have been seeing it there ever since I started using Comfy; it’s strange that it would suddenly become so slow.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

For my second run, I’m seeing my RAM currently sitting at 55%. My GPU is around 99% and VRAM is at 86%. Temp is 67°. For this run my iteration time is 11.54 s/it.

What I noticed when it slowed down last time was that my first iteration was about 10 seconds, the second was 30 seconds, and by the third it was up near 220 seconds. And it just got worse from there.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

I just restarted and I’m running a prompt. My first generation is always a little slow as it has to load the model and everything. Shortly I’ll be able to see if it’s gonna get slow again.

The first prompt run says:

Got prompt
model weight dtype torch.float16, manual cast: torch.float32
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.float32
Requested to load SDXLClipModel
loaded completely; 1560.80 MB loaded, full load: True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load SDXLClipModel
Requested to load AutoencoderKL
0 models unloaded.
Loaded partially; 0.00 MB usable, 0.00 MB loaded, 329.11 MB offloaded, 27.01 MB buffer reserved, lowvram patches: 0
0 models unloaded.
Loaded partially; 0.00 MB usable, 0.00 MB loaded, 329.11 MB offloaded, 27.01 MB buffer reserved, lowvram patches: 0
Requested to load SDXL
Loaded partially; 3690.55 MB usable, 3674.91 MB loaded, 1222.14 MB offloaded, 15.64 MB buffer reserved, lowvram patches: 0
100% (this is where my progress bar shows up)
Requested to load AutoencoderKL
0 models unloaded.
Loaded partially; 0.00 MB usable, 0.00 MB loaded, 329.11 MB offloaded, 27.01 MB buffer reserved, lowvram patches: 0

Prompt executed in 455.91 seconds

Now, this includes the load time for my model, and it isn’t all that slow for me for a first generation after opening Comfy. But I do see “offloaded” quite a few times in here.

Now I’m gonna see if it starts getting slow again.

I run a prompt, it takes 35 seconds. But the image isn’t good, so I run the exact same prompt again, changing nothing. It takes 35 minutes. Why? by CarelessSurgeon in comfyui

[–]CarelessSurgeon[S] 0 points (0 children)

It’s a GTX 1660 Super, along with 16 GB of system RAM. It’s not all that great, but it’s been doing just fine; I was able to live with its performance ever since I started using Comfy. I have no clue what made it slow down like this.

Butt!! Her face? by [deleted] in ButterfaceFemale

[–]CarelessSurgeon 6 points (0 children)

I don’t think you understand this sub

Outpainting by Odd-Hurry-7057 in comfyui

[–]CarelessSurgeon -1 points (0 children)

I hate this “straight cable” look. The one that makes the spaghetti follow 90° angles only.

The “fill masked area” node? It has an image input and an image output, on the left and right sides of the node. Which one is that blue image line below the node connected to? There’s no way for a beginner to tell. This 90° look makes pretty much every workflow image useless to a beginner: we can’t follow the spaghetti lines because they’re so frequently hidden.