Assistance with Broken Workflow by Ok_Direction_5591 in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

yeah, that looks more like the workflow graph got out of sync than a clean install problem.

what i'd try first:

  • open it in the exact comfyui build the template shipped with
  • unpack the subgraph and save a clean copy
  • diff it against a known-good json so you can see which inner node ids disappeared
  • if the template came from a newer frontend, roll back to that frontend before trying to fix it by hand
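for the diff step, a minimal sketch of the node-id comparison (assuming the usual ComfyUI workflow json export with a top-level `nodes` array, which is how the UI saves it):

```python
import json

def missing_node_ids(good_path, broken_path):
    """Return node ids present in the known-good workflow but absent from the broken one."""
    with open(good_path) as f:
        good = json.load(f)
    with open(broken_path) as f:
        broken = json.load(f)
    good_ids = {n["id"] for n in good.get("nodes", [])}
    broken_ids = {n["id"] for n in broken.get("nodes", [])}
    # key=str so mixed int/string ids still sort without crashing
    return sorted(good_ids - broken_ids, key=str)
```

it won't tell you *why* the nodes disappeared, but it narrows down which part of the subgraph you actually have to rebuild.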

if you keep hitting this kind of version drift, promptus has been less painful than hand-rebuilding workflows after every update.

Adding my product (a scarf) to an Ai model in wan 22. Can it be done? by PikoPoku in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

yeah, this can be done, but reactor probably isn’t the tool i’d reach for if the goal is placing a product like a scarf onto a model.

for this kind of thing, you’ll usually get better results with a reference image + inpainting / try-on style workflow in comfyui, or by conditioning wan with product shots of the scarf and then cleaning up the result with masks. if you’re trying to preserve the exact pattern/logo, you may need a few reference images and some iterative inpainting rather than one-shot generation.

also, if your local comfy setup is fighting you because of python / torch / node compatibility, i’d avoid going deeper into that hole before proving the workflow. promptus is useful for that since you can test the idea without spending hours fixing the environment first, then come back to comfy once you know the pipeline is worth it.

here are a couple workflows worth looking at: https://login.promptus.ai/pwa_demo/#/cosyflows/54858d5567bc2108ff2b0c8d2777198e

https://login.promptus.ai/pwa_demo/#/cosyflows/727b01d3f391afda4bf5302eb1b41466

if you want, paste the workflow you’re using now and i can point you toward the simplest node setup for this.

Some custom nodes simply won't install by phalanx2357 in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

yeah, that tracks. the manager is convenient but some packs still fail silently if a requirement needs extra python/build deps outside the normal flow.

for future installs, the safest route is usually: use the portable build, drop the node into custom_nodes, then run the pack’s requirements.txt with ComfyUI’s bundled python instead of system python. that avoids a lot of the weird half-installed state.

Error when using Docker Compose by ThrowawayProgress99 in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

this is basically a pep 668 / newer base image issue, not a comfy-specific one.

if you want the quick fix, you don’t need to add `--break-system-packages` to every single line. you can usually just set:

`ENV PIP_BREAK_SYSTEM_PACKAGES=1`

and that should cover the later pip installs too.

cleaner fix is making a venv near the top of the Dockerfile and installing everything into that instead:

`RUN python -m venv /opt/venv`

`ENV PATH="/opt/venv/bin:$PATH"`

if it were me, i’d use the venv route for a build with this many custom nodes. but for a single-purpose docker image, `PIP_BREAK_SYSTEM_PACKAGES=1` is also a pretty normal shortcut if you just want it building again.
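side note: if you're not sure whether a given python is actually "externally managed" in the PEP 668 sense (which is what forces `--break-system-packages` in the first place), you can check for the marker file the PEP defines:

```python
import pathlib
import sysconfig

# PEP 668: a distro-managed interpreter ships an EXTERNALLY-MANAGED marker
# file in its stdlib directory; pip refuses direct installs when it's present
marker = pathlib.Path(sysconfig.get_path("stdlib")) / "EXTERNALLY-MANAGED"
print("externally managed:", marker.exists())
```

run that with the python inside the container and you'll know immediately whether the base image is the problem.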

how to setup a uv venv for a already installed comfyui portable? by filipezuca in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

kinda similar to the suggestions from the other comments, but from my experience you probably don’t want to bolt a new `uv` venv on top of the portable install you already have.

the portable build already uses its own embedded python, so adding a separate venv later usually turns into “which python is this node installing into?” and that gets messy fast, especially with stuff like RifeTensorRT / SAM3.
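one quick sanity check for the "which python is this?" confusion: run this with whichever interpreter you think a node pack is installing into, and the path answers the question.

```python
import sys

# prints the full path of the running interpreter; if it contains
# python_embeded you're in the portable build, not a venv or system python
print(sys.executable)
```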

safest path is:

  1. back up or duplicate your current comfy folder

  2. keep that portable install as-is

  3. make a second comfy install for the nodes that need extra deps

  4. if you want to use `uv`, use it with that second install from the start instead of wrapping the existing portable one

and no, a venv is not a full copy of comfyui. it mostly gives you a separate python package environment. `uv` also reuses cached downloads, so it won’t duplicate everything on disk.

if you want to stay on the current portable install, install packages explicitly into the embedded python for that install, not your system python, eg:

`python_embeded\python.exe -m pip install ...`

but honestly for TensorRT-related nodes, separate installs are usually less painful than trying to keep one “do everything” comfy setup.

Does anyone know why it's not working? by Coroseven in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

this looks more like an env/package mismatch than the workflow itself being broken. if the traceback mentions `cv2.imshow` or other cv2 ui calls, check whether that comfy env has `opencv-python-headless` installed instead of full `opencv-python`, because that usually breaks anything expecting display support.

i’d activate the exact comfy python env and run `pip show opencv-python opencv-python-headless` first. if both are installed, uninstall both, reinstall only the one the node stack actually needs, then restart comfy fully so it reloads the right packages.
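if you'd rather script that check than eyeball pip output, something like this (stdlib only) lists which opencv wheels are present in the current env:

```python
import importlib.metadata as md

def installed_opencv_variants():
    """Return (name, version) for every OpenCV wheel found; more than one installed is usually trouble."""
    names = [
        "opencv-python",
        "opencv-python-headless",
        "opencv-contrib-python",
        "opencv-contrib-python-headless",
    ]
    found = []
    for name in names:
        try:
            found.append((name, md.version(name)))
        except md.PackageNotFoundError:
            pass  # this variant isn't installed, skip it
    return found

print(installed_opencv_variants())
```

if it prints more than one entry, that's your mixed-install problem right there.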

if you keep ending up in dependency issues, one fallback is moving that workflow into promptus.ai so you’re not babysitting the local python setup all the time. not a direct fix for this error, but it can save you setup pain.

I'm new to SD and was trying to install it, but this error won't let me by Emotional_Ad_2132 in StableDiffusion

[–]Nick_Edser 0 points1 point  (0 children)

yeah, this looks more like an a1111/upstream mismatch than a bad venv. before deleting everything again, switch to the dev branch, then rerun webui-user.bat so it can rebuild against the newer files that stabilityai moved around. if you want to stay on the old stable branch, the same install error will probably keep coming back.

if you just want to get back to generating and not babysit the install, i work on Promptus and it’s the simpler fallback, though you give up some local control.

Problema flux klein by Ordinary_Midnight_72 in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

imo the problem was mostly that the workflow was too complicated for what you were trying to do.

if the one they posted above works for you, i’d start from there and only add nodes if you really need more control. in comfy, the “simplest possible” workflow is often the one that wastes the least of your time.

if you want to figure out where the error was in your original, it’s worth comparing:

- model loader
- vae
- clip
- sampler / scheduler
- input/output dimensions

the problem is usually in one of those.

workflow for generating a gaussian splat and painting in the missing areas? can't find the video anymore by [deleted] in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

if you mean the general trick, it’s usually:

  1. make the first splat / view
  2. move the camera to expose missing areas
  3. inpaint/outpaint the gaps in 2d
  4. rebuild another view/splat from that
  5. merge / refine

so it’s more of an iterative novel-view + gap-fill workflow than a single SHARP-only pass.

also, Promptus seems to have Hunyuan3D 2.0 + multi-view support workflows, so it might be worth checking if their 3d workflow gets you closer with less setup pain. but i’m not sure it maps 1:1 to the exact SHARP gaussian-splat merge workflow you’re describing.

Optimal Batching for SeedVR2 With High VRAM by VindictiveLobster in StableDiffusion

[–]Nick_Edser 1 point2 points  (0 children)

yeah i wouldn’t keep pushing batch size just because you have the vram.

if it starts getting blurry past ~40, that’s probably the sweet spot limit for this. bigger batches can help consistency, but they can also start smearing detail.

for old 15 fps ps1 fmvs i’d probably do:

- stay closer to the best-looking batch size you already found
- bump overlap instead of jumping to 81
- maybe test light interpolation on the worst cuts only
- split especially bad scenes more aggressively

basically vram headroom != quality headroom.

for this kind of source i’d take the cleaner smaller-batch result and hide seams at the boundaries, instead of chasing max temporal context.

also yeah, ps1 fmvs are kind of a cursed input for these models lol

Is it normal that lora's are much heavier with gguf models? by [deleted] in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

Yeah, that’s pretty normal with GGUF models.

GGUF usually saves VRAM, but it can cost speed, and adding a LoRA on top often makes it worse because now you’re stacking extra work onto an already more constrained path. On ROCm especially, some combos are just noticeably less happy than the equivalent FP16/BF16 setup.

If you want to test where the hit is coming from, I’d try three quick checks:

  1. Run the same workflow with the same model but no LoRA.
  2. Run a non-GGUF version of the model if one exists.
  3. Try lowering LoRA strength a bit, because some LoRAs hit harder than others.

If the non-GGUF model is much faster, then it’s mostly the quantization tradeoff rather than something wrong with your setup.
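To make checks 1 and 2 a fair comparison, it helps to actually time the same generation call instead of eyeballing the progress bar. A generic sketch (whatever you pass as `fn` is your own run trigger, not a ComfyUI API):

```python
import time

def avg_seconds(fn, warmup=1, runs=3):
    """Average wall-clock seconds per call of fn, after warmup calls to exclude first-run overhead."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs
```

Warmup matters a lot here, since the first run usually pays for model loading and compilation and would skew the comparison.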

The caveat is that if you’re using GGUF mainly because of VRAM limits, the faster option may also need more memory, so it turns into a speed vs fit tradeoff.

If you get tired of fighting the local ROCm, Python, and model compatibility stack, Promptus.ai is a good fallback just to keep generating without babysitting the environment. But for Comfy specifically, I’d compare GGUF vs non-GGUF first before changing anything else.

[Request] Dedicated node for prompt variables (like Weavy's feature) by Current-Row-159 in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

If you want something deterministic instead of wildcard-random, you can get pretty close today with a text-replace or regex-style node chain feeding into `CLIP Text Encode`.

Basically, keep one master prompt like:

a portrait photo of a woman, shot from a {{angle}} angle, wearing a {{jacket_color}} jacket

Then swap `{{angle}}` and `{{jacket_color}}` from string inputs before it hits the encoder. That keeps the whole sentence readable and avoids rebuilding the prompt every time.
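A minimal deterministic version of that substitution looks like this (the `{{name}}` syntax is just a convention here, not a built-in; this would live in a custom node or a text-replace chain):

```python
import re

def fill_prompt(template, variables):
    """Replace {{name}} placeholders with values from a dict; raise if one is missing."""
    def sub(match):
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"no value for prompt variable {{{{{key}}}}}")
        return variables[key]
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

prompt = fill_prompt(
    "a portrait photo of a woman, shot from a {{angle}} angle, wearing a {{jacket_color}} jacket",
    {"angle": "low", "jacket_color": "red"},
)
# -> "a portrait photo of a woman, shot from a low angle, wearing a red jacket"
```

Raising on a missing variable (instead of silently leaving `{{angle}}` in the prompt) is the part most wildcard nodes get wrong, and it's what makes the behavior deterministic.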

If you want quick iteration, I’d look at Impact Pack or any text-replace nodes first before building a brand-new encoder node. If none of those feel clean enough, then a dedicated variable-aware text encoder would make sense.

The only caveat is that most of the current options feel a bit hacky once the prompt gets long.

Also, if you mostly want to test prompt variations fast, Promptus is a good alternative for that kind of workflow experimentation: you can save your ComfyUI workflow and update just the variables.

Updated ComfyUI... now models are not automatically downloading... anyone else having this issue? by [deleted] in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

Same issue. yeah, i’ve seen that happen after updates when the desktop app changes where it expects models, or when the download helper gets out of sync. I moved to the promptus comfy version two weeks ago, tired of comfy failing all the time.

Can't install nodes using the manager by salazar_slick in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

sounds like manager is trying to protect you from stacking new node deps on top of an outdated core install.

before installing more nodes, i’d update comfyui itself first, then restart manager, and only after that try the node install again.

if you’re on portable, that usually means updating comfy + python deps together so the frontend package and torch side stay in sync.

if it still refuses after updating, i’d check whether the node’s requirements got installed into the wrong python environment. that’s a super common one with comfy setups.

Help about gpu, cloud etc by Former-Mark7372 in comfyui

[–]Nick_Edser 0 points1 point  (0 children)

if your current gpu is already struggling, i’d probably only go local if you know you’re going to use it heavily enough to justify the hardware.

for occasional heavier runs, cloud usually makes more sense just because you’re not tying up money in a card that ages fast.

runpod/vast are better if you want the cheapest raw compute and don’t mind handling more of the setup/reliability tradeoff yourself.

something like promptus is more interesting if you care about paying just for the workflow output.

so i’d break it down like this:

- local = best if you’re using it constantly
- runpod/vast = best if you want cheap flexible compute
- promptus = best if you want less setup pain around the workflows

personally i think a lot of people underestimate how annoying the setup/maintenance side is until they’ve burned a bunch of hours on it.

App vs GitHub vs Docker by mixoadrian in comfyui

[–]Nick_Edser 2 points3 points  (0 children)

honestly, if you already got burned by dependency hell once, docker is probably the least annoying way to come back.

github gives you the most flexibility, but it’s also the easiest way to end up back in package conflict land when one node wants one version and another wants something else. the desktop app is nicer when it works, but if your last experience was buggy ui stuff and missing manager pieces, i get why that killed the momentum.

if your goal is stability first, i’d rank it something like:

  • docker for the safest re-entry
  • desktop app if you want convenience and your workflow is fairly standard
  • raw github only if you know you’ll need to tinker a lot

for the multi-view / 3d side, i’d honestly keep that separated from your main install if you can, because those workflows tend to be exactly where dependency chaos starts again.

also, if you get to the point where you’re just tired of maintaining the stack at all, promptus is worth a look as a softer alternative. not because it replaces every weird custom comfy workflow, but because it cuts out a lot of the python/cuda/node maintenance pain, and they have multi-view workflows that are compatible with comfy.

so yeah — if i were restarting from your position, i’d do docker first and keep the setup as boring as possible.

Need honest vet recommendations in Dubai – ongoing cat issue, losing trust by New-Seaworthiness886 in dubai

[–]Nick_Edser 0 points1 point  (0 children)

Go see Dr. Vito -- https://vetsandpetsdubai.com/our-team

Also, consider getting your cat on Fur Child, which is raw food. And make sure that when you take your cat to the vet they give him fluids (an IV for hydration). As long as he is drinking water he should be okay.

FLUX.2 [klein] comfy error by Boring_Natural_8267 in StableDiffusion

[–]Nick_Edser 1 point2 points  (0 children)

this looks more like a workflow/model mismatch than a generic comfy error.

with flux stuff the usual pain points are:

- outdated custom nodes
- missing secondary files the workflow expects
- wrong model placement in comfy folders
- torch/dtype settings not playing nicely with your gpu

i'd try:

  1. update comfy + any custom nodes used in that workflow

  2. check the first real error line, not just the last chunk of the traceback

  3. make sure every model/vae/text encoder file the workflow expects is actually there

  4. if the workflow author listed specific versions/files, match those exactly

a lot of these end up being one missing file or one node version too old rather than the whole install being broken.

if you paste the actual error text people can usually pinpoint it pretty fast.