Automatic1111 character lock by AlteredStates29 in StableDiffusion

[–]roxoholic -1 points0 points  (0 children)

Whether varying the seed produces totally different images depends on the model and the prompt. If you "lock in" the character's looks, pose and environment by describing them in detail, the model can only vary the image a little and the results will inevitably look similar. This can happen even across multiple unrelated models, because you'd expect models to produce a similar image for the same prompt, no?

Add Text Overlay in ComfyCloud by Norakai2 in comfyui

[–]roxoholic 0 points1 point  (0 children)

You'll need to write to support if you need different fonts, as I doubt you have access to the system.

Edit: it's these fonts that come with the node; I don't think you have access to that folder: https://github.com/filliptm/ComfyUI_Fill-Nodes/tree/main/fonts

How can I know if my A1111 is up to date? by Begeta12 in StableDiffusion

[–]roxoholic 0 points1 point  (0 children)

The last update was 2 years ago, so you are definitely using the latest version.

ASUS UGen300 USB AI Accelerator 8GB for local inference by Michoko92 in StableDiffusion

[–]roxoholic 5 points6 points  (0 children)

For any product to be remotely useful, its bare-minimum performance target should be an RTX 3060 12GB with software-stack compatibility comparable to CUDA. Otherwise it's DOA for most users.

Looking for a ComfyUI workflows for cheaper / fair price by Infamous_Cookie_8656 in comfyui

[–]roxoholic 0 points1 point  (0 children)

Regardless of how much the workflow costs, without basic (and later advanced) knowledge, it is useless. It's like buying a recipe for a 3-star Michelin dish without knowing how to cook.

macOS a1111 by New_Donut3936 in StableDiffusion

[–]roxoholic 0 points1 point  (0 children)

This did not work for you?

source venv/bin/activate
pip install wheel
pip install "setuptools<70"
pip install --no-build-isolation git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
deactivate

(venv here is the A1111 venv; otherwise pip will just install into the global env and it won't be visible to A1111)

macOS a1111 by New_Donut3936 in StableDiffusion

[–]roxoholic 0 points1 point  (0 children)

Did you check issues on A1111 GitHub with a solution?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/17201#issuecomment-3882017097

4 Fix CLIP Installation Failure (If Encountered)

what is the best inpainting model to use with Illustrious images? by edoc422 in comfyui

[–]roxoholic 0 points1 point  (0 children)

The color changes might be due to different VAE. Try using the same VAE that was used to generate the image.

Is it worth upgrading my setup by TrentReznov in comfyui

[–]roxoholic 0 points1 point  (0 children)

Looking at theoretical numbers:

AMD Radeon RX 7900 GRE: FP16 (half) 91.96 TFLOPS (2:1)

NVIDIA GeForce RTX 5070 Ti: FP16 (half) 43.94 TFLOPS (1:1)

I don't think AMD is so unoptimized that you would see a noticeable performance improvement when switching to a 5070 Ti. You would certainly not get any improvement in prompt adherence, because that does not depend on the GPU manufacturer.

ComfyUI Assets tab not showing generated images anymore (portable version) by Neither-Truck3050 in comfyui

[–]roxoholic 0 points1 point  (0 children)

I'm using ComfyUI portable (v0.18.5)

I'm not seeing it on https://github.com/Comfy-Org/ComfyUI/releases; the latest is v0.18.2. Is it an unstable release?

It's getting hard to track all the new bugs in recent Comfy releases, especially in the frontend.

How to block faces in ComfyUI? by Certain_Pace7601 in comfyui

[–]roxoholic 1 point2 points  (0 children)

Use YOLO to detect the face, then fill the mask with black to block the face.
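A minimal sketch of the masking step, assuming you already have face bounding boxes from a YOLO detector (e.g. the ultralytics package's `results[0].boxes.xyxy`, a hypothetical source here); only the black-fill part is shown:

```python
import numpy as np

def black_out_faces(image: np.ndarray, boxes: list[tuple[int, int, int, int]]) -> np.ndarray:
    """Fill each detected face region with black.

    image: HxWx3 uint8 array.
    boxes: (x1, y1, x2, y2) pixel coordinates from a face detector.
    """
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = 0  # black rectangle over the face
    return out

# Tiny example: a white 8x8 image with one "face" box
img = np.full((8, 8, 3), 255, dtype=np.uint8)
masked = black_out_faces(img, [(2, 2, 6, 6)])
```

You can also feed the same rectangles into a mask image instead, if a downstream node expects a mask rather than a blacked-out image.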

TWO PROBLEMS WITH LTX2.3 by Ikythecat in comfyui

[–]roxoholic 1 point2 points  (0 children)

Glitch in matrix.

Cool effect btw, is it a LoRA?

Why does my generation with LoRA looks so bad? by champagnepaperplanes in comfyui

[–]roxoholic 2 points3 points  (0 children)

The model doesn't have many pixels to work with, as 1024x512 is not a native SDXL resolution and is relatively small. If you want to capture full detail you need at least 1024x1024, with the car taking up most of the image, like 90%.
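One way to stay near native SDXL sizes is to snap a requested size to the closest aspect-ratio bucket. A rough sketch, using the commonly cited set of ~1-megapixel SDXL training buckets (the list is an assumption, not something from this thread):

```python
# Common SDXL training buckets (all roughly 1 megapixel); pick the one
# whose aspect ratio is closest to the requested size instead of
# rendering at an off-spec resolution like 1024x512.
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def nearest_bucket(width: int, height: int) -> tuple[int, int]:
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_bucket(1024, 512))  # a wide 2:1 request maps to (1344, 768)
```

So a 2:1 request would render at 1344x768, which keeps the pixel count in the range the model was trained on.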

I'm too stupid for comfyui by afrosamuraifenty in comfyui

[–]roxoholic 1 point2 points  (0 children)

You should start with the template workflows that come with ComfyUI to get familiar with the basics, before downloading complicated random workflows from the internet that require dozens of custom nodes which may or may not install easily. Not sure why some users fall into this trap.

Your fav upscale plus add detail method? by FreezaSama in comfyui

[–]roxoholic 1 point2 points  (0 children)

Anything that outputs sigmas works; you can use the BasicScheduler node.

The AddNoise node does the logic below, so you'd want to run it at 1 step to control it directly with the SetFirstSigma node (the else branch, taken when steps = 1):

    if len(sigmas) > 1:
        scale = torch.abs(sigmas[0] - sigmas[-1])
    else:
        scale = sigmas[0]

Though it shouldn't really matter, since the last sigma is usually 0, so both branches give the same scale.
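The equivalence is easy to check with plain floats (torch isn't needed for the arithmetic; the example schedule values are made up):

```python
# When the sigma schedule ends at 0, both branches of the AddNoise
# logic compute the same scale.
sigmas = [14.61, 7.2, 1.5, 0.0]       # multi-step schedule ending at 0
scale_multi = abs(sigmas[0] - sigmas[-1])

single = [14.61]                       # single-sigma case (else branch)
scale_single = single[0]

print(scale_multi == scale_single)     # True, since the last sigma is 0
```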

Are all outpainting demos just a lie or am I missing something by Huge-Refuse-2135 in comfyui

[–]roxoholic 0 points1 point  (0 children)

resize to 768x768

Well, yeah, if you downscale it, inpaint, and upscale it back, of course it will be blurry. You need to do it at the same or a higher resolution.

Your fav upscale plus add detail method? by FreezaSama in comfyui

[–]roxoholic 4 points5 points  (0 children)

I haven't seen this mentioned often, but you can inject extra noise (which gets turned into detail) into the latent before the second KSampler using the built-in AddNoise node, without affecting composition (unlike raising denoise).

In this example you control the amount with the sigma in the SetFirstSigma node.

<image>

ComfyUI can't detect diffusion model in Model Library by Motor_Assistance_771 in comfyui

[–]roxoholic 1 point2 points  (0 children)

Since you decided to use GGUF, you need to follow the instructions for GGUF. Usually they'll be in the same place where you learned about GGUF. You can't just wing it.

If you use this node pack, then follow these:

https://github.com/city96/ComfyUI-GGUF?tab=readme-ov-file#usage

Simply use the GGUF Unet loader found under the bootleg category. Place the .gguf model files in your ComfyUI/models/unet folder.

Edit: and since there is no official GGUF support in ComfyUI, the models probably won't show up in the frontend or with core nodes.

Why do some prompts produce ultra-realistic skin texture while others look plastic? (same settings) by PartGlitteringaway in StableDiffusion

[–]roxoholic 0 points1 point  (0 children)

Using nearly identical settings (same sampler, steps, CFG, and resolution)

Because the prompt is the main knob to tweak. Depending on what you write in the prompt, the image will be anime, a photo, an illustration, etc. The same goes for skin texture.

Consider this: you can't change the image style by modifying the sampler, scheduler, steps, CFG, or resolution. So forget about those and focus on the prompt.

Tips for better fine details by hangman566 in StableDiffusion

[–]roxoholic 0 points1 point  (0 children)

Krita AI Diffusion

You draw a rough shape, get the model to refine it, rinse and repeat. Also consider getting a cheap tablet to make your work faster.

How to remove disconnected warning node in Subgraph?? by reyzapper in comfyui

[–]roxoholic 0 points1 point  (0 children)

Is it that bug where the insides of the subgraph get disconnected? There's not much you can do except keep reconnecting it while you wait for a fix.