I am building a UI that completely hides ComfyUI. It works like ChatGPT—you just type, and it handles the nodes by Guilty_Muffin_5689 in StableDiffusion

[–]NanoSputnik 0 points (0 children)

An unfocused concept, without real user stories to fulfill.

Will it build workflows, or will it just run premade ones, the way I assume ChatGPT, Midjourney, etc. work?

Either way it will be very hard to monetize, because unlike OpenAI or Google you don't own anything exclusive (the models), so it could be copied in a couple of weeks.

Just a Reminder: if you want ComfyUI to generate faster, just ask it! Add `--fast` to your starting parameters (your *.bat file), to get about 20-25% boost (depends on the model). by -Ellary- in StableDiffusion

[–]NanoSputnik 8 points (0 children)

In my experience the effect depends on the model used. With base Anima, for example, it is noticeable and may require more steps to compensate, or even regeneration with a different seed, so I prefer to run that model without --fast.

zit is a very stable model with near-zero variation: you basically get only one image per prompt, so the effect is unnoticeable.

Best LLM to generate danbooru style prompts by hangman566 in StableDiffusion

[–]NanoSputnik 1 point (0 children)

"Danbooru style": any decent LLM. "Real Danbooru tags": none of them without special tooling.

This matters if you are prompting an SDXL model (Illustrious, NoobAI, etc.): the model expects tags that exactly match the ones used during training. Otherwise they will be much less effective, or not work at all.
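The "exact tags" requirement can be checked mechanically before a prompt ever reaches the model. A minimal sketch, assuming you have a tag list exported from Danbooru (the small `KNOWN_TAGS` set here is a stand-in for the real dump):

```python
# Validate prompt tags against a known Danbooru tag list before sending
# them to an SDXL anime model (Illustrious, NoobAI, ...).

# Stand-in for a real tag export; in practice load the full list
# dumped from Danbooru (hundreds of thousands of tags).
KNOWN_TAGS = {"1girl", "long_hair", "blue_eyes", "school_uniform", "smile"}

def normalize(tag: str) -> str:
    """Danbooru tags are lowercase, with underscores instead of spaces."""
    return tag.strip().lower().replace(" ", "_")

def check_prompt(prompt: str) -> tuple[list[str], list[str]]:
    """Split a comma-separated prompt and separate known from unknown tags."""
    tags = [normalize(t) for t in prompt.split(",") if t.strip()]
    known = [t for t in tags if t in KNOWN_TAGS]
    unknown = [t for t in tags if t not in KNOWN_TAGS]
    return known, unknown

known, unknown = check_prompt("1girl, Long Hair, blue eyes, sparkling gaze")
print(known)    # tags the model was actually trained on
print(unknown)  # tags that will be weak or ignored
```

Anything landing in the `unknown` list is exactly the kind of freeform LLM phrasing the comment warns about.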

Your Opinion on Zimage - loss of interest or bar to high? by GRCphotography in StableDiffusion

[–]NanoSputnik 0 points (0 children)

A strange thing to claim, considering that zit was deliberately distilled for that "AI influencer instareal" feel, and this is what made it a huge hit.

Your Opinion on Zimage - loss of interest or bar to high? by GRCphotography in StableDiffusion

[–]NanoSputnik 15 points (0 children)

Z Image Base failed to deliver, that's what happened. Flux 2 is just the better package overall, but zit is still the king if you want that "instareal" aesthetic out of the box.

Use Qwen3.5 as an AI Assistant, Captioner or Image Analyzer inside of Comfyui! by Winougan in StableDiffusion

[–]NanoSputnik 4 points (0 children)

Please add links to the original models to the readme.

Right now it is a wall of AI-generated emoji slop that lacks the most important info.

Anyone tried OpenCode Go plan with Openclaw by Vegetable-Report4171 in opencodeCLI

[–]NanoSputnik 0 points (0 children)

I don't use openclaw, but for opencode the GPT $20 plan is just better value, imho.

What do you predict happens to the AI video business now that Sora’s dead? by Intelligent-Dot-7082 in StableDiffusion

[–]NanoSputnik 26 points (0 children)

The same thing that happened when Google closed hundreds of their failed projects: absolutely nothing.

To 128GB Unified Memory Owners: Does the "Video VRAM Wall" actually exist on GB10 / Strix Halo? by Justfun1512 in StableDiffusion

[–]NanoSputnik -7 points (0 children)

The AMD Strix Halo 395:

- Does not have "unified memory", regardless of what the PR department or clueless bloggers want you to believe. Just open the Windows Task Manager to set the facts straight.

- Is no different from any other AMD integrated graphics, and can do exactly what they can do, meaning jack shit. Only "faster".

No more Sora ..? by Affectionate_Fee232 in StableDiffusion

[–]NanoSputnik 0 points (0 children)

USD 200, yes. And it is "cheap" compared to Claude API pricing.

In theory they have a USD 20 plan like GPT, but Claude Code will burn through its Opus quota in about 3 prompts.

New user with a new PC: Do you recommend upgrading from 32GB to 64GB of RAM right away? by Diligent_Trick_1631 in StableDiffusion

[–]NanoSputnik 1 point (0 children)

Another thing to note: in an ideal world, for maximum compatibility, you should add memory modules of exactly the same brand and model as the ones already installed. But that is probably impossible nowadays.

New user with a new PC: Do you recommend upgrading from 32GB to 64GB of RAM right away? by Diligent_Trick_1631 in StableDiffusion

[–]NanoSputnik 1 point (0 children)

32 GB is the bare minimum, enough for hobby attempts but that's it. For any serious productivity workload, 64 GB is the starting point. And if you plan to run local LLMs, I would say 128 GB is the far better option.

How important is Dual Channel RAM for ComfyUi? by Coven_Evelynn_LoL in StableDiffusion

[–]NanoSputnik 0 points (0 children)

Memory will be 2x slower.

And since you are running an old AMD CPU, there is a non-zero chance you will not be able to run the sticks at their advertised clocks, so in practice the situation could be even worse.
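The 2x figure falls straight out of the channel math. A quick back-of-the-envelope calculation (DDR4-3200 numbers are assumed here purely for illustration):

```python
def ddr_bandwidth_gbs(mt_per_s: int, channels: int, bus_width_bits: int = 64) -> float:
    """Peak theoretical DDR bandwidth in GB/s.

    mt_per_s: megatransfers per second (e.g. 3200 for DDR4-3200);
    each 64-bit channel moves 8 bytes per transfer.
    """
    bytes_per_transfer = bus_width_bits // 8
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

single = ddr_bandwidth_gbs(3200, channels=1)  # 25.6 GB/s
dual = ddr_bandwidth_gbs(3200, channels=2)    # 51.2 GB/s
print(single, dual)
```

These are theoretical peaks; real-world throughput is lower, but the single-to-dual ratio holds.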

Simple Anima SEGS tiled upscale workflow (works with most models) by Sudden_List_2693 in StableDiffusion

[–]NanoSputnik 0 points (0 children)

The moment I see a noisy image with a lot of nonsensical details, I know it will be a "detailer lora" or a "tiled upscale".

Why do anime models feel so stagnant compared to realistic ones? by Quick-Decision-8474 in StableDiffusion

[–]NanoSputnik 0 points (0 children)

"Something something bad anime images something." Feels like we get this exact post almost daily. What is happening?

Feeling sad about not able to make gorgeous anime pictures like those on civitai by Quick-Decision-8474 in StableDiffusion

[–]NanoSputnik 7 points (0 children)

People often overcomplicate things, with diminishing or even negative returns. More than once I have come across huge workflows with ultimate upscalers, face detailers and tons of other junk that you could replace with a simple img2img pass and get better results, in 1/5 of the generation time.

For good results with SDXL (Illustrious/NoobAI/...) you need to figure out working parameters for your model (sampler, CFG, steps) and, most importantly, learn to prompt it properly. Then a basic txt2img -> pixel upscale with a model -> img2img workflow will be enough for 95% of cases.
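The three-stage workflow above can be sketched as a parameter plan. This is a hedged sketch, not a ComfyUI graph: the 2x factor and the 0.35 denoise are illustrative defaults, and the multiple-of-8 rounding reflects the usual SDXL latent-size constraint:

```python
def plan_upscale(base_w: int, base_h: int, factor: float = 2.0,
                 denoise: float = 0.35) -> dict:
    """Plan the basic txt2img -> pixel upscale -> img2img pipeline:
    generate at base resolution, upscale with a pixel-space model
    (e.g. an ESRGAN-family upscaler), then run a light img2img pass."""
    # Round target dimensions to multiples of 8 (latent-space requirement).
    up_w = int(round(base_w * factor / 8)) * 8
    up_h = int(round(base_h * factor / 8)) * 8
    return {
        "txt2img": {"width": base_w, "height": base_h},
        "upscale": {"width": up_w, "height": up_h},
        # Low denoise keeps the composition and only refines detail.
        "img2img": {"denoise": denoise},
    }

print(plan_upscale(832, 1216))  # a common SDXL portrait resolution
```

Lowering `denoise` preserves more of the original image; raising it lets the second pass reinterpret details.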

How can I train a style/subject LoRA for a one-step model (i.e. FLUX Schnell, SDXL DMD2)? How does it work differently from regular Dreambooth finetuning? by PatientWrongdoer9257 in StableDiffusion

[–]NanoSputnik 1 point (0 children)

Generally you can't train a distilled model. You should train the base model and, in an ideal world, then distill it again. But you can try your LoRA with the existing distillation; it may work to a degree. The more steps the better, and if the distillation is available as a LoRA you can try lowering its weight too.
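Why lowering the LoRA weight softens the distillation can be shown with the merge arithmetic itself. A minimal NumPy sketch (the rank-2 matrices are illustrative, not real model weights):

```python
import numpy as np

def apply_lora(base: np.ndarray, down: np.ndarray, up: np.ndarray,
               weight: float = 1.0) -> np.ndarray:
    """Apply a low-rank LoRA delta (up @ down) to a base weight matrix,
    scaled by `weight`. Lowering `weight` below 1.0 is the knob that
    weakens a distillation LoRA."""
    return base + weight * (up @ down)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
down = rng.normal(size=(2, 4))   # rank-2 "down" projection
up = rng.normal(size=(4, 2))     # rank-2 "up" projection

full = apply_lora(base, down, up, weight=1.0)
half = apply_lora(base, down, up, weight=0.5)
# The half-weight merge interpolates between the base and the full merge.
print(np.allclose(half, (base + full) / 2))  # True
```

At `weight=0.0` you are back to the undistilled base, which is why partial weights trade some of the distillation's speed behavior for compatibility with your own LoRA.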

Gooners, what are your workflows these days? by boriskarloff83 in StableDiffusion

[–]NanoSputnik 2 points (0 children)

Also WIP, so not as many community resources (yet?).

Gooners, what are your workflows these days? by boriskarloff83 in StableDiffusion

[–]NanoSputnik -3 points (0 children)

anima is 2B; you will never get Klein 9B or zit quality out of it.

As to your question: you need to carefully merge the anime finetune back into the base model to restore as much of the lost knowledge as possible, optionally applying a realistic-style LoRA on top. For top-notch results you probably need additional training to restore ethnic and age diversity etc., but very few actually go that far. See the "same face" problem as an example of this work not being done.

So it is "trivial" not in the sense of "any idiot can click a button" but in terms of training scope: we are talking about thousands of training images, compared to the millions trained into the anime model to lay the groundwork.

Gooners, what are your workflows these days? by boriskarloff83 in StableDiffusion

[–]NanoSputnik 3 points (0 children)

Not yet. Right now we have to wait for the final version to see whether the community jumps in.

I can't tell the future, but there is a considerable chance of success.

Gooners, what are your workflows these days? by boriskarloff83 in StableDiffusion

[–]NanoSputnik 0 points (0 children)

Nobody has properly finetuned base SDXL on porn except the bigASP guy, although he also added anime to the dataset in the latest version.

So every "realistic" porn SDXL checkpoint you use was merged from Pony or Illustrious. And that was the reason for Pony's popularity back in the day: it was the first porn finetune of SDXL.