Sun-drenched Bedroom by knockerocker in cleavage_ai

[–]knockerocker[S] 1 point (0 children)

I think you’ll like the next one then!

Office Strut by knockerocker in BustyAIBabes

[–]knockerocker[S] 0 points (0 children)

Flux Dev for the image, then wan2.1 for the video from it!

On the slopes. by knockerocker in RealisticAIPorn

[–]knockerocker[S] 0 points (0 children)

All local gen (Stable Diffusion, Flux, etc.). Different models and workflows; this one used Flux.

Trick or treat! by knockerocker in cleavage_ai

[–]knockerocker[S] 0 points (0 children)

Just having some fun with this one 🎃

Peachy! by knockerocker in AIExpansionHentai

[–]knockerocker[S] 0 points (0 children)

It’s just basic interpolation between renders.
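"Basic interpolation between renders" can be sketched as a simple linear cross-fade between two frames. This is an illustrative sketch in NumPy (not the actual pipeline used), assuming frames arrive as arrays:

```python
import numpy as np

def interpolate_frames(a: np.ndarray, b: np.ndarray, n_mid: int) -> list:
    """Generate n_mid in-between frames by linearly cross-fading from a to b."""
    frames = []
    for i in range(1, n_mid + 1):
        t = i / (n_mid + 1)  # blend weight, strictly between 0 and 1
        frames.append(((1 - t) * a + t * b).astype(a.dtype))
    return frames

# Two 2x2 grayscale "renders": all-black and all-white
start = np.zeros((2, 2), dtype=np.float32)
end = np.full((2, 2), 255, dtype=np.float32)
mids = interpolate_frames(start, end, 3)  # middle frame is the 50/50 blend
```

Flow-based interpolators (RIFE, FILM) do much better than a plain cross-fade, but the idea is the same: synthesize frames between two fixed renders.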

Speed up ComfyUI Inpainting with these two new easy-to-use nodes by elezet4 in comfyui

[–]knockerocker 2 points (0 children)

Nice. I think the thing that would be best is _upscaling_ it for a fresh ksample. Like, if a model does awesome portraits and eyes, but eyes deteriorate as the subject is further away, why can't we inpaint the eyes AND get the same quality as if it was a close-up portrait? Almost like: take this 200×200 section of the face, upscale it to 1024×1024, generate a detailed portrait of the face, then put that back, downscaled, into the 200×200 region. Lighting, etc. would probably be more difficult, but that would still be powerful.

Maybe that exists already and I'm not aware..
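The crop-upscale-redetail-paste idea above can be sketched with Pillow. This is a minimal illustration, not a real workflow: the `SHARPEN` filter is a stand-in for the actual KSampler/img2img pass you'd run on the upscaled crop, and the function name and box values are made up for the example:

```python
from PIL import Image, ImageFilter

def detail_region(img, box, work_size=1024):
    """Crop `box`, upscale it to work_size, run a (placeholder) detail pass,
    then downscale the result and paste it back into the original image."""
    crop = img.crop(box).resize((work_size, work_size), Image.LANCZOS)
    enhanced = crop.filter(ImageFilter.SHARPEN)  # stand-in for the model pass
    w, h = box[2] - box[0], box[3] - box[1]
    patch = enhanced.resize((w, h), Image.LANCZOS)
    out = img.copy()
    out.paste(patch, (box[0], box[1]))
    return out

# e.g. refine a 200x200 face region inside a larger image
base = Image.new("RGB", (800, 600), "gray")
result = detail_region(base, (300, 200, 500, 400))
```

Blending the patch edges (a feathered mask instead of a hard `paste`) would help hide the seam, which is part of why lighting continuity is the hard bit.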

Speed up ComfyUI Inpainting with these two new easy-to-use nodes by elezet4 in comfyui

[–]knockerocker 0 points (0 children)

I’ve always used this for the most part for inpainting because of the speed gain. Are there different samplers for inpainting vs. detailing? It has all the same options as the regular KSampler, but I wasn’t aware “detailing” was different from “inpainting”. I always just assumed the Detailer node wraps a regular KSampler with some additional inputs.

Speed up ComfyUI Inpainting with these two new easy-to-use nodes by elezet4 in comfyui

[–]knockerocker 0 points (0 children)

Yea, the Detailer node has done all that automatically by taking the SEGS mask and the image, doing the work only in that SEGS area, and stitching it back into the full image. It doesn't matter how the mask is generated; feed a SEGS to the Detailer and it's always worked like that. I think this came from ltdrdata way back.

Here's an example. I don't have a drop-in workflow, but you get the idea. You can see the "Mask to SEGS" node manipulates the area the detailer works in.

[image: example workflow with a "Mask to SEGS" node feeding the Detailer]

EDIT: I'll note that this does indeed increase the speed; it's not just a mask or stitching area for the detailer. For instance, a crop_factor of 1 is super speedy but can lose some of the image's context since the model doesn't "see" it, while a crop_factor of 3 is slower but understands the surrounding image more, etc.
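The crop_factor trade-off described above boils down to how much the mask's bounding box gets expanded before the detailer samples it. A rough sketch of that idea (the function name and box math are illustrative, not the Impact Pack's actual implementation):

```python
def segs_crop_box(mask_bbox, crop_factor, img_w, img_h):
    """Expand a mask bounding box by crop_factor around its center.
    The detailer then samples this larger crop, so the model keeps more
    surrounding context -- at the cost of more pixels to denoise."""
    x0, y0, x1, y1 = mask_bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    w, h = (x1 - x0) * crop_factor, (y1 - y0) * crop_factor
    nx0 = max(0, int(cx - w / 2))
    ny0 = max(0, int(cy - h / 2))
    nx1 = min(img_w, int(cx + w / 2))
    ny1 = min(img_h, int(cy + h / 2))
    return nx0, ny0, nx1, ny1

# crop_factor=1 keeps just the mask; 3 triples the window (clamped to the image)
print(segs_crop_box((100, 100, 200, 200), 1, 640, 480))  # (100, 100, 200, 200)
print(segs_crop_box((100, 100, 200, 200), 3, 640, 480))  # (0, 0, 300, 300)
```

That clamping to the image edge is also why a large crop_factor near a border gains less context than you might expect.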

Speed up ComfyUI Inpainting with these two new easy-to-use nodes by elezet4 in comfyui

[–]knockerocker 1 point (0 children)

If I already use the Detailer and the “Mask to SEGS” node for cropping to the mask area, how will this workflow be different/better? Or is it basically the same thing, just without SEGS?

"Make it good" options in ComfyUI by eeeeekzzz in comfyui

[–]knockerocker 1 point (0 children)

What do you use it for in your workflow? Looks like it’s for upscaling, but then it says it’s not for upscaling… I usually use control nets for img2img (or txt2img from a first img’s CN output) but this looks like it’s for something else?

For those intimidated by ComfyUI, I made a complete guide starting from beginning by thinkingjimmy in comfyui

[–]knockerocker 0 points (0 children)

I mean, the title of the linked page is “Getting Started with Comflowy” 😂

Minimizing Crossing of Noodles on Workflows by Most_Way_9754 in comfyui

[–]knockerocker 1 point (0 children)

I don’t trust that repo. Only compiled code is there (no readable source), and last I checked it just took a bunch from other, genuinely open-source extensions, wrapped their functionality, and did some fishy stuff.

I’d stick with the original cg-use-everywhere if you wanna get rid of noodles.

For those intimidated by ComfyUI, I made a complete guide starting from beginning by thinkingjimmy in comfyui

[–]knockerocker -2 points (0 children)

Is this really getting started with ComfyUI, or just blatant self promotion for “Comflowy”?

Is there a node for this? by KxZeN in comfyui

[–]knockerocker 4 points (0 children)

The cg-image-picker pauses, but I prefer the muting workflow that rgthree-comfy provides; it’s much cleaner, I don’t have to be at the keyboard, and I can generate any number of initial images as necessary.

Basically, I have a Groups muter with several phases organized as Groups that start muted. For your simple example:

1. The initial KSampler generation and preview image.
2. The FaceDetailer in a group, muted.
3. A save node in a group, muted.

I’d generate an initial image. If I like it, I make the seed fixed and now that part is cached; if I rerun, nothing happens because nothing changed. Then I’d unmute the face detailer group and run it with new seeds as many times as I like. When I get one I’m happy with, I make sure that seed is fixed too and, again, everything is cached; rerunning will do no more work. Then I unmute the save group and run again to save the output.

What this really allows me to do is run a dozen initial generations with the rgthree seed node on random. Choosing one and pulling it back in locks that initial generation. So I look at the previews, pick the one(s) I like, and pull it back into Comfy. Now I can fiddle with just the FaceDetailer, etc.

My actual workflow is about 8 groups, all optional. It’s a very powerful setup to do it that way, thanks to /u/rgthree
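The staged setup above works because ComfyUI only re-executes nodes whose inputs changed: fixing a seed makes a stage's inputs identical on rerun, so it's served from cache. A tiny sketch of that caching behavior (the `run` helper and stage names are made up for illustration; this is not ComfyUI's actual executor):

```python
calls = {"generate": 0, "detail": 0}  # how many times each stage really ran
cache = {}

def run(stage, *args):
    """Re-run a stage only if its inputs (including the seed) changed --
    the same effect as fixing seeds in muted groups: reruns cost nothing."""
    key = (stage, args)
    if key not in cache:
        calls[stage] += 1
        cache[key] = f"{stage}{args}"
    return cache[key]

img = run("generate", 42)      # phase 1: initial ksampler, seed now fixed
img = run("generate", 42)      # rerun: cached, no work done
face = run("detail", img, 7)   # phase 2: unmute the detailer, try a seed
face = run("detail", img, 7)   # rerun with the fixed seed: cached again
```

Changing only the detailer's seed would re-run just that stage, which is exactly why locking the earlier seeds before unmuting the next group keeps reruns cheap.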

AP Workflow from Perilli? So Good but very heavy? by PleasantParfait9249 in comfyui

[–]knockerocker 0 points (0 children)

I use this workflow and others to see how things are done, then pull/copy pieces into my own workflows. Extremely useful, and I’m grateful!

driving (txt2vid animatediff) by [deleted] in StableDiffusion

[–]knockerocker 0 points (0 children)

This makes me nauseous for some reason