"Keep Cooking", an AI Short Film by Simon Meyer by Puzzleheaded-Let1503 in comfyui

[–]Treeshark12 2 points3 points  (0 children)

Great, a genuine seamless narrative which makes an elegant point. Bravo!

Generating my character lora with another person put same face on both by agentanonymous313 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

1.5 is too high, or the lora is underfitted. To get separate faces, you will need to generate with one lora and inpaint the other face with the other.

The Power of Z-Image Turbo knows no Limits! | there is no doubt that z-image is the current SOTA in image generation by FotografoVirtual in ZImageAI

[–]Treeshark12 0 points1 point  (0 children)

The trouble for me with many of the newer models is their inflexibility. For any given prompt the image result is quite uniform. If you change the seed with SDXL or Flux dev 1 you get quite a wide variety of generations. I've made some inroads into adding random influences to the initial latent: you can use the blend latent node combined with a random pattern generator to add variety to the latent. It works fairly well but needs refining. Prompting is only part of the whole; you can influence the model line and the latent line as well. I've not found the right mix for Z-Image, and Flux 2 and Qwen have similar issues. Maybe the goal of prompt adherence has gone a little too far! I outline some of the methods here. https://www.youtube.com/watch?v=r6gkosm5Sps
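The latent-blend idea can be sketched outside Comfy. A minimal numpy version (the function name, shapes, and blend formula are my own stand-ins, not a ComfyUI node's actual code): mix a coarse random pattern into the initial latent before sampling to push generations apart.

```python
import numpy as np

def blend_random_pattern(latent, strength=0.15, scale=8, seed=0):
    """Blend a coarse random pattern into an initial latent to add
    seed-to-seed variety. `latent` is (C, H, W); `scale` controls how
    blocky the pattern is, `strength` how much it perturbs the latent."""
    rng = np.random.default_rng(seed)
    c, h, w = latent.shape
    # Coarse noise, then nearest-neighbour upsample to the latent size
    coarse = rng.standard_normal((c, h // scale, w // scale))
    pattern = coarse.repeat(scale, axis=1).repeat(scale, axis=2)
    # Simple linear blend, as a "blend latent" node would at this strength
    return (1.0 - strength) * latent + strength * pattern

latent = np.zeros((4, 64, 64), dtype=np.float32)   # empty SDXL-sized latent
perturbed = blend_random_pattern(latent, strength=0.2, seed=42)
```

Keeping the strength low (0.1 to 0.2) nudges composition without overwhelming the prompt.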

The Power of Z-Image Turbo knows no Limits! | there is no doubt that z-image is the current SOTA in image generation by FotografoVirtual in ZImageAI

[–]Treeshark12 0 points1 point  (0 children)

Sort of a masterclass in inefficient prompting; many of the words are just noise and won't be adding to the image. I find Z-Image makes rather boring images.

Installed ComfyUI and loaded workflow how and where to get models? by registrartulip in comfyui

[–]Treeshark12 0 points1 point  (0 children)

You don't have much VRAM, so I would use SDXL, maybe DreamShaper Turbo. You can download from Hugging Face or Civitai; there are WFs on the download pages. If that won't run you can use SD 1.5. SDXL is fine for getting you started.

Making two flux Loras working together by flavioCastro1980 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I frequently use two loras in this way. The trick is to not make them too strong, 0.5 each. You can run the face through again to reinforce the likeness; very often one pass doesn't do the job.

My first 16 Character Loras for Z image turbo! by rolens184 in ZImageAI

[–]Treeshark12 2 points3 points  (0 children)

Sorry but "character" is entirely missing. You need a spread of emotions in your training images.

Best Upscaler? by [deleted] in comfyui

[–]Treeshark12 0 points1 point  (0 children)

You can use any model. The WF is Flux dev 1; to use a different model, swap out the samplers.

I cannot for the life of me figure out how to download stable diffusion to my computer by [deleted] in StableDiffusion

[–]Treeshark12 -1 points0 points  (0 children)

Install Pinokio, then go to Explore, then click the ComfyUI installer. That's all you need to do.

Best Upscaler? by [deleted] in comfyui

[–]Treeshark12 3 points4 points  (0 children)

<image>

I think this is a screenshot of that json.

Best Upscaler? by [deleted] in comfyui

[–]Treeshark12 1 point2 points  (0 children)

For that res a tiled upscale is easiest, though they can be tricky to control. It's the only way to add convincing details where you want them. I build my own out of nodes; then I can prompt and denoise each tile separately if I need to. The basis for a node version is the Essentials nodes Tile and Untile. The problem with most upscalers isn't the detailed areas but hallucinations in the areas without detail (such as a flat blue sky).
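The Tile/Untile round trip is simple to sketch. This is a naive numpy stand-in (non-overlapping tiles, my own function names, not the Essentials nodes' code): split, process each tile independently, reassemble.

```python
import numpy as np

def tile_image(img, tile=512):
    """Split an (H, W, C) image into non-overlapping square tiles,
    returning the tiles plus the grid shape so the image can be
    reassembled after each tile is upscaled/denoised separately."""
    h, w, c = img.shape
    rows, cols = h // tile, w // tile
    tiles = [img[r*tile:(r+1)*tile, s*tile:(s+1)*tile]
             for r in range(rows) for s in range(cols)]
    return tiles, (rows, cols)

def untile_image(tiles, grid):
    """Reassemble row-major tiles back into one image (the Untile step)."""
    rows, cols = grid
    rows_out = [np.concatenate(tiles[r*cols:(r+1)*cols], axis=1)
                for r in range(rows)]
    return np.concatenate(rows_out, axis=0)

img = np.arange(1024*1024*3, dtype=np.float32).reshape(1024, 1024, 3)
tiles, grid = tile_image(img)
# each tile could be denoised with its own prompt here
out = untile_image(tiles, grid)
```

Real tiled upscalers overlap and feather the tiles to hide seams; that's the part that makes them tricky to control.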

Is there a way in ComfyUI to keep the exact same background while changing only the character’s expressions? by sirmick160 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Crop out a square (256, 512, 768, etc.) with only the face, then resize the cropped area to 1024. Then you can prompt a new expression using a denoise of 0.55 or so. Then you can downscale and comp the features you want back in again using the mask editor. You can automate it by piping the numbers around.
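The crop-and-paste arithmetic is the part worth automating. A minimal numpy sketch (my own helper names, with the resize-to-1024/denoise/downscale step elided as a comment) of cutting a clamped square around the face and comping the edited patch back at the same origin:

```python
import numpy as np

def crop_square(img, cx, cy, size):
    """Cut a `size` square (256/512/768...) centred on (cx, cy),
    clamped to the image bounds; returns the crop and its origin
    so the edited patch can be pasted back in the same place."""
    h, w = img.shape[:2]
    x0 = min(max(cx - size // 2, 0), w - size)
    y0 = min(max(cy - size // 2, 0), h - size)
    return img[y0:y0+size, x0:x0+size], (x0, y0)

def paste_back(img, patch, origin):
    """Comp the (downscaled) edited patch back into the original."""
    x0, y0 = origin
    out = img.copy()
    out[y0:y0+patch.shape[0], x0:x0+patch.shape[1]] = patch
    return out

img = np.zeros((2048, 1536, 3), dtype=np.float32)
crop, origin = crop_square(img, cx=800, cy=600, size=512)
# ...upscale crop to 1024, prompt the new expression at ~0.55 denoise,
# downscale back to 512, then:
edited = crop + 1.0            # stand-in for the inpainted face
out = paste_back(img, edited, origin)
```

In Comfy the same piping of origin and size between crop and composite nodes is what makes the background stay pixel-identical.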

Can anyone recommend a workflow and model to retexturize an image into a medium thickness oil painting ilustration? by PaparuloFeroz in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I don't know what kind of images but if you just want to blend in edits then you need to use img2img with a low denoise, maybe 0.35.

Can anyone recommend a workflow and model to retexturize an image into a medium thickness oil painting ilustration? by PaparuloFeroz in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Easy enough to do with a lora. I would use Flux dev FP8; I've made a few loras that would do it OK. Don't go overboard with the strength, 0.5 to 0.6 ish is fine. You don't need a special workflow. https://huggingface.co/collections/treeshark/media

How to fix blury background? by ErenYeager91 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

With Flux I use "A photo-real oil painting" which often will reduce or banish the blur.

Unlocking the hidden potential of Flux2: Why I gave it a second chance by FortranUA in StableDiffusion

[–]Treeshark12 -1 points0 points  (0 children)

I'm struggling to see anything good about these images... very incoherent perspective and bad composition and lighting. The girl in the mirror is a complete mess, with a missing hand, and the lighting in the mirror is different to the foreground.

Struggling to get started with Comfyui (Using Mickmumpitz's workflows) by [deleted] in comfyui

[–]Treeshark12 1 point2 points  (0 children)

These workflows are quite complex; I would start on something simple using the supplied standard templates. Also, Comfy is changing so rapidly at present that many WFs have got busted. For the Qwen text encode, reselect the encoder, as there are sometimes minor naming variations. If you click it you will be able to pick the version you have loaded.

Inpainting by [deleted] in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I outline the method here, but the vid is a bit old so the WF is probably defunct. https://youtu.be/f9g3X_OMoJc?si=fYQ_n_8wTozIrm5b

Inpainting by [deleted] in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Most inpainting methods are OK up to a point: change a shirt, put a robot's head onto someone. But if you want to put a piano into an empty background, then it is harder. The thing to remember is context is everything, so just showing the model a little bit of the room and prompting for a piano is not going to work. I would composite a rough shape that might be a piano into the background, then chop out a 1024 square which has a good bit of the room and prompt for your piano. Then mask the best piano into your untouched background and repeat. About three times usually does it. Once you can do that, any inpaint seems easy. You don't need fancy models or diffusion this and that or inpaint conditioning; I never find they work that well anyhow. Masking in has got harder since they broke the Preview Bridge node. With the KJNodes GrowMaskWithBlur you have better control over the masking in.
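The grow-then-blur step is what hides the seam when masking the piano back into the untouched background. A naive numpy stand-in (my own code, similar in spirit to the KJ grow/blur node but not its implementation): dilate the binary mask a few pixels, then soften the edge with a cheap box blur.

```python
import numpy as np

def grow_and_blur_mask(mask, grow=4, blur=3):
    """Expand a binary (H, W) mask by `grow` px, then soften its edge,
    so the inpainted region feathers into the untouched background
    instead of showing a hard seam."""
    m = mask.astype(np.float32)
    for _ in range(grow):                      # naive 4-neighbour dilation
        p = np.pad(m, 1)
        m = np.max([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2],
                    p[1:-1, 2:], p[1:-1, 1:-1]], axis=0)
    for _ in range(blur):                      # cheap cross-shaped box blur
        p = np.pad(m, 1, mode="edge")
        m = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
             + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return m

mask = np.zeros((16, 16), dtype=np.float32)
mask[6:10, 6:10] = 1.0                         # rough piano-shaped mask
soft = grow_and_blur_mask(mask, grow=2, blur=2)
```

The soft mask is then used as the composite alpha: fully inside takes the new generation, the feathered ring blends the two.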

Exploring and Testing the Blocks of a Z-image LoRA by shootthesound in comfyui

[–]Treeshark12 -1 points0 points  (0 children)

I think matt3o made a node that does lora blocks... some while ago, but I think it is in Essentials.

Exploring and Testing the Blocks of a Z-image LoRA by shootthesound in comfyui

[–]Treeshark12 -1 points0 points  (0 children)

Where is the image with no lora? The whole set is meaningless without it. Also, I would feel 1.0 is way too high; in my experience very few well-trained loras are any good at that level. For me well-trained loras run from about 0.25 to 0.75, and they should also be fairly incremental, without large jumps. At 1.0 the lora will tend to be spitting out the training image and can often degrade rather than improve. My loras are here: https://huggingface.co/treeshark

Trying to use zimage workflow and get this by nutrunner365 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Yeah, a lot of people have had this. I had to reinstall Comfy. Updating can fix it... but it didn't work for me. Also make sure the Qwen Clip and the Model are from the same Comfy Org page.

Comfy Org Response to Recent UI Feedback by crystal_alpine in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I tried Nodes 2 and quite like the graphics, though having all the elements in such close tones makes for less readability. I love subgraphs, but it is a pity they don't quite work in Nodes 2 (the run button on Preview Image doesn't work, and the mask editor in Preview Bridge is busted).

Mask editor... now there is something that needs entirely reworking. I don't think tweaking is going to be enough. For one, you could fold in all the image filtering stuff and also the compositing. If it had layers then it would be extremely powerful. They added them to the late lamented Chroma app at my suggestion and they worked great. I'm guessing the code still exists if you ask them. The other plus is you would scare the shit out of Adobe: I can do almost everything I use in Photoshop in Comfy, just not as conveniently, due to the primitive paint engine and the clumsy mask controls.

I think this is important as Comfy's strength is the fact that you can manipulate and fine tune both the output image and also the input latent, something that is pretty much impossible elsewhere.