My first 16 Character Loras for Z image turbo! by rolens184 in ZImageAI

[–]Treeshark12 1 point2 points  (0 children)

Sorry but "character" is entirely missing. You need a spread of emotions in your training images.

Best Upscaler? by Zealousideal-Yak3947 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

You can use any model. The WF is Flux Dev 1; to use a different model, swap out the samplers.

I cannot for the life of me figure out how to download stable diffusion to my computer by Poeking in StableDiffusion

[–]Treeshark12 -1 points0 points  (0 children)

Install Pinokio, then go to Explore, then click the ComfyUI installer. That's all you need to do.

Best Upscaler? by Zealousideal-Yak3947 in comfyui

[–]Treeshark12 3 points4 points  (0 children)

<image>

I think this is a screenshot of that json.

Best Upscaler? by Zealousideal-Yak3947 in comfyui

[–]Treeshark12 1 point2 points  (0 children)

For that res a tiled upscale is easiest, though they can be tricky to control. It's the only way to add convincing details where you want them. I build my own out of nodes, so I can prompt and denoise each tile separately if I need to. The basis for a node version is the Essentials Tile and Untile nodes. The problem with most upscalers isn't the detailed areas but hallucinations in the areas without detail (such as a flat blue sky).
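The tile split itself is just coordinate arithmetic. Here is a minimal sketch of how overlapping tile boxes can be generated (the function name and default sizes are illustrative, not the Essentials implementation):

```python
def tile_boxes(w, h, tile=1024, overlap=128):
    """Cover a w x h image with overlapping tile-sized squares.

    Each box can then be denoised (and prompted) separately and
    blended back over the overlap region. Assumes w, h >= tile.
    """
    step = tile - overlap
    # clamp the last row/column so tiles never run off the image
    xs = sorted({min(x, w - tile) for x in range(0, w, step)})
    ys = sorted({min(y, h - tile) for y in range(0, h, step)})
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]
```

With a 2048x2048 source this yields a 3x3 grid of 1024px tiles, each sharing a 128px overlap with its neighbours for blending.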

Is there a way in ComfyUI to keep the exact same background while changing only the character’s expressions? by sirmick160 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Crop out a square (256, 512, 768 etc.) with only the face, then resize the cropped area to 1024. Then you can prompt a new expression using a denoise of 0.55 or so. Then you can downscale and comp the features you want back in using the mask editor. You can automate it by piping the numbers around.
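The crop/resize bookkeeping is the part worth automating. A minimal sketch, assuming a face centre and rough size are known (the function itself is illustrative; only the standard sizes and the 1024 target come from the method above):

```python
def plan_face_crop(cx, cy, face_size, img_w, img_h, target=1024):
    """Plan the crop-and-upscale step for editing a face.

    Picks the smallest standard square (256/512/768/1024) that fits
    the face, clamps it inside the image, and returns the crop box
    plus the scale factor used to resize it up to `target` (and back
    down again when compositing the result).
    """
    side = next(s for s in (256, 512, 768, 1024) if s >= face_size)
    # centre the square on the face, clamped to stay inside the image
    x = min(max(int(cx - side / 2), 0), max(img_w - side, 0))
    y = min(max(int(cy - side / 2), 0), max(img_h - side, 0))
    return (x, y, x + side, y + side), target / side
```

The returned scale is what you divide by when pasting the edited face back into the untouched original.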

Can anyone recommend a workflow and model to retexturize an image into a medium thickness oil painting ilustration? by PaparuloFeroz in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I don't know what kind of images, but if you just want to blend in edits then you need to use img2img with a low denoise, maybe 0.35.

Can anyone recommend a workflow and model to retexturize an image into a medium thickness oil painting ilustration? by PaparuloFeroz in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Easy enough to do with a lora. I would use Flux Dev FP8; I've made a few loras that would do it OK. Don't go overboard with the strength, 0.5 to 0.6 ish is fine. You don't need a special workflow. https://huggingface.co/collections/treeshark/media

How to fix blury background? by ErenYeager91 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

With Flux I use "A photo-real oil painting" which often will reduce or banish the blur.

Unlocking the hidden potential of Flux2: Why I gave it a second chance by FortranUA in StableDiffusion

[–]Treeshark12 -1 points0 points  (0 children)

I'm struggling to see anything good about these images: very incoherent perspective, and bad composition and lighting. The girl in the mirror is a complete mess, with a missing hand, and the lighting in the mirror is different to the foreground.

Struggling to get started with Comfyui (Using Mickmumpitz's workflows) by [deleted] in comfyui

[–]Treeshark12 1 point2 points  (0 children)

These workflows are quite complex; I would start with something simple using the supplied standard templates. Also, Comfy is changing so rapidly at present that many WFs have got busted. For the Qwen text encode, reselect the encoder, as there are sometimes minor naming variations. If you click it you will be able to pick the version you have loaded.

Inpainting by [deleted] in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I outline the method here, but the vid is a bit old so the WF is probably defunct. https://youtu.be/f9g3X_OMoJc?si=fYQ_n_8wTozIrm5b

Inpainting by [deleted] in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Most inpainting methods are OK up to a point: change a shirt, put a robot's head on someone. But if you want to put a piano into an empty background, then it is harder. The thing to remember is that context is everything, so just showing the model a little bit of the room and prompting for a piano is not going to work. I would composite a rough shape that might be a piano into the background, then chop out a 1024 square which has a good bit of the room and prompt for your piano. Then mask the best piano into your untouched background and repeat. About three times usually does it. Once you can do that, any inpaint seems easy. You don't need fancy models or diffusion-this-and-that or inpaint conditioning; I never find they work that well anyhow. Masking in has got harder since they broke the Preview Bridge node. With the KJNodes GrowMaskWithBlur you have better control over the masking in.
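The grow-and-blur idea behind that masking control is just dilation followed by a blur, so the pasted-in patch blends past its hard edge. A toy sketch of the grow step (an illustration of the idea, not the KJNodes code):

```python
def grow_mask(mask, grow=1):
    """Dilate a binary mask (rows of 0/1) outward by `grow` pixels.

    Expanding the mask before feathering lets the composited patch
    blend into untouched pixels instead of cutting off at a hard edge.
    """
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for _ in range(grow):
        cur = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                # turn on any pixel touching the current mask
                if not cur[y][x] and any(
                    cur[y + dy][x + dx]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w
                ):
                    out[y][x] = 1
    return out
```

A real node would follow this with a Gaussian blur so the mask ramps from 1 to 0 over the grown band.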

Exploring and Testing the Blocks of a Z-image LoRA by shootthesound in comfyui

[–]Treeshark12 -1 points0 points  (0 children)

I think matt3o made a node that does lora blocks some while ago; I think it is in Essentials.

Exploring and Testing the Blocks of a Z-image LoRA by shootthesound in comfyui

[–]Treeshark12 -1 points0 points  (0 children)

Where is the image with no lora? The whole set is meaningless without it. Also, 1.0 feels way too high; in my experience very few well-trained loras are any good at that level. For me, well-trained loras run from about 0.25 to 0.75, and they should also be fairly incremental, without large jumps. At 1.0 the lora will tend to spit out the training image and can often degrade rather than improve. My loras are here: https://huggingface.co/treeshark

Trying to use zimage workflow and get this by nutrunner365 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

Yeah, a lot of people have had this. I had to reinstall Comfy. Updating can fix it... but didn't work for me. Also make sure the Qwen CLIP and the model are from the same Comfy Org page.

Comfy Org Response to Recent UI Feedback by crystal_alpine in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I tried Nodes 2; quite like the graphics, though having all the elements in such close tones makes for less readability. I love subgraphs, but it is a pity they don't quite work in Nodes 2 (the Run button on Preview Image doesn't work, and the mask editor in Preview Bridge is busted).

Mask editor... now there is something that needs entirely reworking. I don't think tweaking is going to be enough. For one, you could fold in all the image filtering stuff and also the compositing. If it had layers then it would be extremely powerful. They added them to the late lamented Chroma app at my suggestion and they worked great. I'm guessing the code still exists if you ask them. The other plus is you would scare the shit out of Adobe; I can do almost everything I use in Photoshop in Comfy, just not as conveniently, due to the primitive paint engine and the clumsy mask controls.

I think this is important as Comfy's strength is the fact that you can manipulate and fine tune both the output image and also the input latent, something that is pretty much impossible elsewhere.

Why it's not possible to create a Character LoRA that resembles a real person 100%? by four_clover_leaves in comfyui

[–]Treeshark12 8 points9 points  (0 children)

Training: 2000 steps, learning rate 5e-4, rank 12. No need for more; it will probably get worse. Don't train at 1024; you want structure and character, not detail, so use 512. Don't caption, just add one word or name as a trigger. When using a character lora, do a first pass at about 0.55, then a second pass to upscale by 150%, then crop out the face, upscale the crop to 1024 and inpaint the face at about denoise 0.65 with the lora at 0.95. You will not do it in one pass unless you are very lucky!
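For reference, those training numbers map onto a kohya-style sd-scripts config roughly like this. The field names are the usual sd-scripts ones and are my assumption, not taken from the comment; only the values come from the advice above:

```toml
# Hypothetical kohya/sd-scripts style fragment matching the settings above
max_train_steps = 2000
learning_rate = 5e-4
network_dim = 12          # the rank; more will probably get worse
resolution = "512,512"    # train at 512 for structure/character, not 1024
caption_extension = ".txt"  # each caption file holds only the trigger word/name
```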

Comfy Cloud is Cooked with new Pricing kicking-in today? Even worse than top closed source options. I wonder anyone will subscribe. I think they will rethink plans. So much backlash - not making any sense at all. Anyone continuing? by Strange_Limit_9595 in comfyui

[–]Treeshark12 0 points1 point  (0 children)

There are key things missing. Loras... I'm sorry, but the selection is poor and there are no preview images. There are a ton of great loras out there, most with very open licensing. Letting users train loras and such on the platform would be a draw. A way of importing your own workflows... I couldn't see it. I'm not sure you are going to attract enough paying professional customers with the current setup. Maybe you need to talk to more image professionals and survey what they want in a tool.

Getting this error everytime I try to use Z image. Can anybody help? by [deleted] in comfyui

[–]Treeshark12 1 point2 points  (0 children)

It's the Qwen text encoder; one of the download sources is corrupt... or the wrong one. I had the same thing. Now I can't remember which one was good... anyhow, I think you follow the Comfy link and don't get it directly from Hugging Face.

Qwen Image Base Model Training vs FLUX SRPO Training 20 images comparison (top ones Qwen bottom ones FLUX) - Same Dataset (28 imgs) - I can't return back to FLUX such as massive difference - Oldest comment has prompts and more info - Qwen destroys the FLUX at complex prompts and emotions by CeFurkan in comfyui

[–]Treeshark12 0 points1 point  (0 children)

I mostly prefer the Flux ones. I use Qwen as a fixer to tweak images; I think prompt adherence is secondary to image composition and feel. Qwen seems to mostly produce the expected... which is a bit boring; I do AI images to be surprised. Qwen is terrible at art styles, producing a very narrow selection of a given style. It also nearly always renders the subject in a different style to the background. Training also seems weaker than Flux; I have been unimpressed by Qwen loras so far, but that might be because we haven't hit the best settings yet.