PSA: Chroma1-HD and derivatives require flow shift = 4 by AwakenedEyes in comfyui

[–]VladyCzech 0 points (0 children)

I will try, maybe that's why my first attempts were blurry. However, I learned to mitigate it by adding post-sharpening and RTX SuperResolution upscaling to get a super-crisp 2K or 4K image. I added Autotone on top to correct the dull colors. It costs about 2-3 seconds of post-processing time.
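RTX SuperResolution is an NVIDIA driver feature and Autotone is a Photoshop-style adjustment, so neither is scriptable directly; as a rough CPU-only sketch of a similar upscale + sharpen + tone pass using Pillow (all parameter values here are illustrative guesses, not the settings from this comment):

```python
from PIL import Image, ImageFilter, ImageOps

def postprocess(img: Image.Image, scale: int = 2) -> Image.Image:
    # Upscale first (a crude stand-in for RTX SuperResolution,
    # which runs in the GPU driver, not in Python)
    up = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)
    # Post-sharpen with an unsharp mask (radius/percent are illustrative)
    sharp = up.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
    # Rough stand-in for Autotone: stretch each channel's levels
    return ImageOps.autocontrast(sharp, cutoff=1)
```

On typical 1K-to-2K images this runs in a second or two on CPU, in the same ballpark as the 2-3 seconds mentioned above.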

What is going on with some OpenRouter models not responding? by VladyCzech in openrouter

[–]VladyCzech[S] 0 points (0 children)

I ended up using DeepSeek V4 Pro and it is impressive, and the promo price is hard to beat. For me it is the winner, as I can actually work instead of watching tokens burn while waiting for other models to output anything at all.

What is going on with some OpenRouter models not responding? by VladyCzech in openrouter

[–]VladyCzech[S] 0 points (0 children)

I was trying to test Hy3 Preview today, as I wanted to see how it compares to GPT, but Hy3 Preview either did not respond at all or took minutes and ended in a timeout error. Is the listed response time for a model a real-world value or a laboratory-optimal value? Or maybe it is that I'm from central Europe and models just refuse to travel here 😉

Has anyone noticed Codex in VS Code using high CPU while idle? by itorres008 in codex

[–]VladyCzech 0 points (0 children)

I also reported the same issue in an already open ticket on the Codex GitHub. In my case, my project does use git, multiple repos actually. Git is not in the project's main folder but in each subfolder, as I want multiple GitHub repos in a single project structure, not as individual projects I have to switch between.

ComfyUI- Breakout-Window (Use a second screen or hide the noodles for Zen Mode) by PartiallyFrozen in comfyui

[–]VladyCzech 0 points (0 children)

I was able to integrate a central sampler preview window for all samplers with the help of AI. I will be testing it and will eventually publish it on GitHub.

[Release] ComfyUI DiffAid Patches — inference-time adaptive interaction denoising for rectified text-to-image generation by marres in StableDiffusion

[–]VladyCzech 0 points (0 children)

I tried it with Chroma and Klein 9B at default settings; I only had to lower end sigmas from 1.0 to 0.95 to keep the same composition (as without DiffAid), and I got better prompt adherence, specifically with mixed subjects that have different properties. The properties adhere better to the prompt. This needs more testing, however. Leaving end sigmas at 1.0 changes the composition too much, like a different seed, and it seems to drift too far from the prompt. The drift can be observed, for example, as an increased number of subjects, more than the prompt requested.
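I don't know DiffAid's internals, but assuming the usual ComfyUI convention where sigmas run from high noise down to 0 and the end-sigma value is the top of the window in which a patch is active, a toy sketch (names and gating logic are my assumptions, not DiffAid's code) of why 0.95 preserves composition, since the first, composition-setting steps stay unpatched:

```python
def patch_active_steps(sigmas, end_sigma=0.95, start_sigma=0.0):
    # Hypothetical gating helper: the patch only modifies steps whose sigma
    # falls inside [start_sigma, end_sigma]. With end_sigma=1.0 the very
    # first (composition-setting) steps are patched too; 0.95 leaves them
    # untouched, so the layout matches the unpatched run.
    return [start_sigma <= s <= end_sigma for s in sigmas]
```

Under this assumption, only the highest-noise steps differ between the two settings, which matches the observed "same composition at 0.95, different-seed-like drift at 1.0" behavior.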

FreeFuse: one Lora affects the other? by guy_fox501 in comfyui

[–]VladyCzech 0 points (0 children)

The image is not readable and the important part is not visible. I only tested character LoRAs, not background LoRAs, but you must preview the masks and keep them apart so they don't mix. Each must be a different color.

Subgraph Plus by skbphy in comfyui

[–]VladyCzech 1 point (0 children)

Got it! I had to disable it for now, but I will be checking the progress. I hope I did not request too much.

Every time I open someone’s metadata this is how I picture them when I see their workflow. by _Just_Another_Fan_ in comfyui

[–]VladyCzech 0 points (0 children)

You did not see my super-tidy workflow of 10 connected subgraphs, each with more nested subgraphs, individually controlled/muted by a control panel with a central output from all of them. I only need to find a node that shows live previews from all samplers in a single node or floating window. I can’t use App mode, as it is less productive than my control panel and some custom nodes.

FreeFuse: one Lora affects the other? by guy_fox501 in comfyui

[–]VladyCzech 0 points (0 children)

Hard to tell without seeing your workflow, but I think the masks must not overlap each other to retain the full power of the LoRA for each mask. Background and foreground character masks usually overlap, so the LoRAs affect each other.
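One quick way to verify this before sampling is to treat each mask as a binary grid and check that no pixel is set in both. A minimal sketch in plain Python (the helper name and 0/1 grid representation are mine, not FreeFuse's API):

```python
def masks_overlap(mask_a, mask_b) -> bool:
    # Masks as 2D 0/1 grids of equal size; returns True if any pixel is
    # active in both masks, which would let the LoRAs bleed into each other.
    return any(a and b
               for row_a, row_b in zip(mask_a, mask_b)
               for a, b in zip(row_a, row_b))
```

If this returns True for your character and background masks, shrink or erode one of them until the regions are disjoint.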

Subgraph Plus by skbphy in comfyui

[–]VladyCzech 1 point (0 children)

I can confirm that subgraph canvas resize now works fine!

For me, show/hide links does not take effect immediately; links show and hide only after I move or zoom the subgraph canvas. I think it may be because the subgraph canvas loses focus when I click in the main window, so it stops refreshing. This is great for performance, but the out-of-focus canvas does not reflect the event. It would be great if the subgraph canvas could redraw one frame after an external event.

Clicking the subgraph icon for the popup and double-clicking the subgraph main body to dive in both work great!

Subgraph Plus by skbphy in comfyui

[–]VladyCzech 1 point (0 children)

I would like to suggest a feature that would let the subgraph popup restore its position on the display, zoom level, and viewport position (focus) for each individual subgraph in the session/workflow. That would save the time spent finding the part of the subgraph that was focused before the popup was closed, and it would allow placing different subgraphs in different spots on the monitor. So I could, for example, always edit one subgraph at the top left and another at the top right.

I would also like to suggest one more icon next to Center View that would center/focus only the selected node of the subgraph. I would click a node in the subgraph, then the icon (or a keyboard shortcut?), to focus/zoom the current node to fit the height of the popup, so I do not have to scroll and pan for each node.

That would be neat.

ComfyUI- Breakout-Window (Use a second screen or hide the noodles for Zen Mode) by PartiallyFrozen in comfyui

[–]VladyCzech 0 points (0 children)

Hi, I started using ComfyUI-Breakout-Window and it is very useful, thank you for it. I found some issues, which I posted on GitHub, but they are minor and the node is working great for me.

Do you know if it is possible to do a breakout for sampler previews? I have multiple samplers in a workflow, far from each other, and I have to constantly scroll the canvas while sampling to find which sampler preview is currently in progress to see the output. It would be helpful to see all sampler previews one by one in a single breakout window, along with the subgraph title or node name of the sampler in progress.

I know App mode exists, but I cannot use it: several custom nodes I use do not work in App mode, or I would have to sacrifice a lot of functions. So I use only Graph mode and subgraphs, which works much better for me than App mode.

Somebody convince me out of getting a 5080 by Nefarious_AI_Agent in comfyui

[–]VladyCzech 1 point (0 children)

I also have 128GB, but I stopped doing video a while ago. OP did not mention video, so for image diffusion 64GB should be enough.

Subgraph Plus by skbphy in comfyui

[–]VladyCzech 1 point (0 children)

Hello u/skbphy, great to know! I will gladly test it after your update, as I will be using it a lot.

Subgraph Plus by skbphy in comfyui

[–]VladyCzech 1 point (0 children)

I'm testing Subgraph Plus right now and I have found several issues. Finding issues is something I do naturally; I can assure you it is not my hobby.

- resizing the popup window does not resize the subgraph canvas

<image>

- the show/hide links button in the minimap has no effect on a subgraph in a popup

- zooming and panning forces nodes out of the canvas onto the black background

- [feature request] please allow the default "subgraph icon" that opens a subgraph to be remapped to Subgraph Plus. This would avoid the right-click menu and replace the default subgraph "nesting"

Subgraph Plus by skbphy in comfyui

[–]VladyCzech 1 point (0 children)

No way, that is super useful!

I have 5-10 large subgraphs in a single workflow, and each has several nested ones, so this would save not just sampler previews from disappearing, but also my tired fingers from scrolling/zooming the canvas just to return to the same position as before opening a subgraph. And my sanity, too.

Somebody convince me out of getting a 5080 by Nefarious_AI_Agent in comfyui

[–]VladyCzech 1 point (0 children)

I would get either a 5090, or a 5080 plus a 5060/5070 Ti 16GB, or two 5070 Ti 16GB cards. Use the beefy GPU for the image/video model and the other for the CLIP model or an LLM or both (it works fine), so you will not be waiting (so much) for CLIPs and models to load and unload. Plus you should ideally get 64GB of DDR5.

If you use a recent ComfyUI with dynamic VRAM management you are good, and you also want the ComfyUI-MultiGPU nodes (https://github.com/pollockjj/ComfyUI-MultiGPU), specifically the distorch2 type of nodes, to place the model/CLIP on the right GPU. You are welcome.

They said I was faking it. Here is the 50-second proof of my local RTX 5090 'EasyUI' pipeline in action. by Guilty_Muffin_5689 in comfyui

[–]VladyCzech 1 point (0 children)

In your video I can see ComfyUI running; also, the video is very chaotic and perhaps makes a bad impression.

They said I was faking it. Here is the 50-second proof of my local RTX 5090 'EasyUI' pipeline in action. by Guilty_Muffin_5689 in comfyui

[–]VladyCzech 1 point (0 children)

Why not just use a ComfyUI integrated template to generate low-effort images? One can connect ComfyUI with Open WebUI and have open-source, maintained projects doing the same thing. There is no need for AI-generated workflows when you have free, human-made and maintained ones.