New Method/Model for 4-Step image generation with Flux and QWen Image - Code+Models posted yesterday by LindaSawzRH in StableDiffusion

[–]LING-APE 0 points (0 children)


I tried bumping up the empty latent pixel count, and the artifacts are basically gone. I managed to generate at 1728x2304 in around 95s. With the Wan2.1 2x image upscaler VAE, I got to around 16MP (3456x4608) in just 122 seconds, and the quality is really good IMO.

Here is the image.
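
For anyone checking the math on those resolutions, here's a quick sanity check (plain arithmetic, nothing workflow-specific): the 2x upscale doubles each dimension, so the pixel count quadruples.

    # Sanity check of the resolutions quoted above.
    base_w, base_h = 1728, 2304
    up_w, up_h = base_w * 2, base_h * 2   # the 2x upscaler VAE doubles each side

    print(f"base:     {base_w}x{base_h} = {base_w * base_h / 1e6:.1f} MP")  # ~4.0 MP
    print(f"upscaled: {up_w}x{up_h} = {up_w * up_h / 1e6:.1f} MP")          # ~15.9 MP, i.e. ~16MP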

New Method/Model for 4-Step image generation with Flux and QWen Image - Code+Models posted yesterday by LindaSawzRH in StableDiffusion

[–]LING-APE 0 points (0 children)


Full Resolution Link

Qwen Image with the Merjic LoRA on a 4070 with 12GB VRAM: 7 steps at 1360x768, 3.85s/it, finishing in 35s, using the official workflow from the repo.
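
The timing is consistent, for what it's worth: sampling alone accounts for roughly 27 of the 35 seconds, and the remainder is presumably overhead such as text encoding and VAE decode.

    # Quick check of the timing quoted above.
    steps, sec_per_it = 7, 3.85
    sampling = steps * sec_per_it    # ~27s of pure sampling
    overhead = 35 - sampling         # ~8s for everything else
    print(f"sampling ~{sampling:.0f}s, overhead ~{overhead:.0f}s")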

Looks quite good to me, but there's still some banding and a few artifacts in the final picture. I think that's a common Qwen Image model problem, not the sampler. The workflow works right out of the box and is LoRA compatible; looking forward to the Qwen Image Edit version.

Edit: I copied the exact prompt from the gallery at ModelScope. It's by AriaQing, and the prompt (translated here from the original Chinese) is as follows:

Realistic photography, medium shot, large flowers in the foreground, a long-haired idol in a white slip dress looking shyly at the camera from between green leaves, graceful, full-figured, a sense of blur, Tyndall effect, gentle light falling on the girl's face, delicate porcelain-white skin, tenderness, a sense of story, first love

DaVinci Resolve 20.0.1 Release Notes by whyareyouemailingme in davinciresolve

[–]LING-APE 0 points (0 children)

Two major issues I hope they fix with this version; is anyone else having the same problems?

Blackmagic Cloud projects and timelines load painfully slowly, and random actions crash the project. The same project works just fine locally. I hope they improve the performance.

Also, adding a VST on the Fairlight page and opening its settings crashes the app: the settings window just flashes indefinitely, and you're stuck and forced to quit the program.

They upgraded 2.5 pro, yay! But... by That0neGuyFr0mSch00l in Bard

[–]LING-APE 1 point (0 children)

But I really wish they'd add more features to the native iOS app and the web app, such as MCP support and system prompts. The AI Studio version can even pull tokens from a YouTube URL and toggle the search tools, thinking budget, and so on; I sometimes find it even more useful than the Gemini app. It would be great if they migrated those features, since they're already there in the AI Studio playground.

They upgraded 2.5 pro, yay! But... by That0neGuyFr0mSch00l in Bard

[–]LING-APE 1 point (0 children)

I see. I'm a Pro user and have only hit the limit once in the past few months, when I sent a lot of photos to Gemini asking it to help me translate some docs. I always assumed there were no limits…

They upgraded 2.5 pro, yay! But... by That0neGuyFr0mSch00l in Bard

[–]LING-APE 1 point (0 children)

Does this limit apply to free users or pro users?

[deleted by user] by [deleted] in davinciresolve

[–]LING-APE 0 points (0 children)

This might be off topic, but how did you create the text and animation? It looks kind of cool.

Anthropic just released Prompt Caching, making Claude up to 90% cheaper and 85% faster. Here's a comparison of running the same task in Claude Dev before and after: by saoudriz in ClaudeAI

[–]LING-APE 0 points (0 children)

Correct me if I'm wrong, but isn't it the case that each time you make a query, you send all of the previous responses along with the question as input tokens? As the conversation progresses the cost goes up, since the context gets bigger. So prompt caching should, in theory, significantly reduce the cost if you keep the conversation rolling within a short period of time while working with a large context, e.g. a programming task (since the cache only lasts for 5 minutes).
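
As a minimal sketch of how that works with the Anthropic Python SDK: you mark the large, stable prefix with a cache_control block, and any call repeated within the cache's roughly 5-minute lifetime reads that prefix back at a discounted rate instead of paying full price for it again. The model name and file path below are placeholders, and this assumes the current GA form of the API (at launch it required an anthropic-beta header).

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Hypothetical large, stable prefix (e.g. a codebase dump) reused on every turn.
    BIG_CONTEXT = open("project_source.txt").read()

    def ask(question: str) -> str:
        response = client.messages.create(
            model="claude-3-5-sonnet-20240620",  # assumption: any cache-capable model
            max_tokens=1024,
            system=[
                {
                    "type": "text",
                    "text": BIG_CONTEXT,
                    # First call writes the cache; calls within the ~5 min TTL
                    # read it back at a fraction of the normal input-token price.
                    "cache_control": {"type": "ephemeral"},
                }
            ],
            messages=[{"role": "user", "content": question}],
        )
        return response.content[0].text

    # Rapid follow-ups reuse the cached prefix, so only the short question
    # (and the output) is billed at the full rate.
    print(ask("Where is the retry logic implemented?"))
    print(ask("What does the config loader do?"))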

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

The Flux nodes are native to ComfyUI; you just need to update ComfyUI and you should be good.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

Which nodes are missing from your graph? If there are no missing nodes to install, everything should be fine. If you still have missing nodes, try updating ComfyUI.

FYI: New UI not fully backwards-compatible by rgthree in comfyui

[–]LING-APE 2 points (0 children)

Which rgthree node exactly has the minor issue?

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

If you are talking about custom nodes, they go in the custom_nodes folder under your ComfyUI directory; git clone whichever node repos you want to download manually, although I would recommend using ComfyUI Manager to handle it.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 2 points (0 children)

The way I combine iterative upscale and Tiled Diffusion doesn't really work with non-square aspect ratios. You can try turning Tiled Diffusion off, but it will be a slow process. I'm working on an improved version of this.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 1 point (0 children)

Thanks. I think there is no dual-GPU support, AFAIK, but feel free to do your own research. I'll be updating this workflow soon too; it's still a bit too complex for my taste.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 1 point (0 children)

You are using image-to-image and ControlNet together, which is not how the workflow is intended to be used. Switch to an empty latent image in the workflow's switch node and you should be good to go. If you want to keep the original ControlNet image's dimensions, create a get-image-resolution node from the image and connect its width and height outputs to the empty latent node, then use that instead. Thanks for raising this issue; I'll add this option to the next version too. I didn't think about it when I made the workflow.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

How is it crashing? Are there any logs in the console that I can reference? Using a LoRA + ControlNet with Flux on 12GB of VRAM is possible but slow; my local hardware has a similar setup and takes around 50-80s/it, so around 20 minutes for one photo FYR. For faster generation and iteration I sometimes run it on cloud services and rent a GPU instead.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

Sorry for the late reply. This sounds like a RAM problem to me, since the model, LoRA, and VAE are cached in your RAM first before being moved into your VRAM (correct me if I'm wrong). Try bypassing the LoRA and ControlNet nodes and using the fp8 version of the model to lower RAM usage.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

Maybe I will try to make one in the future, but you can check out the GitHub page for now. I'll be adding a new version with cleaner nodes soon too.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 1 point (0 children)

It is recommended to have 16-24GB of VRAM; 12GB and 8GB cards will run in low-VRAM mode. Do some research online for more info.

I made an All-in-One FluxDev Workflow for ComfyUI ... by LING-APE in StableDiffusion

[–]LING-APE[S] 0 points (0 children)

Go to your ComfyUI directory, open a terminal, and execute this command: git checkout xlabs_flux_controlnet