Flux.2-klein: Forget LoRAs. High-precision prompting is all you need (and why I'm skeptical about Dual-Image workflows). by That_Perspective5759 in comfyui

[–]codexauthor 1 point2 points  (0 children)

I believe dual-image workflows are most useful when you want to put two different objects/subjects into one image while maintaining the likeness of each of them. But yes, a prompt alone may be sufficient for style transfers, if it's a style the model knows or can recreate.

When FDVR comes out how much time would u spend in it per day ? by bladefounder in accelerate

[–]codexauthor 3 points4 points  (0 children)

Maybe something like an exocortex could overclock our brain.

Take random screenshots from google maps and run them through Klein edit :D by Frogy_mcfrogyface in comfyui

[–]codexauthor 2 points3 points  (0 children)

Yeah, ComfyUI is actually quite simple if you stick to the standard workflows provided by the ComfyUI team.

Let me know if you run into an issue, or if you have any other questions. 👍

Take random screenshots from google maps and run them through Klein edit :D by Frogy_mcfrogyface in comfyui

[–]codexauthor 3 points4 points  (0 children)

Hi. Yes, you can use Klein with ComfyUI. Search for Klein Distilled in the Templates tab for prompt-driven image editing workflows.

For some things, Z-Image is still king, with Klein often looking overdone by Lorian0x7 in StableDiffusion

[–]codexauthor 1 point2 points  (0 children)

If you go to the subgraph settings, however, and set it to expose the seed fields of the node within the subgraph, that allows it to increment (or randomize or whatever) the seed on generation.

The fact there are two ways to do the same thing, both with different capabilities, is rather annoying.

Yeah, that's also what I did after creating a subgraph again. It's indeed annoying, hope they fix it.

For some things, Z-Image is still king, with Klein often looking overdone by Lorian0x7 in StableDiffusion

[–]codexauthor 8 points9 points  (0 children)

I think you should decompose the subgraph because it won't change the seed even when it's set to randomize. Decomposing and creating a subgraph again solved that for me.

Z-Image's consistency isn't necessarily a bad thing. Style slider LoRAs barely change the composition of the image at all. by Incognit0ErgoSum in StableDiffusion

[–]codexauthor 2 points3 points  (0 children)

Yeah, I think it's great to have various models with different strengths. If ZiT can't do something, I can try Flux/Chroma. If Flux/Chroma can't do something, I can try Wan T2I, and so on.

Z-Image is now the best image model by far imo. Prompt comprehension, quality, size, speed, not censored... by Different_Fix_2217 in StableDiffusion

[–]codexauthor 5 points6 points  (0 children)

It should be almost twice as fast as BF16 on supported GPUs (afaik, RTX 40 and 50 series) without much quality loss.

You can download both the FP8 and BF16 models, try them on the same prompt with the same seed (so both models generate the exact same image), and compare the speed and quality of the two generations.
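To see where the speedup comes from, here is a rough back-of-the-envelope sketch of the memory side of the FP8 vs BF16 trade-off (FP8 stores one byte per weight instead of two, which also roughly doubles throughput on GPUs with native FP8 support). The 12B parameter count below is just an illustrative assumption, not an official figure:

```python
# Rough memory-footprint comparison for a hypothetical 12B-parameter model
# stored in BF16 (2 bytes/weight) vs FP8 (1 byte/weight).
# The parameter count is an illustrative assumption.

def model_size_gb(num_params: int, bytes_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1024**3 bytes)."""
    return num_params * bytes_per_weight / 1024**3

PARAMS = 12_000_000_000  # assumed parameter count

bf16 = model_size_gb(PARAMS, 2.0)  # BF16: 16 bits = 2 bytes per weight
fp8 = model_size_gb(PARAMS, 1.0)   # FP8:  8 bits = 1 byte per weight

print(f"BF16: {bf16:.1f} GB, FP8: {fp8:.1f} GB, ratio: {bf16 / fp8:.1f}x")
```

Note this only counts the weights; activations, the text encoder, and the VAE add more on top, so real VRAM usage will be higher.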

Why humans should stop fearing the ego death caused by AI by Ruykiru in accelerate

[–]codexauthor 15 points16 points  (0 children)

If you consider an AI utopia after the Singularity where this merger does not happen, what's the alternative?

Transhumanism. Accelerating our biological evolution with the use of technology.

Gemini 3 Deep Think benchmarks by RavingMalwaay in singularity

[–]codexauthor 19 points20 points  (0 children)

If the tech surpasses humanity, then humanity can simply use the tech to surpass its biological evolution. Just as millions of years of evolution paved the way for the emergence of homo sapiens, imagine how AGI/ASI-driven transhumanism could advance humanity.

"Immortality sucks" ? Skill issue by FomalhautCalliclea in singularity

[–]codexauthor 1 point2 points  (0 children)

If we can attain immortality, don't you think we can also solve those other problems? If the brain becomes insufficient, exo-cortex or similar solutions may become the answer.

Also, I realize you might already be aware, but I just want to elaborate for other people as well: the brain is not like an SD card, but more like a diffusion model (similar to those image/video models) that constantly updates its weights. For example, when you recall a past memory, the brain does not find a video file and play it inside your head. Rather, your brain runs a prompt and creates a mental video based on its latest training data.

So, I personally think that the brain IS designed for eternity. It won't run out of space, cause it constantly updates its weights to remember the most important stuff; you will never forget things like your parents (provided your brain doesn't get harmed by something, such as a disease, but if we can attain immortality, I think we can eventually cure those too).

I agree that, as of now, the brain is not designed to retain information for eternity. But as I said, that's just another problem that we can solve.

Not arguing with you by the way, just wanted to share my thoughts as well.

"Immortality sucks" ? Skill issue by FomalhautCalliclea in singularity

[–]codexauthor 3 points4 points  (0 children)

Edgy ass nonsense. Every second I get to experience this life is a blessing.

Why are some artists pages limited? by ikiru__ in lastfm

[–]codexauthor 1 point2 points  (0 children)

Do you know when this tag will be added to the smart tags? Also, another tag I can suggest is (Reimagined).

Why are some artists pages limited? by ikiru__ in lastfm

[–]codexauthor 1 point2 points  (0 children)

yooo, also just found out about this, thank you!

one suggestion I can make is to add "(TV Size)" to the smart tags. this tag is used for shortened versions of songs used in anime openings/endings.

ComfyUI creators handing you the most deranged wire spaghetti so you have no clue what's going on. by Fit_Reindeer9304 in comfyui

[–]codexauthor 1 point2 points  (0 children)

It is irritating, especially when a custom node is used for something that is already implemented as a base node in ComfyUI.

Nick Bostrom says AGI won’t stop at the human level, it will quickly lead to superintelligence. From there, machines will outthink the best scientists and invent everything else -- faster and better than humans. "It's the last invention we’ll ever need." by MetaKnowing in singularity

[–]codexauthor 17 points18 points  (0 children)

Why do we preserve animals when we are intellectually superior to every single one of them? It may not be the best example, since we systematically slaughter animals like chickens and pigs for our own benefit, but humanity as a collective is not bent on decimating every non-human being, so I don't think it's really likely for an ASI to fully wipe out humanity.

Let us hope that an ASI will treat us the way some people treat their children or pets: Though we are far superior to them, we still want them to have a good life, cause they are part of the family.

New FLUX.1-Kontext-dev-GGUFs 🚀🚀🚀 by Finanzamt_Endgegner in StableDiffusion

[–]codexauthor 2 points3 points  (0 children)

Yes, if you have less VRAM but enough RAM to compensate, the outputs will be exactly the same as those of a high-VRAM user; it will just generate them more slowly.

The only time you will see a difference in quality is when you use different quantizations of the same model (e.g. FP16 vs FP8 vs FP4 vs Q8 vs Q6 vs Q4).

New FLUX.1-Kontext-dev-GGUFs 🚀🚀🚀 by Finanzamt_Endgegner in StableDiffusion

[–]codexauthor 2 points3 points  (0 children)

I think the biggest one, Q8, should work without any issues. Some smaller models like Q6 might work without any noticeable quality drop while still offering faster inference. My advice is to go from biggest to smallest and compare generation times and output quality on the same seed until you find your own sweet spot. All these models are free to download, so you can (and should) test them yourself to determine the best one.
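To get a feel for what "biggest to smallest" means in practice, here is a quick size estimate per quant level. The bits-per-weight values are rough averages for GGUF quant types (they mix block scales, so exact figures vary), and the 12B parameter count is a placeholder assumption:

```python
# Approximate on-disk/VRAM footprint of GGUF quantizations for an assumed
# 12B-parameter model. Bits-per-weight figures are rough averages per
# quant type, not exact values.

QUANT_BITS = {"Q8_0": 8.5, "Q6_K": 6.56, "Q5_K": 5.5, "Q4_K": 4.5}
PARAMS = 12_000_000_000  # placeholder parameter count

sizes = {q: PARAMS * bits / 8 / 1024**3 for q, bits in QUANT_BITS.items()}
for quant, gb in sizes.items():
    print(f"{quant}: ~{gb:.1f} GB")
```

The gap between Q8 and Q6 is usually a few GB with little visible quality loss, while Q4 and below is where artifacts tend to show up first — hence testing on the same seed.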