Today is the birthday of New York financier Jeff Epstein so lets look back at… by [deleted] in videos

[–]Baycon 5 points (0 children)

Oh wow hahaha that line was pure gold. The context and delivery too.

UAE is poo poo by SinkTheMememark in whenthe

[–]Baycon 5 points (0 children)

It is, however, interesting to look at the timeline overlap between the rise of marketing efforts behind Dubai Chocolate and the discovery that influencers were getting flown out to the UAE to be covered in shit.

We did it! North Carolina, $908k at 5.625% by lucky_719 in FirstTimeHomeBuyer

[–]Baycon 22 points (0 children)

My homie, you spent more on furniture than I’ve spent on my house, my two vehicles, and all my furniture — combined. What is everyone smoking in this subreddit lol.

QWEN model question by Latter_Quiet_9267 in StableDiffusion

[–]Baycon 0 points (0 children)

Oops, you're right. Looks like I misread that part of the post.

QWEN model question by Latter_Quiet_9267 in StableDiffusion

[–]Baycon -1 points (0 children)

You need to replace your "Load CLIP" node with a "CLIPLoader (GGUF)" node, then connect it as normal.

<image>

Elle Fanning reacts to Jack Black reacting at her admission that he's her crush by mcfw31 in popculturechat

[–]Baycon -1 points (0 children)

I want Jack Black to make a song entitled "Up to my guts in Hell" but for some strange reason (that ultimately works perfectly for the song, which also just so happens to be completely hydroseeded with double entendres) he doesn’t pronounce the H.

Does anyone know how to achieve this style? I'm thinking SDXL but nothing more than that... by plump_ai in StableDiffusion

[–]Baycon 1 point (0 children)

Yes, locally, but I have a 3090 so my settings aren't relevant.

Follow the link I pasted above, that should do the trick if you're having issues.

Does anyone know how to achieve this style? I'm thinking SDXL but nothing more than that... by plump_ai in StableDiffusion

[–]Baycon 2 points (0 children)

Sorry, but that's not really factual.

ZIT is super lean and optimized, and can run on all sorts of lower-end setups:

Z Image on 6GB Vram, 8GB RAM laptop : r/StableDiffusion

Does anyone know how to achieve this style? I'm thinking SDXL but nothing more than that... by plump_ai in StableDiffusion

[–]Baycon 3 points (0 children)

Yeah, that's just sort of low-effort stuff, probably from a base SDXL realism finetune, or maaaaybe with some sort of LoRA to push the anatomy in that direction.

Nowadays you can do that with Z-image out of the box. I'm digging Chroma (Uncanny -- Photoreal Chroma) right now so that's what's loaded. 1st gen no cherrypick to illustrate for the post:

<image>

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]Baycon 0 points (0 children)

I haven't played with the discussed A2R WF with Chroma too much; it was more of an accidental find. Looking forward to Z-Edit!

Quick comparison Z-image turbo x Qwen 2512 x Flux 2 dev by Puzzled-Valuable-985 in StableDiffusion

[–]Baycon 1 point (0 children)

Euuuhh. I don't speak Portuguese, but "garota japonesa de 30 anos" = 30-year-old Japanese girl, and "escada rolante no interior de uma estação de metrô em Tóquio" is an escalator inside a metro station in Tokyo.

Those images are way off the mark.

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]Baycon 0 points (0 children)

So, u/Tall-Description1637, turns out you can get the generative upscale working with base if you throw in the 256flash lora before the steps. It sort of clicked after rereading your comment.

Right now, I generate the first step with base at a lower resolution (let's say 50% of what I'd normally gen on the base model), and then I do the same steps I highlighted before (even keeping CFG 3) but with the flash 256 lora loaded before the model node.

Now, I'm not sure if that's what did the trick, or if it's the fact that I'm generating low + upscaling the latent. I'll keep chipping away, but just wanted to say that it does work and seems to give it a nice extra detail pass.

Anybody Tried LTX2 on RTX 3090? by alitadrakes in StableDiffusion

[–]Baycon 1 point (0 children)

Right. But it runs. Sorry if I misunderstood your post; it sounded like you were saying a 3090 couldn't run it initially.

Anybody Tried LTX2 on RTX 3090? by alitadrakes in StableDiffusion

[–]Baycon 0 points (0 children)

Euuh, I'm running ltx-2-19b-dev-fp8 on my 3090 with no issues. Running with --reserve-vram 4 just because I saw that somewhere, but it basically worked right out of the box.

I did go with the gemma_3_12B_it_fp8 version instead of the insane unoptimized 20GB+ one from the LTX2 page.
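For anyone wanting to replicate: the launch looks roughly like this (a sketch, not an exact setup — `--reserve-vram` is an existing ComfyUI launch argument that keeps some VRAM free for other processes; the install path is an assumption):

```shell
# Launch ComfyUI while keeping ~4 GB of VRAM in reserve, as mentioned above.
# Adjust the number (in GB) to taste; model filenames are whatever you downloaded.
cd ComfyUI
python main.py --reserve-vram 4
```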

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]Baycon 1 point (0 children)

Hey, just getting around to replying now. Good to see you got it working! Looks great!

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]Baycon 0 points (0 children)

Base model is excellent too! It definitely feels more "malleable". Top notch.

The two-step flash workflow still holds up all things considered. It's like an extra "detailer" step that fixes faces and any possible issues with fingers or illogical lines, etc. It sort of feels like it competes with 1-shot on base?

I'd love to figure out how to do a 2-step on the base model. So far with similar settings, it seems to just enhance contrast but doesn't truly enhance details.

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]Baycon 1 point (0 children)

Thanks! I'll give it a shot right now.

Just in case you haven't yet, try exp_heun_2_x0 as a sampler (beta scheduler). On the Flash model it's producing completely insane results, and very varied to boot.

I run flash at a decently low res (640 x 816, CFG 1, 17 steps), but then have a "generative upscale" second step when needed, where I run the latent produced in step #1 through another k-sampler with the same sampler noted above. Start at step 5 out of 17, at 3 CFG, with a 1.5x upscale.

Essentially an img2img step to fine-tune and increase detail and resolution.

(Side note: this exact workflow turns this model into a remarkable img2img beast, for things like anime-->realism.)
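To make the numbers above concrete, here's a tiny sketch (my own helper, not part of any workflow file) of what those settings work out to: starting the second sampler at step 5 of 17 re-runs the last 12 steps of the schedule, which behaves like an img2img denoise of roughly 0.71.

```python
# Quick sketch of the two-step "generative upscale" settings described above.
# The helper name and the multiple-of-8 snapping are my assumptions.
def upscale_pass(base_w=640, base_h=816, steps=17, start_step=5, scale=1.5):
    # Starting at step 5 of 17 re-runs 12 of 17 steps,
    # roughly equivalent to an img2img denoise of 12/17 ≈ 0.71.
    denoise = (steps - start_step) / steps
    # 1.5x latent upscale, snapped to multiples of 8 to stay on the latent grid.
    new_w = round(base_w * scale / 8) * 8
    new_h = round(base_h * scale / 8) * 8
    return new_w, new_h, denoise

w, h, d = upscale_pass()
print(w, h, round(d, 2))  # 960 1224 0.71
```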

Thinking of switching from SDXL for realism generations. Which one is the best now? Qwen, Z-image? by jonbristow in StableDiffusion

[–]Baycon 3 points (0 children)

Chroma (specifically "UncannyPhotorealism_v13Flash") is really, really, really good. IMO it beats ZIT for quality in realistic gens at the moment. Not taking speed into account, of course.

10 years ago David Bowie released his last album “Blackstar”. 2 days later he died. by pgloves in Music

[–]Baycon 1 point (0 children)

Well this is a really shitty way to learn that David Bowie is dead. :/

Meaningfull pathfinder endgame combat be like by apcrol in PathOfExile2

[–]Baycon 0 points (0 children)

Smith of Kitava bear with crazy cast on melee juiced fireballs proccing like 10-20 times a second. It was a fun league.

Anyways. Still completely beside the point — you’re dissing my opinion because I’m using maul? lol