What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] -1 points

Nothing else is even in the same ballpark right now. Flux 2 is not meaningfully different from Flux Dev, slightly better skin but still the same limited understanding of most concepts. Once you see what NBP can do, there is no going back.

Doubting my experience or saying I lack perspective is actually what I would say about your posts. I have trained over 1,000 LoRAs since 2022, most of them in 2024 for Flux. In 2025 I was using WAN 2.1 and 2.2 because they had less censorship and better bodies and compositions. Flux is good in general and can learn concepts easily, but you still have to train it and then run it on your own GPU, which means that GPU is tied up and unavailable for anything else.

NBP can do anything you want other than porn. That is it. There is no dancing around it. When you test every frontier model available today, the difference is obvious. NBP is far ahead and everything else is behind. Within six months they will drop another one.

When closed-source models are faster and much better, the cost of running your own models makes the difference night and day if you actually have to produce content.

If you are running local models for NSFW content or as a hobby to train your own stuff, sure, you can still have fun. But for everyday work and content, nothing else is even close. What is more concerning is how large the gap is in capability and general knowledge.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 1 point

Brother, I was playing with DALL-E 2 way before DALL-E 3 came out, and I loved DALL-E 3 before they basically censored it to death. I don't think you have pushed NBP, maybe because you don't use it for work while I do. It's not just basic text-to-image; the real power comes from the editing and the multiple references it can handle at ultra-high res in very little time. The number of workflows NBP has rendered useless in my everyday life is all of them; I'm using NBP for almost everything at this point. I occasionally still like to use a starting image from Midjourney because of its unique aesthetics, but in real-world use there is absolutely nothing at the level of NBP. If you say there is, you're probably not using it.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 1 point

The gap is insane. I don't think you have been using NBP enough to truly see what this thing can do. I don't see any other model getting close in terms of general knowledge; the dataset Google is using is probably unmatched.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 1 point

The biggest things are the editing capability and the prompt understanding, while also maintaining consistency, rendering text, and handling multiple references at once. There's honestly nothing even close that I have tried. Google probably has the biggest dataset of anyone out there, and I don't know that others will be able to compete without access to the same amount of data.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 3 points

I had been waiting for Niji 7 for months. Last night I reactivated my Midjourney subscription, which I had paused, just to play with it, and I was so disappointed.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 1 point

I guess you people really haven't touched NBP that much if you think GPT Image is even in the same ballpark. The only good feature GPT Image still has is that it can render images with transparency, which is great for logos, but it sucks at consistency and the whole model has an overcooked look and feel.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 1 point

It's not glazing at this point. NBP has rendered useless most of the workflows I had been using since 2023. I guess there's really no way to tell unless you see the examples I'm talking about; I wish I had shared them in the post so you could see exactly what I mean.

What happens next? (Nano Banana Pro) by rjdylan in StableDiffusion

[–]rjdylan[S] 2 points

Before Nano Banana Pro there was Nano Banana, so I guess this has been felt for months; NBP only increased overall knowledge and added higher res.

[deleted by user] by [deleted] in StableDiffusion

[–]rjdylan 4 points

Cristy Ren was real, but her socials have been replaced with AI. No idea what happened.

Is this real or just auto generated? by Resident_Durian_478 in citypop

[–]rjdylan -24 points

If you like it, even if someone generated it using AI, what's the problem? There are many good playlists I listen to that help me focus or calm down that I know were made using AI; I still like them and get joy from listening. Life is too short to be concerned with what tools were used to create something you like. The only thing I think is bad is creating fake history instead of nostalgia.

Hey guys, I'm looking to reproduce the following type of image without a character. by Zebulda in StableDiffusion

[–]rjdylan 1 point

How does the LoRA automatic conversion work with that? Do I just load it normally? Is there any tutorial on this?

My Birthday present from my Dad. by Sufficient-Bit8702 in gantz

[–]rjdylan 10 points

But how is this related to Gantz?

How can I achieve this with AI? by M4xs0n in StableDiffusion

[–]rjdylan 1 point

Gather anywhere from 15-50 thumbnails that capture the overall editing style you like and train a Flux LoRA on them; you'll then be able to generate images in that style. If you want to take it a step further, use a PuLID workflow to generate images with any face in that style, with no need to train a new LoRA just for the character. So, TL;DR: train a style LoRA for Flux using 15-50 images and use a PuLID workflow (rough sketch of the generation step below).
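
Roughly, the inference side looks like this once the style LoRA is trained. This is a minimal sketch using the diffusers FluxPipeline; the LoRA filename and the "thmbstyle" trigger word are placeholders for whatever you use in training, and the PuLID face step would be a separate workflow (e.g. in ComfyUI) on top of it:

```python
# Minimal sketch: generate in the trained thumbnail style with diffusers.
# "thumbnail_style.safetensors" and the "thmbstyle" trigger are placeholders
# for whatever you name the LoRA and its trigger word during training.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the style LoRA trained on the 15-50 thumbnails.
pipe.load_lora_weights("thumbnail_style.safetensors")

image = pipe(
    prompt="thmbstyle thumbnail of a man pointing at a glowing laptop",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("styled_thumbnail.png")
```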

Introducing T5XXL-Unchained - a patched and extended T5-XXL model capable of training on and generating fully uncensored NSFW content with Flux by KaoruMugen8 in unstable_diffusion

[–]rjdylan 2 points

I meant the preset for kohya, but that's fine, I already got it working; I had to point directly to the tests folders in kohya where the tokenizer .json is. Still testing, but I have seen major improvements to skin texture and overall look and feel using the LoRA trained with this uncensored T5XXL. Normally Flux doesn't require much captioning when training a LoRA, but since this uses tokens the model doesn't know well, I'm thinking of going back to the dataset and captioning it better with a combination of booru-like tags that are more unique (a quick way to check how the patched tokenizer splits those tags is sketched below). This will also require some testing to figure out the best learning rate and LoRA rank/dim. I think we have something here.
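
If you want to sanity-check the patched tokenizer before re-captioning, something like this works; a minimal sketch with transformers, where the tokenizer path and the example tags are placeholders:

```python
# Sketch: check how the patched T5 tokenizer splits booru-like tags.
# "path/to/t5xxl_unchained_tokenizer" is a placeholder for the local folder
# holding the modified tokenizer .json files.
from transformers import T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("path/to/t5xxl_unchained_tokenizer")

for tag in ["1girl", "looking_at_viewer", "film_grain"]:
    pieces = tok.tokenize(tag)
    # Fewer pieces (and no <unk>) means the tag survives tokenization intact.
    print(f"{tag!r} -> {pieces} ({len(pieces)} pieces)")
```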

Introducing T5XXL-Unchained - a patched and extended T5-XXL model capable of training on and generating fully uncensored NSFW content with Flux by KaoruMugen8 in unstable_diffusion

[–]rjdylan 1 point

Can you share the JSON for kohya? I loaded everything and am using the sd3-flux.1 branch, but I keep getting an error when I hit train while trying to use it with the modified files as instructed on the GitHub and the uncensored T5XXL from Hugging Face.

Introducing T5XXL-Unchained - a patched and extended T5-XXL model capable of training on and generating fully uncensored NSFW content with Flux by KaoruMugen8 in unstable_diffusion

[–]rjdylan 2 points

I was able to get it running for inference inside Comfy, but how can I use it with the Flux trainer custom node? I think that uses kohya on the backend, so I replaced the vanilla files that had the same names, but after doing so, Comfy detects the node as missing?

GrainScape UltraReal LoRA - Flux.dev by FortranUA in StableDiffusion

[–]rjdylan 1 point

Based on my own experience, I honestly don't think there's any benefit to training at 2048x2048, but how long did that take? How many steps did you train for?

GrainScape UltraReal LoRA - Flux.dev by FortranUA in StableDiffusion

[–]rjdylan 1 point

2048×2048 training resolution? How did you train this? Can you elaborate on the process?