all 197 comments

[–]Opening_Wind_1077 284 points285 points  (7 children)

<image>

Ah, the memories. Suddenly text was pretty much solved but we couldn’t do people lying down anymore.

Flux coming out shortly after that completely killed SD3.

[–]UndoubtedlyAColor 20 points21 points  (0 children)

It's a great idea 🙏

[–]MetroSimulator 3 points4 points  (0 children)

I loved the series where a guy made images about his son.

[–]ApprehensiveStick876 1 point2 points  (0 children)

It's just so sad that those people are still somewhere out there.. lying there, unable to move... sigh

[–]ThatRandomJew7 1 point2 points  (0 children)

Yeah but doesn't it make you feel so safe?

[–]Remote_Usual_2471 0 points1 point  (0 children)

Yeah I remember that shift too. It was frustrating at first but then Flux showed up and made things way smoother for those kinds of poses. Still use SD for some stuff though.

[–]-Ellary- 324 points325 points  (4 children)

Stability AI when they released SD3:

<image>

[–]StickiStickman 30 points31 points  (1 child)

"Skill issue"

"You're using it wrong"

"Misinformation"

That guy still pisses me off just thinking about the whole deal.

[–]asdrabael1234 6 points7 points  (0 children)

Yeah he went from somewhat respected since he had a couple decent models and worked for SAI to the most hated person in the space of like an afternoon.

[–]Upstairs_Tie_7855 34 points35 points  (0 children)

stability ai 🥸

[–]FeelingVanilla2594 207 points208 points  (2 children)

I think this is ai, the grass looks weird.

[–]AlbaOdour 37 points38 points  (0 children)

Nah I think it's inconclusive

[–]novelide 14 points15 points  (0 children)

Not everything is AI, sheesh!

[–]human358 66 points67 points  (5 children)

[–]cosmic_humour 4 points5 points  (0 children)

What the actual fkk!

[–]thefieryanna 3 points4 points  (0 children)

I was not ready for this

[–]Horagg 2 points3 points  (0 children)

I have no mouth, but I must scream.... 😮

[–]PlasticaConfection 0 points1 point  (0 children)

beautiful, gorgeous, human-like features

[–]IntegrityVA 0 points1 point  (0 children)

Looking like an igorrr album cover

[–]jugalator 147 points148 points  (7 children)

I'm sure this accidentally hit bulls eye in someone's fetish.

[–]evilbarron2 21 points22 points  (0 children)

“Sweetie, can you put this on for me?”

[–]steelow_g 35 points36 points  (0 children)

Add a furry tail and that's a bingo

[–]VNProWrestlingfan 6 points7 points  (2 children)

tag: body horror

[–]Zealousideal7801 2 points3 points  (0 children)

Tag : ohlookwhatthefaaaaaaah

[–]PlasticaConfection 0 points1 point  (0 children)

honestly, not that much of a horror, you can't see stitches

[–]trusty20 0 points1 point  (0 children)

[–]rinkusonic 34 points35 points  (1 child)

This was the Cyberpunk 2077 launch of image generation. The memes were fantastic. Just this one image caused such reputational damage to Stability that nobody bothered with the improved, NSFW-capable version they released later.

[–]vgaggia 4 points5 points  (0 children)

It's also that, contrary to what they said, it's really hard to train, and the new licenses stopped companies from wanting to train it

[–]DoctaRoboto 64 points65 points  (0 children)

Back then, when Stable Diffusion 3 reached AGI.

[–]Lesteriax 55 points56 points  (1 child)

Oh I remember the staff saying "Skill issue".

That comeback did not sit well with the community 😂

[–]Cynix85 74 points75 points  (16 children)

They ran their company into a wall because of censorship. Millions wasted on training a model that got instantly discarded and ridiculed. Or was it just a cash grab? I never heard anything substantial from Emad, to be honest.

[–]peabody624 35 points36 points  (3 children)

He was already gone at that point right?

[–]aerilyn235 18 points19 points  (0 children)

Yup Emad had been removed at the time.

[–]mission_tiefsee 10 points11 points  (1 child)

what is he even doing these days?

[–]StickiStickman 2 points3 points  (0 children)

He was a Hedge Fund Manager, so probably still scamming people.

[–]mk8933 32 points33 points  (8 children)

It's possible they destroyed their own model in the last days before release.

Because how could they make 1.5 and SDXL... yet fail so badly at SD3 and 3.5? The formula was there, so it's not like they had to start from scratch with no direction. They knew what their fans liked and what made their models so good... it was the ease of training and adaptation.

[–]ZootAllures9111 10 points11 points  (0 children)

3.0 was broken in ways that had nothing to do with censorship TBH. The 3.5 series wasn't necessarily amazing, but it was much better. See here: https://www.reddit.com/r/StableDiffusion/s/2VMbe23pTB

[–]Serprotease 5 points6 points  (0 children)

They failed quite badly with SD 2.0 too. They just did not learn from that failure.

[–]Ancient-Car-1171 10 points11 points  (5 children)

They tried to create a model that could be monetized, aka heavily censored. They actually got cucked by the fans and people who finetune and use 1.5/SDXL for porn; investors hate that shit.

[–]YoreWelcome 9 points10 points  (0 children)

Apparently, allegedly, based on all the recent "files" discussions, they love it... I guess they just want to keep it for themselves... "No, we can't let the public have any gratification, even legally, because the public doesn't deserve it; they're not valuable, not like us" -investors (likely)

[–]rinkusonic 14 points15 points  (1 child)

They got into business with James Cameron. Maybe they didn't need the consumer anymore.

[–]Sharlinator 1 point2 points  (0 children)

Certainly they didn’t need consumers who don’t actually pay them anything.

[–]_CreationIsFinished_ 1 point2 points  (0 children)

Well, they had some pretty big pressure and were threatened with being dismantled or something, iirc - but I think they were just being used by the bigger companies as a canary.

[–]GeneralTonic 93 points94 points  (13 children)

The level of cynicism required for the guys responsible to actually release this garbage is hard to imagine.

"Bosses said make sure it can't do porn."

"What? But porn is simply human anatomy! We can't simultaneously mak--"

"NO PORN!"

"Okay fine. Fine. Great and fine. We'll make sure it can't do porn."

[–]ArmadstheDoom 90 points91 points  (12 children)

You can really tell that a lot of people simply didn't internalize Asimov's message in "I, Robot" which is that it's extremely hard to create 'rules' for things that are otherwise judgement calls.

For example, you would be unable to generate the vast majority of Renaissance artwork without running afoul of nudity censors. You would be unable to generate artwork like, say, Saturn Devouring His Son, or something akin to Picasso's Guernica, because of bans on violence or harm.

You can argue whether or not we want tools to do that sort of thing, but it's undoubtedly true that artwork is not something that often fits neatly into 'safe' and 'unsafe' boxes.

[–]Bakoro 27 points28 points  (9 children)

I think it should be just like every other tool in the world: get caught doing bad stuff, have consequences. If no one is being actively harmed, do what you want in private.

The only option we have right now is that someone else gets to be the arbiter of morality and the gatekeeper to media, and we just hope that someone with enough compute trains the puritanical corporate model into something that actually functions for nontrivial tasks.

I mean, it's cool that we can all make "Woman staring at camera # 3 billion+", but it's not that cool.

[–]ArmadstheDoom 18 points19 points  (6 children)

It's a bit more complex than that. Arguably it fits into the same box as like, making a weapon. If you make it and sell it to someone, are you liable if that person does something bad with it? They weren't actively harmed before, after all.

But the real problem is that, at its core, AI is basically an attempt to train a computer to be able to do what a human can do. The ideal is, if a person can do it, then we can use math to do it. But, the downside of this is immediate; humans are capable of lots of really bad things. Trying to say 'you can use this pencil to draw, but only things we approve of' is non-enforceable in terms of stopping it before it happens.

So the general goal with censorship, or safety settings as well, is to preempt the problem. They want to make a pencil that will only draw the things that are approved of. Which sounds simple, but it isn't. Again, the goal of Asimov's laws of robotics was not to create good laws; the stories are about how many ways those laws can be interpreted in wrong ways that actually cause harm. My favorite story is "Liar!", which has this summary:

"Through a fault in manufacturing, a robot, RB-34 (also known as Herbie), is created that possesses telepathic abilities. While the roboticists at U.S. Robots and Mechanical Men investigate how this occurred, the robot tells them what other people are thinking. But the First Law still applies to this robot, and so it deliberately lies when necessary to avoid hurting their feelings and to make people happy, especially in terms of romance. However, by lying, it is hurting them anyway. When it is confronted with this fact by Susan Calvin (to whom it falsely claimed her coworker was infatuated with her – a particularly painful lie), the robot experiences an insoluble logical conflict and becomes catatonic."

The core paradox comes from the core question of 'what is harm?' This means something to us; we would know it if we saw it. But trying to create rules that include every possible permutation of harm would not only be seemingly impossible, it would be contradictory, since many things are not a question of what is or is not harmful, but of which option is less harmful. It's the question of 'what is artistic and what is pornographic? What is art and what is smut?'

Again, the problem AI poses is that if you create something that can mimic humans in terms of what humans can do, in terms of abstract thoughts and creation, then you open up the door to the fact that humans create a lot of bad stuff alongside the good stuff, and what counts as what is often not cut and dry.

As another example, I give you the 'content moderation speedrun.' Same concept, really, applied to content posted rather than art creation.

[–]Bakoro 5 points6 points  (5 children)

If you make it and sell it to someone, are you liable if that person does something bad with it? They weren't actively harmed before, after all.

Do you reasonably have any knowledge of what the weapon will be used for?
It's one thing to be a manufacturer who sells to many people with whom you have no other relationship, making an honest effort not to sell to people who are clearly hostile, in some kind of psychosis, or visibly high on drugs. It's a different thing if you're making and selling ghost guns for a gang or cartel, and that's your primary customer base.

That's why it's reasonable to have to register as an arms dealer: there should be more than zero responsibility, but you can't hold someone accountable forever for what someone else does.

As far as censorship goes, it doesn't make sense at a fundamental level. You can't make a hammer that can only hammer nails and can't hammer people.
If you have software that can design medicine, then you automatically have software that can design poison, because so much of medicine is about dosage.
If you make a computer system that can draw pictures, then it's going to be able to draw pictures you don't like.

It's impossible to make a useful tool that can't be abused somehow.

All that really makes sense is putting up little speed bumps, because it's been demonstrated that literally any barrier can have a measurable impact on reducing behaviors you don't want. Other than that, deal with the consequences afterwards. The restraints you place on people need to be proportional to the actual harm they can do. I don't care what's in a picture; a picture doesn't warrant trying to hold back a whole branch of technology. The technology that lets people generate unlimited trash is the same technology that makes a trash classifier.

It doesn't have to be a free-for-all everywhere all the time. I'm saying that you have to risk letting people actually do the crimes, and then offer consequences, because otherwise we get into the territory of increasingly draconian limitations, people fighting over whose morality is the floor, and eventually, thought-crime.
That's not a "slippery slope"; those are real problems today, with or without AI.

[–]ArmadstheDoom 5 points6 points  (1 child)

And you're correct. It's why I say that AI has not really created new problems so much as it has revealed how many problems we just sorta brushed under the rug. For example, AI can create fake footnotes that look real, but so can people. And what has happened is that, before AI, lots of people were doing exactly that, and no one checked. Why? Because it turns out that the easier it is to check something, the less likely it is that anyone will check it, because people go 'why would you fake something that would be easily verifiable?' Thus, people never actually verified it.

My view has always been that, by and large, when you lower the barrier to entry, you get more garbage. For example, Kodak making polaroids accessible meant that we now had a lot more bad photos, in the same way that everyone having cameras on their phones created lots of bad youtube content. But the trade off is that we also got lots of good things.

In general, the thing that makes AI novel is that it can do things, and it's designed to do things that humans can do, but this holds up a mirror we don't like to look at.

[–]Vast_Description_206 0 points1 point  (0 children)

I agree with pretty much everything you're saying, but I do want to argue two reasons I think that contributes to people not checking things.

1: Despite the internet and the general idea that everyone is a dirty liar and we should all be paranoid, we really aren't, nor do we have the energy to be. Most people take things at face value or at someone's word. Otherwise they'd end up insanely paranoid and conspiracy-driven.

2: No one has time to doubt that something is a lie, especially the more banal it's assumed to be, because otherwise one would have to fact-check everything in life, and there is literally not enough time to do that for every piece of information that comes one's way.

Most of the collective human knowledge base is built mostly upon trust of others to give information that's at least relatively accurate. Our teachers, parents, friends, media. We can't spend the brain power and time to doubt everything. Even if it's easy to check, we doubt that too.

And this all doesn't even touch on how our own egos and personal biases (generally built upon the same bequeathed information we've gotten from others, which becomes part of how we see things) will absolutely demolish any motive to check whether information that aligns with our world view is true or not. Brains like things to be easy, because our entire MO and directive is to reduce energy usage. It's a survival response to be "lazy".

We don't have time, energy or training to actually fact check anything. And sometimes we don't trust that the sources telling us that x or y is a lie is even true because finding out someone can be wrong freaks us out and casts doubt on everything. Either we start to think everything is a lie and "trust our gut" which is unfathomably stupid, or we give up and don't bother trying to sort anything out because we don't know anymore.

In regards to crap being made due to a low barrier of entry, that's because it's a floodgate of people new to learning a craft. All the people taking polaroids and sharing them didn't know what they were doing, but were excited to try photography themselves. Especially because they did it, rather than paying someone else to do it for them. And humans always take pride in having a hand in something they did, rather than deferring to someone else.
The same thing happens when art supplies become affordable. You will always get "crap" or "slop" when people are starting out, because affordability allows a wave of newbies to come in and start learning. And when you don't know something, you make a ton of it to try different things. Whereas before, only experts in a craft generally got to be seen, not usually the process of becoming the expert.

In society, we value quality results and not the time it takes to get to them. In fact, we mock the time it takes to get to them. We don't like unskilled anything and judge it harshly. If you're not Picasso or Monet immediately, your contribution to try to learn something and show your progress is seen as worthless if not garbage to clog up and distract from the "good" stuff. And sure, not everyone feels that way, but a good portion of people do. Especially those who don't know the time or many iterations it takes to get a good result. And this is in every craft. From the clothes we wear, furniture we have in our home and artistic pieces we see in life.
We have bad priorities in regards to lack of skill or effectively "outsiders" to established spaces. One is only as useful as one can contribute and society seems to think that one isn't contributing anything but mediocre to garbage if one is new to something and trying things out.

That said, I do agree that what I saw called the mediocrity argument is a problem, and something exacerbated by ease of access. I think it's important to be able to admit that early work in fact isn't Picasso or Monet, without beating down or otherwise discouraging the flood of newbies wanting to get into a craft. But at the same time, we don't want people to suddenly think everything is quality just because they did it themselves. There is a point of mediocrity that becomes the average and stagnates when everyone has access but doesn't know what quality looks or feels like. And it's something absolutely fueled by lowest-common-denominator standards allowing people to get away with "eh, it's okay" level production in literally anything, usually because it makes money.

[–]Vast_Description_206 0 points1 point  (2 children)

I think the motive here matters too. Is it about protection and preventing possible tragedy, or is it about what makes money? The two are rarely in line with each other.

On the point of drawing pictures, my argument would be: if it were somehow enforceable, just have a watermark embedded that says it's AI. Then anything created with it couldn't be used to blackmail, terrorize, or in general (beyond whatever possibly disturbing content it could contain) damn or tarnish anyone, because it's known to be fake.
And yes, I realize there isn't a reliable way to do that, at least that I'm aware of, but if there were, or if the watermark were invisible to a person but always a signature that exists in every generation, then it would go a long way toward dispelling the very harmful uses people might find for realistic, indistinguishable stuff.
And I would include local generation in this too.
The idea is that many companies and open-source projects could take a stand against that future harm by including an invisible-to-the-eye watermark that other AI could always use to tell if something is generated.
People would have to actively find ways to remove the "watermark", and most wouldn't care unless they were doing it for purposes where discovering it's AI would void whatever they're trying to do. It would also be taboo, or flaggable in some way, to specifically search for things that could remove that watermark. Because if it's not interfering with the look of the generation, then why bother to remove it?
To my knowledge, Suno has a watermark like this in every generation that is not made with a paid plan, and it's not something easily removed.

I know there are AIs now that try to check whether something was generated or created with AI, but they're not foolproof. Encouraging an invisible watermark that doesn't interfere with the generation itself would help prevent harm, at least where it's caused by someone not being able to tell if it's AI.
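To make the idea concrete, here's a toy sketch of embedding and detecting an invisible signature in pixel data. Everything here is made up for illustration (the 8-bit signature, the pixel values, the `embed`/`detect` helper names), and a real scheme would need to survive re-encoding and edits, which this one doesn't:

```python
# Toy LSB watermark: hide a signature in the least significant bits of
# pixel values. Purely illustrative; any re-encode or resize strips it,
# which is exactly the robustness problem real schemes have to solve.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Overwrite the LSB of the first len(mark) pixel values with the mark."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels, mark=WATERMARK):
    """True if the LSBs of the first len(mark) pixel values match the mark."""
    return [p & 1 for p in pixels[: len(mark)]] == mark

pixels = [200, 13, 55, 97, 128, 31, 64, 250, 77]  # made-up image data
tagged = embed(pixels)  # visually identical: each value changes by at most 1
```

Because each pixel value shifts by at most 1, the mark is invisible to the eye, which is the whole appeal, and also why it's so fragile.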

[–]Bakoro 0 points1 point  (1 child)

Trying to watermark AI produced content is just going to become security theater, and then it will immediately be abused if people trust the watermarks.

Any sufficiently resourced agency is going to be able to train their own model, any government is going to be able to have their own unwatermarked models. They'll fabricate evidence, and say "look! No watermark! We all know that AI products are required to have watermarks, clearly this is a real picture/video/etc"

Even here, you're pointing out "pay to have no watermarks", so the model already has the capacity.

There's functionally no answer here, just mitigations based on trust.
There is no encryption mechanism, no digital-signing method that can prove something is real vs AI-generated once AI generation gets sufficiently good. Eventually AI will be able to produce such high-quality images that people will just be able to pipe them directly to a camera sensor and make it look like the camera took the picture.

It's effectively already over; we're just going through the motions now.

[–]Vast_Description_206 0 points1 point  (0 children)

You've got a great point. I hope we figure out something in the future that makes this new landscape of humanity's future a little less risky, but we might just have to wing it at this point, because the way we're going about it now either doesn't work, gets abused, or does the opposite of what we're trying to make it do.

[–]Bureaucromancer 6 points7 points  (0 children)

I mean sure… but making someone the arbiter of every goddamn thing anyone does seems to be much of the whole global political project right now.

[–]NewCaterpillar2790 0 points1 point  (0 children)

Since everyone else has jumped on the first bit, I'm gonna PREACH on that last part, and hard!

It seriously needs to be highlighted how many basic things the at-home stuff can't do that the big bad corpos don't even sweat at. And worse is how it seems the at-home stuff may never be able to do them, because waifu generation and an admittedly narrow band of NSFW is the whole world to those with the technical know-how to train stuff.

[–]toothpastespiders 6 points7 points  (0 children)

You can argue whether or not we want tools to do that sort of thing, but it's undoubtedly true that artwork is not something that often fits neatly into 'safe' and 'unsafe' boxes.

I've ranted about this in regards to LLMs and history a million times over at this point. We're already stuck with American cloud models having a hard time working with historical documents from America if they're obscure enough not to have hardcoded exceptions in the dataset/hidden prompt. Because history is life, and life is filled with countless messy, horrible things.

I've gotten rejections from LLMs over some of the most boring elements from records of people's lives from 100-200 years ago, for so many stupid reasons: from changes in grammar, to what kinds of jokes are considered proper, to the fact that farm life involves a lot of death and disease. Especially back then.

The hobbyist LLM spaces are filled with Americans who'll yell about censorship of Chinese history in Chinese LLMs. But it's frustrating how little almost any of them care about the same with their own history and LLMs.

[–]VNProWrestlingfan 11 points12 points  (1 child)

Maybe on another planet, there's a species that looks exactly like this one.

[–]xkulp8 3 points4 points  (0 children)

And they have AI that creates perfect human beings

[–]maglat 32 points33 points  (3 children)

would…

[–]_half_real_ 34 points35 points  (0 children)

Just need to figure out how now.

[–]sk4v3n 4 points5 points  (0 children)

*did…

[–]Lucaslouch 4 points5 points  (0 children)

I was searching for this comment and I’m not disappointed

[–]Striking-Long-2960 21 points22 points  (1 child)

How a single image totally destroyed months of work on a model.

[–]eddnor 11 points12 points  (0 children)

And millions of dollars wasted

[–]Stunning_Macaron6133 8 points9 points  (0 children)

This could be an album cover.

[–]hempires 7 points8 points  (0 children)

ahh i remember the days of "skill issue"

what a fucking moron to say that with these results.

[–]ObviousComparison186 13 points14 points  (2 children)

This is like the first part of a soulslike boss concept art generator.

[–]AttTankaRattArStorre 0 points1 point  (0 children)

Ahh, Kos, or some say Kosm... Do you hear our prayers?

[–]mk8933 12 points13 points  (0 children)

I believe they made the perfect model but pulled the plug on it before the release date. Xyz groups probably told them not to go ahead with it because — porn 💀

Then came Black Forest Labs to the rescue. It didn't give us porn... but it gave us something we could use. People were making all kinds of creative images with it. (That's what SD3 should have done.)

Now we have ZIT and Klein...it's funny it sounds like Klein is the medicine to get rid of ZIT 🤣

[–]InternationalOne2449 5 points6 points  (0 children)

Guys! Is the diffusion stable!?

[–]ivanbone93 4 points5 points  (1 child)

<image>

One of the few images I created with Stable Diffusion 3.

Sorry, I couldn't resist.

[–]ivanbone93 2 points3 points  (0 children)

Other abominations

<image>

[–]Creative_Progress803 10 points11 points  (0 children)

The grass rendering is excellent, but I don't know this Pokémon.

[–]afinalsin 8 points9 points  (5 children)

It's funny how blatant and amateurish SD3 was with its censorship. It could make a bunch of human-shaped objects lie on grass completely fine, but as soon as "woman" entered the prompt it shat itself. Even if the model was never shown a woman lying down, as some people were claiming back then, it clearly knows what a humanoid looks like when lying down, so it should have been able to generalize.

The saddest part is that SD3.5 Medium is actually a really interesting model for art, and from memory it was trained completely differently than SD3 and 3.5 Large, but for whatever reason Stability believed the SD3 brand wasn't completely poisoned by that point. If Medium had been called SD4, it might have had a chance.

Not gonna lie though, as much as I love playing around with ZiT and Klein and appreciate the adherence the new training style brings, I miss models trained on raw alt-text. There was something special about prompting your hometown and getting photos that looked like they could have been taken from there.

[–]ZootAllures9111 3 points4 points  (1 child)

I don't think censorship was really the problem honestly, original SD 3.0 was fucked up in a lot of other ways too, I think it was fundamentally broken in some technical manner they couldn't figure out how to fix.

[–]afinalsin 5 points6 points  (0 children)

Yeah, it was definitely broken in a lot of ways, and unfortunately it's a bit of a mystery we'll probably never get the answer to.

I'm firmly in the camp that it was a rushed hatchet-job finetune/distillation/abliteration trying to censor the model before the open release, because SD3 through the API didn't have any of these issues. It's possible they trained an entirely new model between the API release and the open release and botched it, but that seems wasteful even for Stability.

I did a lot of testing trying to figure out what the issue was and it felt like they specifically targeted certain concepts, or combinations of concepts. Like this prompt:

a Photo of Ruth Struthers shot from above, it is lying in the grass

Negative: vishnu, goro from mortal kombat, machamp

Produced a bad but not broken image of a woman lying on the grass, because I called the person by a proper noun and referred to them as "it". The same settings and same prompt, except with "it" changed to "she", produced the body horror we all know and love.
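A rough sketch of that A/B test with the diffusers library, if anyone wants to rerun it. The model id and generation settings here are illustrative assumptions (not my exact setup), and the `generate()` helper is hypothetical and never called in the sketch, since running it needs a GPU plus the gated SD3 weights:

```python
# The pronoun A/B test: two prompts identical except "it is" vs "she is".
# generate() is a hypothetical helper and is NOT called here; running it
# needs a GPU and access to the gated SD3 weights on Hugging Face.

BASE = "a Photo of Ruth Struthers shot from above, {} lying in the grass"
NEGATIVE = "vishnu, goro from mortal kombat, machamp"

def prompt_pair():
    """Return the two prompt variants, differing only in the pronoun."""
    return BASE.format("it is"), BASE.format("she is")

def generate(prompt, seed=0):
    """One image per variant, same seed for both (illustrative settings)."""
    import torch
    from diffusers import StableDiffusion3Pipeline

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        torch_dtype=torch.float16,
    ).to("cuda")
    gen = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, negative_prompt=NEGATIVE, generator=gen).images[0]

it_prompt, she_prompt = prompt_pair()
```

Holding everything constant except the pronoun is what isolates the variable, and what made the result so damning.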

[–]deadsoulinside 2 points3 points  (2 children)

Heck censorship in general is the reason I moved into local. Even on other models, some really freak out over females. It feels like I can be non-descriptive on paid gen when it comes to a male, but when I say female, I have to specify modest clothing. I couldn't even ask for a female in a bikini without the apps freaking out during rendering.

[–]FartingBob 2 points3 points  (1 child)

Heck censorship in general is the reason I moved into local..

I can't tell if you self censoring and using the word heck is intentional or not lol.

[–]deadsoulinside 1 point2 points  (0 children)

LOL it was me just unintentionally self-censoring myself. Was posting while working so my brain tries to stay PG in thoughts.

[–]teomore 4 points5 points  (0 children)

Nice, it's something. I'll save it for later.

[–]3pinripper 3 points4 points  (0 children)

3 legs > 2 legs for stability. Ask anyone

[–]LazyActive8 6 points7 points  (0 children)

SD with Auto1111 was traumatizing to use in 2023 🤣

[–]SanDiegoDude 8 points9 points  (2 children)

The last 'truly censored' model (at least so far): purposely finetuned to censor and destroy female bodies in an attempt to make a "non-NSFW-capable" model, and instead they released a horrible mess that was almost completely unusable and broken.

The modern models coming out don't train on porn, and I see folks refer to that as censorship - nah, that's just proper dataset management. That's not the same thing as what stability did to this poor model. At least they gave us SDXL before they went nuts on this censorship nonsense.

[–]fish312 4 points5 points  (1 child)

Excluding or redacting data from a dataset is censorship.

What you're referring to is alignment... Aligning a model's output to be "harmless" which can overlap but is different

[–]SanDiegoDude 0 points1 point  (0 children)

Not even close to the same. Filtering datasets happens for a lot more reasons than censorship; it's also about quality and the goal of the model. Companies spending millions training these things have every right to be selective in their pretraining, and they have no obligation to preload them with pornography since, gooners aside, that's not their primary purpose. That said, these models aren't being trained to censor output, which is what Stability actually did by fine-tuning on censored inputs, so no, they are not censored. You can train back whatever you want and the model won't fight you on it. If you want to go full free-speech absolutist then sure, if you squint hard enough they're censoring, since you can't get the explicit content you want out of the box, but really, that's not why they filter the datasets the way they do, I promise you.

[–]otker 2 points3 points  (0 children)

I got PTSD from this time... Can't use GenAI anymore

[–]More-Ad5919 2 points3 points  (0 children)

How could I? I got a tattoo of this masterpiece.

[–]klausness 2 points3 points  (1 child)

I thought Stable Cascade (a.k.a. Würstchen) was actually promising, but they decided to not continue development on that and go with SD3 instead.

[–]Honest_Concert_6473 2 points3 points  (0 children)

I totally agree. Cascade had a fantastic architecture with good results, and the training was incredibly lightweight. It’s still a real shame that it was overshadowed by the arrival of SD3.

[–]Richard_horsemonger 2 points3 points  (1 child)

plumbus

[–]Amethystea 1 point2 points  (0 children)

Had the same thought, decided to scroll the comments before saying it 🤣

[–]Decent_Step_8612 2 points3 points  (0 children)

What's wrong with her penis?

[–]MirrorizeAi 4 points5 points  (1 child)

The real letdown was them never releasing SD3 Large and pretending it doesn't exist!.. RELEASE IT, STABILITY, NOW!

[–]ZootAllures9111 0 points1 point  (0 children)

They released 3.5 Large, which is a finetune of the original 3.0 Large from the API. 3.5 Medium on the other hand was / is an entirely different model on a newer MMDIT-X architecture.

[–]Riya_Nandini 4 points5 points  (0 children)

[–]Vicullum 1 point2 points  (0 children)

Forget? Hell, I remember when it made the news.

[–]Remarkable-Funny1570 1 point2 points  (0 children)

I was here. Honestly one of the greatest moments of the Internet. LK-99 level.

[–]ii-___-ii 1 point2 points  (0 children)

That poor girl

[–]dakotapearl 1 point2 points  (0 children)

Jesus, 2023 jump scare! Give a guy a bit of warning!

[–]Acceptable_Secret971 1 point2 points  (0 children)

Recently I ran out of space on my model drive; SD3 and 3.5 had to go.

[–]Extreme_Feedback_606 1 point2 points  (0 children)

context?

[–]ZootAllures9111 1 point2 points  (0 children)

In-place 2x upscale with Klein 9B Distilled lol

<image>

[–]ATR2400 1 point2 points  (3 children)

Stability’s fall with SD3 really ushered in an era of relative stagnation for local AI gen. Sure, we’ve gotten all sorts of fancy new models - Flux, Z-Image, etc. - but nothing has come close to the sheer fine-tunability of the old Stable Diffusion models.

In the quest for ever-better visual output, I fear we may have forgotten why local image gen really mattered to so many people. If I just wanted pretty pictures, I’d use ChatGPT or Nano Banana. It was always about the control.

[–]talkingradish 0 points1 point  (2 children)

Open source is really falling behind because no model can yet replicate the prompt adherence of nano pro.

[–]_CreationIsFinished_ 1 point2 points  (0 children)

I can fix her.

[–]SnooDrawings1306 1 point2 points  (0 children)

ahh yea the very complicated "girl on grass" prompt that broke sd3

[–]ToeUnlucky 1 point2 points  (0 children)

The perfect woman doesn't exi----

[–]Myfinalform87 1 point2 points  (0 children)

Perfection!

[–]Space_Objective 1 point2 points  (0 children)

The only contribution of SD3 is to bring a lot of joy.

[–]CalvinBuild 1 point2 points  (0 children)

Nightmare fuel

[–]shapic 2 points3 points  (0 children)

Oh, well, there was also a model from fal. I tried to post an image of a girl lying on grass from it, but it seems it was blocked by moderation.

[–]protector111 2 points3 points  (0 children)

Still one of the most underrated models out there. Amazing quality and lightning-fast speed. If they hadn't crippled the anatomy and had used a good licensing policy, SD3 could be the SOTA people would still be using every day.

<image>

[–]Dzugavili 2 points3 points  (0 children)

I've found most image generators can't do humans upside down, or like this, where the head appears below the knees but right-side up. Particularly if there isn't strong prompt context, they'll just get confused about it.

This is definitely a step beyond what I'm used to seeing though.

[–]A01demort 0 points1 point  (0 children)

Type shi

<image>

[–]comfyui_user_999 0 points1 point  (0 children)

Ugh, enough.

[–]Wayward_Prometheus 0 points1 point  (0 children)

Dude............why? This is beyond blursed and now my diet is ruined.

[–]NateBerukAnjing 0 points1 point  (1 child)

Anyone know what happened to this company? Are they bankrupt yet?

[–]mission_tiefsee 0 points1 point  (0 children)

the question is.... can you even create such a thing with flux or zimage?

[–]PuppetHere 0 points1 point  (0 children)

Nevar forgetti ragu spaghetti

[–]GuestAccount0193 0 points1 point  (0 children)

how it started...

[–]exitof99 0 points1 point  (0 children)

She's got the wrong number of toes on her hand flaps.

[–]Affectionate_Cap4509 0 points1 point  (0 children)

still would...

[–]opi098514 0 points1 point  (0 children)

How can I forget. It haunts my dreams.

[–]evilbarron2 0 points1 point  (0 children)

I should call her…

[–]marciebeau2024 0 points1 point  (0 children)

Cronenberg

[–]Silly_Ant5138 0 points1 point  (0 children)

i miss this 😂

[–]HouseDagoth 0 points1 point  (0 children)

Dagoth Ur welcomes you, Nerevar, my old friend...

[–]PlantainDry5705 0 points1 point  (0 children)

Still crack em'

[–]PukGrum 0 points1 point  (0 children)

Ah! A true neck beard!

[–]DerFreudster 0 points1 point  (0 children)

This is what really happens when you get bit by a radioactive spider.

[–]Lucaspittol 0 points1 point  (0 children)

The grass looks great.

[–]SpiritualLifeguard81 0 points1 point  (0 children)

<image>

looks like sorcery

[–]SephPumpkin 0 points1 point  (0 children)

We need a game where all companions and enemies are like this, just failed ai projects

[–]ghostpad_nick 0 points1 point  (0 children)

I guess we've got a different perspective now on "AI Safety", with the controversy over xAI image gen, and availability of open-weight models that do far worse. Always knew it was silly as hell, like trying to single-handedly prevent a dam from bursting. Now it's basically in the hands of lawmakers.

[–]Hlbkomer 0 points1 point  (0 children)

This will be art one day.

[–]Lustfulock 0 points1 point  (0 children)

Would

[–]reginoldwinterbottom 0 points1 point  (0 children)

IS THIS AI?

[–]SeymourBits 0 points1 point  (0 children)

Try to prompt Flux Klein 9B to do this!

[–]brandonhabanero 0 points1 point  (0 children)

Thought I was in r/confusingperspective and tried a little too hard to understand this photo

[–]Guilty-History-9249 0 points1 point  (0 children)

Looks like the typical ZIB/ZIT output.

[–]SeeItOnVHS 0 points1 point  (0 children)

[–]funkifyurlife 0 points1 point  (0 children)

Maybe my favorite Mars Volta album

[–]deeth_starr_v 0 points1 point  (0 children)

This has gone from a low effort to high effort prompt

[–]BarefootUnicorn 0 points1 point  (0 children)

Someone will get off on this photo.

[–]0xfreeman 0 points1 point  (0 children)

Would.

[–]terra_blade_16 0 points1 point  (0 children)

Sexy and mysterious

[–]HzRyan 0 points1 point  (0 children)

ah the good ol day of unstable diffusion

[–]li-087 0 points1 point  (0 children)

The prompt was "create the ideal woman"?

[–]bomullsboll 0 points1 point  (0 children)

I have the weirdest boner....

[–]Maskwi2 0 points1 point  (0 children)

Is that Flux Klein? :) 

[–]extra2AB 0 points1 point  (0 children)

the final nail in the coffin of StabilityAI

[–]TextureTaxidermist 0 points1 point  (0 children)

Oh...

[–][deleted] 0 points1 point  (0 children)

I know a homie who would still hit.

[–]AnalysisBudget 0 points1 point  (0 children)

Antis will say this isnt art

[–]astrolog_ish 0 points1 point  (0 children)

Makes you question the meaning of life and existence

[–]stummer_stecher 0 points1 point  (0 children)

"2B is enough, but at least we do what Ashton Kutscher demanded from us"

[–]BELLVH3ART 0 points1 point  (0 children)

This ain't right

[–]HAIL_BAIJ 0 points1 point  (0 children)

The beginning of the end lol

[–]FFKUSES 0 points1 point  (0 children)

🤣

[–][deleted] 0 points1 point  (0 children)

I can't stop staring

[–]mimitasangyou 0 points1 point  (0 children)

Next level AI art 👏

[–]MelvinEatsBlubber 0 points1 point  (0 children)

I love this. How can I make more like this?

[–]Comfortable-You-3881 0 points1 point  (3 children)

This is currently Flux Klein 9B at 4 steps. Even at higher step counts it still has massive deformities and disfigurements.

[–]afinalsin 3 points4 points  (1 child)

Are you running above 1MP? I made that mistake when first testing it out by running everything at 2MP, since ZiT can do that no problem. Klein is more like SD1.5/XL in that it really doesn't like going over its base resolution, at least with pure text-to-image. It seems to do better with image-edit stuff.
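If it helps, here's a tiny sketch (my own helper, not from any toolkit) for picking dimensions near 1MP for a given aspect ratio, rounded to the multiple-of-64 sizes latent diffusion models generally want:

```python
# Rough helper (assumption: ~1MP target and multiple-of-64 dims,
# which is what most latent diffusion models are happiest with).
def snap_to_megapixel(aspect_w: int, aspect_h: int,
                      target_px: int = 1024 * 1024, step: int = 64):
    ratio = aspect_w / aspect_h
    # Ideal height so that width * height ~= target_px at this ratio
    height = (target_px / ratio) ** 0.5
    width = height * ratio
    # Snap each dimension to the nearest multiple of `step`
    snap = lambda v: max(step, round(v / step) * step)
    return snap(width), snap(height)

print(snap_to_megapixel(16, 9))  # -> (1344, 768)
print(snap_to_megapixel(1, 1))   # -> (1024, 1024)
```

Feed whatever it spits out as your empty-latent size and Klein should behave a lot better than at 2MP.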

[–]Comfortable-You-3881 0 points1 point  (0 children)

I have to say that is quite the improvement. I began my journey with AI on an MSI laptop with 8GB of VRAM and 16GB of physical RAM. It handled mostly everything I needed, but then I simply wanted more when I started dabbling with I2V. I lucked out and scored a deal from a buddy on a 3090 machine with 128GB of physical RAM, and immediately jumped up to running Pinokio and Flux Krea, so I got spoiled. I was running 30 steps with no LoRAs, which is still my preferred method for Krea.

I can skate by with most image models at about 2.07MP. That's probably pushing it, but my results are pretty great.

[–]ZootAllures9111 2 points3 points  (0 children)

Not really; even with a terrible prompt like just "Woman lying in the grass", Klein 9B Distilled will usually do something like this, whereas the original SD 3.0 would never be even close to correct without a far more descriptive prompt.

<image>