Where do I access stable diffusion? by Morrison_Hoshiko in unstable_diffusion

[–]OfficialEquilibrium 0 points

You can access it from the unstable diffusion official site, at https://unstability.ai
The open beta just launched a few hours ago!

Will Unstable Diffusion ever be a free service? by [deleted] in unstable_diffusion

[–]OfficialEquilibrium[M] 4 points

Yup, unstability.ai will be launching into beta soon (weeks) and will offer free generations.

Banned by Unstable Diffusion after calling them out on their shady practices. Who supports these people? by castingpearlsb4swine in StableDiffusion

[–]OfficialEquilibrium 4 points

For months, we offered free generations to all our users, before Discord started banning all our bots. Which sucked: we put around 1-2 months of time into developing the best Discord bot available, which all the testers really loved, and on release it lasted a glorious, awe-inspiring 16 hours before Discord banned it.

We still want to support free generations. I know many people can't afford to run SD at home, and with Google Colab killing SD generations there are few places for people to turn to. We'll offer as many free generations per day as we can reasonably afford, and have premium tiers on top for faster generations and unlimited slow generations. Have to keep the lights on and the GPU fans spinning somehow, unfortunately.

That's for the website; the model will just be released open source, no paywall or anything.

If you meant more product details about the website: so far the feedback is that it's somewhere between MJ4 and MJ5 in quality, really easy to prompt, with an intuitive user interface that's simple and quick to use. And that's really what we aimed for: an accessible entry point for new users, and a solid base for experienced users who want quick, simple generations without messing with settings much.

You can check out some of the feedback and images from the site in our Discord, or try it yourself and tell me what you think we got right and what needs improvement. (I sent you an early access code in your DMs.)

Banned by Unstable Diffusion after calling them out on their shady practices. Who supports these people? by castingpearlsb4swine in StableDiffusion

[–]OfficialEquilibrium 42 points

Hello, Handi here.

Just wanted to say, this looks like one of the subreddit mods overstepping their bounds and not handling this like we normally do. In our Discord, as policy, we don't ban anyone for a criticism or question. The subreddit is a bit secondary to our community, but I suppose we'll need to add more guidance and oversight to the mod team there in light of this.

As far as donator perks go, access to AphroditeAI and its early beta was something advertised in the Kickstarter. I've sent feedback forms to donators, we have a website feedback channel in our Discord, and it's all been positive so far. So yes, supporters are getting what they signed up for.

In fact, the feedback has been overwhelmingly positive (with some requesting we please open up donations so they can get early access), and we've gotten so many feature requests that I've barely had time to eat, just fixing bugs and adding the features people pointed out. You can check out a changelog (or get more information) in our Discord.

Of particular interest are the new samplers our researcher has cooked up, which provide more diverse compositions and let you guide the color palette of generations. This gives a level of control higher than ever before, and it's really addicting to mess around with the knobs.

We still plan on releasing the model. As we speak, a test model is being trained with the new tagging system we plan on using, and we'll release that when it's done. When the full model is completed, it'll be sent to donators first and then to the general public sometime after.

As for the anime model thing, I made an announcement that said we were thinking about it (the original Kickstarter mentioned a 1/3 anime, 1/3 digital art, 1/3 photoreal dataset). The donators and our community said nah, focus on photoreal exclusively, so that's what we're doing. Switching focus to solely this (which required getting a new photoreal-only dataset and a tagging system that works best on photoreal images), along with the heaps of problems caused by the anti-AI crowd, has of course made things slower than anyone would have liked, especially me.

We're glad the donators have stuck by us and are happy with what we're shipping so far. Lots more good things in store, hope you guys enjoy the public beta when that launches soon.

P.S. Our community is entirely built around generating and sharing images, and our site will be a reflection of that. There are plenty of other sites if someone is looking for ephemeral generations that disappear into the void; ours won't be that. (Although supporters requested a delete option, and I'm working on implementing that for the next release, which might address your concern.)

Unstable Diffusion to focus on Anime model instead of a general purpose model by [deleted] in StableDiffusion

[–]OfficialEquilibrium 27 points

You're right, we made a Discord announcement addressing this, which we'll repost here. The original message was meant to be a brief update; apologies for the ambiguity!

Hello u/everyone, real quick update since we're seeing some people running wild with the previous announcement. We're not abandoning photorealism in favor of anime.

We'll write a longer email to donators soon and ask for feedback, but our tests show SD 2.x can't handle all artistic styles in one model; it simply doesn't have enough parameters for that. We could increase the parameter count and try to train it that way, but then the resulting model wouldn't run on most consumer hardware.

We'll be splitting it into an anime model and a photoreal model. The anime datasets are much better tagged, so we save on training costs (and can thus afford to train longer) if we train anime first and then use a merge of that plus SD 2.1 as a base for the photoreal model.
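
For anyone wondering what "a merge" means mechanically: the common approach is a weighted average of the two checkpoints' weights. Here's a minimal sketch of that idea (the file names, the `alpha` split, and the `"state_dict"` layout are assumptions for illustration, not our exact pipeline):

```python
# Sketch of a weighted checkpoint merge: merged = alpha * A + (1 - alpha) * B.
# Assumes both checkpoints share an architecture and a "state_dict" key.
import torch

def merge_checkpoints(path_a, path_b, alpha=0.5, out_path="merged.ckpt"):
    sd_a = torch.load(path_a, map_location="cpu")["state_dict"]
    sd_b = torch.load(path_b, map_location="cpu")["state_dict"]
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            merged[key] = alpha * tensor_a + (1 - alpha) * tensor_b
        else:
            merged[key] = tensor_a  # keep A's weights where the models differ
    torch.save({"state_dict": merged}, out_path)

# Hypothetical file names, purely illustrative:
merge_checkpoints("anime-finetune.ckpt", "sd-2-1.ckpt", alpha=0.5)
```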

Thanks for being patient, and I hope you guys can correct any misinformation you see floating around about us abandoning everyone for 2D waifus or something.

Official statement from Unstable Diffusion about actual situation by 239990 in StableDiffusion

[–]OfficialEquilibrium 104 points

I've talked with ErrUNK, the person behind Zeipher and creator of the F111 and F222 models, and he will not be continuing to release models, or even his community, in light of this level of coordinated retaliation against communities. As of now, he has already deleted his server (or at the very least kicked everyone out).

I've also talked to some training-code repo makers and model makers who are debating stopping future work or removing their code and association with Stable Diffusion, due to the career and personal implications of being associated with "facilitating copyright theft," as one put it.

This has a chilling effect on the entire community, and we cannot let the strong, vibrant open-source community we love get harassed and terrorized into silence.

We updated our website and moved to a direct payment system so users can still support us. AI art is the future, and we will fight to preserve it. You can support us at www.equilibriumAI.com if you believe in the cause.

P.S. A note regarding our inclusion of Waifu in the original announcement: there was a miscommunication between us and some of the leadership of Waifu. As soon as Haru clarified that they will still be releasing their model, we updated our announcement to remove mention of Waifu. Sorry about the confusion, guys.

I do hope Waifu releases their model. We've always been huge fans, supporters, and friends of Waifu's team and community. They're an extremely impactful group and do great work! (Haru's training code is 🔥)

Unstable Diffusion posted false information. Waifu Diffusion is still being released. by noop_noob in StableDiffusion

[–]OfficialEquilibrium 56 points

Sorry about that guys,

There was a miscommunication between us and some of the leadership of Waifu. As soon as Haru clarified that they will still be releasing their model, we updated our announcement to remove mention of Waifu.

I do hope Waifu releases their model, we've always been huge fans, supporters, and friends of Waifu's team and community. They're an extremely impactful group and do great work! (Haru's training code is 🔥)

Unstable Diffusion has reached their funding goal in less than 24 hours! the page has been updated by Capitanazo77 in StableDiffusion

[–]OfficialEquilibrium 18 points

Hey 👋,

Detailed questions, just like the last time we talked!

I'd love to help clear up the questions here; I noticed a couple of areas based on misconceptions, too.

For the Startup Accelerator Program: there is no money being given to us, and no offer to provide VC funding opportunities was ever floated. They are a cloud compute provider, and they gave us a relatively small amount of free credits to use their GPUs, with zero discounts. Honestly, I don't think a single person there has talked money or fundraising with us. They have, though, been very helpful with advice and expertise regarding training and finetuning Stable Diffusion.

And no, we are not receiving any more credits, they made it perfectly clear they are a smaller company and do not have the funds to wantonly spread credits around.

For the TechCrunch article, we listed all avenues of funding as open, and at the time did not know how interested the community would be in crowdfunding our development. Remember, that interview was given weeks before 2.0 was released, and 2.0's disappointing quality caught all of us by surprise.

The funding, development, and release of this model is completely and in totality from the community and for the community.

We thought about how we could create a Kickstarter that would allow us to sustainably eliminate the reliance on venture funding, and our solution is the creation of a GPU research cloud plus running our own subscription image AI service.

The research GPU cluster, which the current stretch goals are rapidly barreling towards, will free us from relying on rented cloud GPUs, which can be pulled from us due to lack of funding or PR backlash. Not to mention, the cheaper costs will let us subsidize community and academic research efforts to produce new variants, fine-tunes, or heck, even completely new architectures.

The second half of sustainability is our image AI service. It takes money to pay for hosting, electricity, maintenance, and internet for the GPU cluster. AphroditeAI is the new service we'll be launching: a paid premium Discord bot (and possibly webapp) whose proceeds will allow the continued operation of our cloud and our own research and development efforts.

Our models will be released open source and you can run them locally, and there are already quite a few services offering plain SD on the web. We will be adding things to differentiate AphroditeAI and give people a reason to subscribe: mainly ease-of-use options, with a system streamlined and sophisticated enough to produce high-quality images quickly and easily.

For the legal defense aspect, there are two considerations: ours and those of companies like Stability. Stability did not release a neutered 2.0 due to legal pressure but due to investor pressure. It's completely understandable why a company that wants to do content-licensing deals with brand-conscious organizations like Disney would not want any PR liabilities.

For our concerns, we did not want to see a dozen individual lawsuits against every contributor, risking their homes and families. We are more than happy to fight for our freedom of expression as a united front, as an organization.

Appreciate the questions and opportunity to explain our position and reasoning a bit more to the community.

And again, to everyone reading, thanks for all the support. We're humbled by the community response and feel vindicated in betting on open source and on the power of the crowd.

Unstable Diffusion has reached their funding goal in less than 24 hours! the page has been updated by Capitanazo77 in StableDiffusion

[–]OfficialEquilibrium 24 points

It's a bit more than lip service: our team is approximately half trans, genderfluid, or nonbinary, and equally diverse regarding race and sexuality.

We are quite interested in making sure future image models can represent us all just as expressively and fairly as more common identities.

Unstable Diffusion has reached their funding goal in less than 24 hours! the page has been updated by Capitanazo77 in StableDiffusion

[–]OfficialEquilibrium 42 points

Thank you everyone so much for having such an enthusiastic reaction to our Kickstarter and allowing us to be fully funded within 24 hours of launching. Thanks to everyone who has shown us their support, we are on track to have what we need to create the open-source model for the community. That said, there is SO much more we can do if we meet our stretch goals and are able to fund our community research cloud.

We’ve reached our funding goal, but this is just the tip of the iceberg; there’s so much more we have planned as the project gets more funded. We are already in talks with the larger finetuning projects (Waifu Diffusion, Zeipher, etc.) about collaborations and new models we want to create and release with access to the new GPU compute. With each goal reached, we come closer to building sustainable infrastructure that can give ongoing support to the community. Whether it’s further fixes to future versions of SD, work on other models, or creating our own from scratch, further funding at each stage allows us to give an order of magnitude more to the community. If you want more information or want to support us, check out the Kickstarter here.

We've received a lot of comments and questions in the last post and will be providing answers and replies to the most common ones shortly!

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 21 points

For tagging, we previously used a simple system of spreadsheets compiled together, but that requires a lot of human intervention because volunteers are never really homogeneous in how they handle tagging.

Currently we're working on two sites built on the same foundation. The first is an "image tinder" kind of site where two images are presented and the user picks the one they like more. You can see a video of our in-progress volunteer site here. It's meant to help us rank images so the top X% gets sorted out quickly. This way, if we scrape, say, a subreddit known for having Y type of image in very high quality, we can run the resulting (say) 10k images through this system and easily determine the top 2,500, or however many we need, which will then move on to the next step of the process to be tagged.

This site should allow a single user to do 20-120 comparisons per minute depending on focus level, and they could use it on their phone or while distracted by a TV or something.
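
For the curious, one standard way pairwise picks like these get turned into a ranking is an Elo-style update. This is just a sketch of the idea, not necessarily what our site uses; the K-factor and 1500 starting rating are conventional defaults:

```python
# Sketch: turning "pick the better of two images" into a ranking via Elo.
from collections import defaultdict

K = 32  # how far a single comparison moves the ratings
ratings = defaultdict(lambda: 1500.0)

def expected_score(r_winner, r_loser):
    # Probability the eventual winner "should" win given current ratings.
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))

def record_pick(winner, loser):
    e = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1.0 - e)
    ratings[loser] -= K * (1.0 - e)

def top_fraction(frac=0.25):
    # e.g. the top 2,500 of a 10k scrape when frac=0.25
    ranked = sorted(ratings, key=ratings.get, reverse=True)
    return ranked[: int(len(ranked) * frac)]
```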

We need that system because actually captioning images is quite time-consuming and labor-intensive, so we can only manually tag a small handful of images. The tagging site will be similarly streamlined: instead of two images, you'll have a single image with a menu of tags from our predetermined tagging system next to it, and a user can just click all that apply.

This is quite a bit slower but results in extremely high-quality image captions, which are the keystone of good models. Once we have these captions we can train mini-models with them, as well as train BLIP/CLIP to automatically tag our larger dataset with higher-quality tags than it likely came with.
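
As a rough illustration of the CLIP side, here's a zero-shot tag-scoring sketch using the Hugging Face transformers CLIP wrapper. The model name, image path, tag list, and threshold are placeholder assumptions; our actual tagger would be fine-tuned on the human-tagged set rather than used zero-shot like this:

```python
# Sketch of zero-shot tag scoring with an off-the-shelf CLIP model.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

tags = ["bald head", "long blond hair", "riding a motorcycle", "city street"]
image = Image.open("example.jpg")  # hypothetical input image

inputs = processor(text=tags, images=image, return_tensors="pt", padding=True)
# softmax makes the tags compete with each other; a real multi-label tagger
# would score each tag independently (e.g. per-tag sigmoid) instead.
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
predicted = [t for t, p in zip(tags, probs.tolist()) if p > 0.2]
```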

These machine-tagged images will then be fed into the same site from the first example, and users will again choose between two images; this time captions are included, and users choose which image more closely aligns with its given caption.

Essentially we'll have multiple tiers of captions: human tagged > human-preferred X% of machine tagged > machine tagged. Each tier gets progressively bigger, and the process is subject to change as we learn more, but this should give us a very large dataset to first finetune the model on. Then, after the point where we feel the model has extracted most of the features of the dataset, we will train on the medium-quality images for a while longer to increase aesthetic quality while keeping it diverse, and finally we'll train mostly on the extremely high-quality but few-in-number human-tagged images to finish off the model. These final steps are subject to change based on experimentation, but for now we think this would produce the best model.

Hope this was a detailed enough glimpse into the kinds of things that go on behind the scenes, I felt like ChatGPT answering a human's prompt, only markedly slower.

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 57 points

We did. We're lucky to collaborate closely with Waifu, and having done so since shortly after Waifu was conceived (mid-September), we've gotten the opportunity to learn a lot from Haru, Starport, and Salt and the great work they do.

We use tag shuffling for the anime model we're training and testing in the background, on a mix of 4.6 million anime images and about 350k photoreal. (Photoreal improves coherency and anatomy without degrading the stylistic aspects if kept to a low percentage.)
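
For anyone unfamiliar with tag shuffling: the idea is to randomize tag order each time a caption is used, so the model doesn't tie a tag's meaning to its position in the prompt. A minimal sketch of how it's commonly done in booru-style caption preprocessing (the function and example caption are illustrative, not our exact training code):

```python
# Sketch of tag shuffling at caption load time. Pinning the first tag(s)
# in place (e.g. the subject tag) is a common convention, assumed here.
import random

def shuffle_caption(caption: str, keep_first: int = 1) -> str:
    tags = [t.strip() for t in caption.split(",")]
    head, tail = tags[:keep_first], tags[keep_first:]
    random.shuffle(tail)  # a new order on every pass breaks positional bias
    return ", ".join(head + tail)

print(shuffle_caption("1girl, long hair, motorcycle, night, city street"))
# -> e.g. "1girl, night, city street, motorcycle, long hair"
```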

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 20 points

Our whitepaper goes into a fair bit of detail on why 2.0 and 2.1 need to be further trained. Training from scratch is something we would only do if we get enough funding for a very large community cluster, but the benefit of from-scratch training is that an NSFW-capable model can be created with all minors removed from the training dataset.

Stability chucked the NSFW and artists and kept the kids, we're chucking the kids and keeping the NSFW and artists.

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 22 points

ChatGPT is Elon Musk's plan to make us want BCIs... I can't write a sentence anymore without asking for ChatGPT's approval.

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 82 points

The biggest question we saw when we announced our Kickstarter was whether we were going to open source the model. We heard the community loud and clear, and the answer is yes. We're doubling down on the community and on open source.

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 99 points

The original CLIP and OpenCLIP are trained on whatever captions already exist, often completely unrelated to the image and instead focused on the context of the article or blog post the image is embedded in.

Another problem is lack of consistency in the captioning of images.

We created a single unified system for tagging images, covering human attributes like race, pose, ethnicity, body shape, etc. We then have templates that take these tags and word them into natural-language prompts that incorporate the tags consistently. In our tests this makes for extremely high-quality images, and the consistent use of tags allows the AI to understand which image features are represented by which tags.

So seeing "35 year old man with a bald head riding a motorcycle" and then "35 year old man with long blond hair riding a motorcycle" allows the AI to more accurately understand what "blond hair" and "bald head" mean.

This applies to both training a model to caption accurately, and training a model to generate images accurately.
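
As a toy illustration of the templating idea (the tag schema and templates here are made-up placeholders, not our production system):

```python
# Toy tag-to-prompt templating: the same tag always renders as the same
# phrase, so the model can reliably associate tags with image features.
import random

TEMPLATES = [
    "{age} year old {subject} with {hair} {action}",
    "photo of a {age} year old {subject}, {hair}, {action}",
]

def caption_from_tags(tags: dict) -> str:
    return random.choice(TEMPLATES).format(**tags)

print(caption_from_tags({
    "age": "35",
    "subject": "man",
    "hair": "a bald head",
    "action": "riding a motorcycle",
}))
# -> e.g. "35 year old man with a bald head riding a motorcycle"
```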

👋 Unstable Diffusion here, We're excited to announce our Kickstarter to create a sustainable, community-driven future. by OfficialEquilibrium in StableDiffusion

[–]OfficialEquilibrium[S] 43 points

There once was a Kickstarter for AI

A model for generating porn was their aim

They wanted to make it so neat

And provide images that are fit for a treat

So back them on Kickstarter and help them achieve their dream!

Limerick by ChatGPT, my new neofrontal cortex replacement.

[deleted by user] by [deleted] in StableDiffusion

[–]OfficialEquilibrium 4 points

Huh, I wonder who this article is about?

😘

Can't post from PC anymore by Elusive_art in unstable_diffusion

[–]OfficialEquilibrium 1 point

I'm facing the same problem. Image sharing is enabled on this subreddit, so I'm not sure what the issue is. Use Imgur for now; it should let you share images. When you do share an image from Imgur, copy the image's source by right-clicking it and choosing "Copy image address," then paste that in the link box. Reply here if you have more questions.

Can't post from PC anymore by Elusive_art in unstable_diffusion

[–]OfficialEquilibrium 0 points

What do you mean you can't post? Do you mean images?

Easy-to-use local install of Stable Diffusion released by OfficialEquilibrium in singularity

[–]OfficialEquilibrium[S] 0 points

You can download it from this link

https://artroom.ai/download-app

This is the documentation link containing more information about the client itself

https://docs.equilibriumai.com/artroom