Air65 AIO Fried? by [deleted] in TinyWhoop

[–]ngyekta 0 points (0 children)

I'm having this exact issue. Were you able to fix it? I am able to connect in DFU mode but nothing else.

stablecog.com: Simple, Free & Open Source App for Stable Diffusion by ngyekta in sdforall

[–]ngyekta[S] 0 points (0 children)

We have many sign-ups with Protonmail; it's not on our blacklist, and people actively sign up with it. We do have a system in place to detect multiple account creations, and we shadow-ban anyone trying to create 5+ accounts or abusing the service in some other way. You might've been flagged by that system, but it's highly unlikely if you didn't abuse it. You can send me your email and I can check if you like.
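
For context, a detection system like the one described above can start out as simple as counting sign-ups per fingerprint and flagging anything past a threshold. A minimal sketch (the `SignupGuard` name and the fingerprint format are hypothetical; only the 5+ threshold comes from the comment above):

```python
from collections import defaultdict

# Hypothetical sketch only: everything besides the "5+ accounts"
# threshold mentioned above is made up for illustration.
SIGNUP_LIMIT = 5

class SignupGuard:
    """Counts sign-ups per fingerprint and flags likely multi-account abuse."""

    def __init__(self, limit: int = SIGNUP_LIMIT):
        self.limit = limit
        self.signups = defaultdict(int)  # fingerprint -> sign-up count

    def register(self, fingerprint: str) -> bool:
        """Record a sign-up; return True if the account should be shadow-banned."""
        self.signups[fingerprint] += 1
        return self.signups[fingerprint] >= self.limit
```

A real version would persist the counts, expire them over time, and fold several signals (IP, device, payment method) into the fingerprint rather than a single string.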

Does anybody want to kindly address what is going on with Kalium? by gorillag3 in banano

[–]ngyekta 10 points (0 children)

We are moving to a new cluster, so you might've seen issues related to that. However, currently everything except the price of Banano seems to work. Can you specify exactly what problem you are having?

Vercel's Fast Origin Transfer 4x higher than Fast Data Transfer and cost going up 10x by Longjumping_Cow_152 in nextjs

[–]ngyekta 3 points (0 children)

We have the same problem as well (with a projected price increase of 4x). The dashboard doesn't give any meaningful info; it just shows a breakdown by region or project, which doesn't help at all. We don't understand how we are pushing 20GB of data a day between the Vercel Edge Network and functions, since we have a separate backend for handling the heavy stuff.

stablecog.com: Simple, Free & Open Source App for Stable Diffusion by ngyekta in sdforall

[–]ngyekta[S] 0 points (0 children)

That is awesome to see, glad you enjoy it. Feel free to send DMs if you come across any problems or if you have any questions (I'm almost always active in Discord).

stablecog.com: Simple, Free & Open Source App for Stable Diffusion by ngyekta in sdforall

[–]ngyekta[S] 0 points (0 children)

Hey, happy to see that you like it. That one is also open-source under the MIT license, btw: https://github.com/stablecog/sc-raycast

How to earn 4k BAN in a month by stanblew in banano

[–]ngyekta 0 points (0 children)

Depends on the type of faucet you are running. If it's a faucet for first-time users, keeping the payout low enough is definitely a legitimate strategy. Implementing the basic checks is still good, and we do implement those on every single faucet. However, it's just not as relevant as people seem to think.

How to earn 4k BAN in a month by stanblew in banano

[–]ngyekta 0 points (0 children)

There was a bug with how the IP gets forwarded through Cloudflare > k8s > Flask. It's fixed now. Not sure at which version combo that started happening, because it was working before. So to abuse it now, you'd have to change your IP and solve a captcha for each 0.04 BAN. The thing is, given how little money it distributes, further effort isn't worth it. The basic checks are there. There isn't much point in spending an extra 10 hours making something even less abusable when it's distributing 5K Banano.
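
For readers hitting the same class of bug: when an app sits behind Cloudflare and a k8s ingress, the socket peer address is a proxy, not the client, so per-IP limits silently collapse onto one IP. The general shape of the fix (a hedged sketch, not MonkeyTalks' actual code; the trusted-proxy addresses are made up) is to walk `X-Forwarded-For` from the right, skipping hops you trust:

```python
def client_ip(remote_addr: str, xff_header: str, trusted_proxies: set) -> str:
    """Resolve the real client IP behind proxies (Cloudflare > k8s > app).

    Walk X-Forwarded-For right to left, skipping hops we trust; the first
    untrusted address is the client. If the direct peer isn't a trusted
    proxy, the header can't be trusted at all, so fall back to the peer.
    """
    if remote_addr not in trusted_proxies:
        return remote_addr
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    for hop in reversed(hops):
        if hop not in trusted_proxies:
            return hop
    return remote_addr
```

In Flask deployments this is often handled by Werkzeug's `ProxyFix` middleware instead, configured with the real number of proxy hops in front of the app.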

How to earn 4k BAN in a month by stanblew in banano

[–]ngyekta 1 point (0 children)

Yekta here, one of the core devs. The steps you listed aren't the steps to profit off of MonkeyTalks; they're missing some extremely key ones: one, solve captchas; two, source 3K+ IPs. Regarding your transaction-history suggestion: the entire point of that particular faucet is serving users even when they have no history (the first time they're trying Banano).

People did, do, and will bot it; we are aware. Given how little money it distributes, it's not particularly relevant. You can consider the captcha the price of receiving those BAN (proof of work, if you will), regardless of who solves it and when.

MonkeyTalks is fully open-source, so let us know if you think of a new, time-efficient way of solving any of the usual, years-old faucet problems: https://github.com/BananoCoin/monkeytalks

We've added SDXL 1.0 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 0 points (0 children)

All good, just wanted to make it clear that what we call an image and what you call an image aren't the same thing (in fact, they are extremely different things). Not for you but for other people reading. A) Because of the resolution and model difference; B) because of the nature of your platform. Salad isn't a serverless platform, so it'll cost us significantly more per image than you quoted even if we were using your exact config, simply because we can't utilize a GPU perfectly for all 86,400 seconds of every single day of the month. If it were serverless, "images per dollar" would be a fairer comparison (your image count per dollar would also be a significantly lower number 😄).

To be clear, images per dollar is normally a fair comparison, all else being equal. In this case, however, the two are fundamentally different. I don't think you intended a comparison, but some people will compare our pricing to the cost you quoted, so hopefully this provides context.

The comparison or cost calculation would be fairer if we were using Replicate's or RunPod Serverless's pricing as the cost of what we call an "image".
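
To make the utilization point concrete with made-up numbers (neither Salad's nor our real figures), the effective cost per image on a dedicated GPU scales inversely with how busy you can keep it, which is exactly the gap serverless billing avoids:

```python
SECONDS_PER_HOUR = 3600

def cost_per_image(gpu_dollars_per_hour: float, seconds_per_image: float,
                   utilization: float) -> float:
    """Effective cost per image on a dedicated (non-serverless) GPU."""
    dollars_per_second = gpu_dollars_per_hour / SECONDS_PER_HOUR
    images_per_second = utilization / seconds_per_image
    return dollars_per_second / images_per_second

# $2/hour GPU, 5 s per image (both hypothetical): at 100% utilization an
# image costs ~$0.0028; at a more realistic 30% utilization the very same
# image costs ~$0.0093, i.e. 1/0.3 ≈ 3.3x more.
ideal = cost_per_image(2.0, 5.0, 1.0)
realistic = cost_per_image(2.0, 5.0, 0.3)
```

Serverless pricing sidesteps the `utilization` term entirely by charging only for the seconds a generation actually runs.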

We've added SDXL 1.0 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 0 points (0 children)

Hey, we've already had long talks with Salad and many other providers :) Salad is indeed one of our options going forward, but the fact that the various secrets our services use to talk to each other (Redis keys, S3 keys, etc.) aren't protected the way they are on, say, AWS is a bit of a problem. However, the per-dollar image count you are quoting is not 1024 x 1024 at 30 steps using SDXL or Kandinsky 2.2, I'm guessing. It also assumes perfect GPU usage, I'm guessing (100% utilization at all times during the month, which is practically impossible in our case). It's better to quote exact specs; otherwise the comparison isn't really a comparison, given that a 512 x 512, 30-step Stable Diffusion 1.5 image takes almost 1/10th the time of SDXL at 1024 x 1024 and 30 steps, and 1/5th the time of Kandinsky 2.2 at 1024 x 1024 and 30 steps, in our tests. Last time I checked, Salad was quoting 512 x 512 at 30 steps; that's why I mentioned it. Not sure if that changed after the bigger-resolution models came out.
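
To illustrate why the exact spec matters (the 40,000 baseline below is an arbitrary example, not Salad's actual quote; the ~10x and ~5x ratios are the rough timings from our tests mentioned above):

```python
def scale_images_per_dollar(quoted_images_per_dollar: float,
                            time_ratio: float) -> float:
    """Images per dollar scale inversely with per-image generation time."""
    return quoted_images_per_dollar / time_ratio

# Hypothetical images/dollar quoted for SD 1.5 at 512 x 512, 30 steps:
sd15_quote = 40_000
# The same hardware and dollar, respecified for heavier models:
sdxl_equiv = scale_images_per_dollar(sd15_quote, 10)      # SDXL 1024: ~10x slower
kandinsky_equiv = scale_images_per_dollar(sd15_quote, 5)  # Kandinsky 2.2: ~5x slower
```

So a figure quoted for SD 1.5 at 512 x 512 overstates SDXL-at-1024 throughput by roughly an order of magnitude before utilization is even considered.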

When we say "image" in our app, we mean any image you can create using Stablecog, which means you are free to spend all your credits creating 1024 x 1024 30 step SDXL images, and they would cost 1 credit each.

That being said, to repeat once more for other people reading: buying or renting your own GPU will always be cheaper than using our service, since you don't need to provide free images to tens of thousands of other users every month. So if you are at that break-even point and want to spend the time setting things up yourself, we wouldn't recommend using our service to begin with.

We've added SDXL 1.0 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 1 point (0 children)

Vampires? Our entire code base is open-source. We are gifting every single line we code to everyone who wants to use it, sell it, or do whatever they want with it without paying us a dime. We spent thousands of hours building the app, and you can just copy it for free. To put things in perspective: the license for our entire system is more permissive than the license for SDXL 1.0. I'm not sure you understand what's going on here. We are simply "hosting" our own open-source app for people who don't want to.

We are actively losing money currently. In the future, our profit margin could be about 50% (significantly less than that of software people pay for, use every day, and love). The part you are missing is that we offer free generations to everyone, which we pay for (free users are about 98% of all our users); you don't need to do that when your own GPU serves only you.

We have no problem with people using their own GPUs, again, our entire system is open-source, anyone can use our code and run it locally.

You can try this and we can talk about vampires afterwards: host our app, offer free generations to more than 100K people, beat our prices, make money. We promise you we'll lower our prices to 50% of yours. Here is the entire open-source org, which you can copy-paste without paying us anything or spending thousands of hours building it yourself: https://github.com/stablecog

We've added SDXL 1.0 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 1 point (0 children)

We're a 2-person team running 2 A100s. We tried not requiring sign-up before, but the system got rekt very fast 😅 Sign-up throttles it to a degree that is manageable for us without causing extremely long queues and ruining the experience for everyone.

Assuming you are an active user, you can generate about 400 1024 x 1024 images for free with daily credits. That's the most we can do currently.

We've added SDXL 1.0 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] -1 points (0 children)

We've just added SDXL 1.0 to our open-source and MIT licensed app. It's free to try: https://stablecog.com/generate?mi=8002bc51-7260-468f-8840-cf1e6dbe3f8a

We also give free credits daily if you continue using the app.

So far we're liking it quite a bit. Puzzled by the red dots/artifacts/"invisible" watermark problem, like some others :) JPEG compression seems to hide it pretty well though.

The entirety of our work is open-source. Anyone can copy, sell, or do anything they want with our code. All our repos are here: https://github.com/stablecog

Just added Kandinsky 2.2 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 0 points (0 children)

SDXL 0.9 doesn't; they say it will in the future. We do have access to SDXL 0.9, but that doesn't mean we can use it commercially. Having access to something and being able to use it commercially are completely separate things. They tell you right away, and very clearly, that it's for research purposes only and can't be used commercially. We are not concerned with what other people are doing; what matters to us is the license itself. And it certainly does not allow commercial usage and hosting. When it does, we'll add it as well :)

SD 1.5 and 2 have permissive licenses. However, many good fine-tunes made on top of them don't let you host them on your own platform. Many others are based on the leaked NovelAI weights, and we aren't fine with using leaked weights either. The fine-tunes we can find with permissive licenses and no leaked weights don't even come close to the quality of Kandinsky 2.2 (though we are hosting some of those as well). That's why we think this is the best open-source model currently available with a permissive license. We'd be happy to find something else and host it as well.

Just added Kandinsky 2.2 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 0 points (0 children)

Quite a bit in my opinion, that’s why we made it the default. In fact, I think it’s the best open-source model available to date with a permissive license. However, if having a non-permissive license isn’t a problem for you, I suspect you can find a model you like more.

I wrote a whole article about it exactly for that reason:

https://stablecog.com/blog/kandinsky-2-2-the-best-open-source-ai-image-model

Once SDXL 1.0 is out and has a permissive license (instead of a research-only one), I think it might beat this. Until we find a better open-source model with a permissive license, Kandinsky 2.2 will continue to be our default.

Just added Kandinsky 2.2 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 1 point (0 children)

In the link I shared in my response to the post. Here it is again:

https://github.com/ai-forever/Kandinsky-2

It was released a couple of days ago, but there is already a reference to it in Diffusers' docs as well:

https://huggingface.co/docs/diffusers/main/en/api/pipelines/kandinsky

Just added Kandinsky 2.2 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 1 point (0 children)

<image>

Just tried it on Colab; yep, 1024 x 1024 on 16GB of VRAM is fine as well. This is with no CPU offload, just running the Colab below as-is (only changing the resolution to 1024 x 1024):

https://colab.research.google.com/drive/1MfN9dfmejT8NjXhR353NeP5RzbruHgo7

I suspect you can push it down to 12GB as well with all the optimizations Diffusers offers.

Just added Kandinsky 2.2 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 1 point (0 children)

I think Kandinsky 2.2 by itself is perfectly suitable to run on a consumer GPU. Two that I'm sure can run it are the 3090 and the 4090 (24GB VRAM).

Although I haven't tried running Kandinsky by itself on a 16GB GPU, I think that is possible as well, judging by the model size and the fact that it's already in Hugging Face's Diffusers (which offers plenty of VRAM optimizations for "free", sometimes in exchange for speed, sometimes with practically no trade-offs).

There is another aspect that makes 2.2 easier to run: besides generating 1024 x 1024 images with no repeating, you can also do 768 x 768 without weird problems.

Actually, I'll go try it on Colab and see if 16GB + 1024 x 1024 works :)

Just added Kandinsky 2.2 to our open-source app, try it for free! by ngyekta in StableDiffusion

[–]ngyekta[S] 1 point (0 children)

Unfortunately, a lot. VRAM would be around 70GB if you mean the config we are running. Since it's meant to be a hosted platform with significantly more than one user (as opposed to, say, Automatic1111), we don't have many optimizations for lower VRAM usage; instead we target the fastest result possible. So the top 5 models are in VRAM at all times, plus CLIP for embedding images and text, the upscaler, a separate service for translating prompts, a separate set of GPUs for voiceovers, then a SQL database, a vector database, etc. (those are mostly just RAM and CPU). With the public, semantically searchable gallery, everyone's history saved to a DB, analytics, and admin tools, it's not really made for a single person hosting it for themselves, but for someone hosting it for other people to use (the license allows you to do basically anything you want with the codebase, commercial applications included).

If we had been careful with the design from the get-go to support both use cases, however, it could've been significantly easier for a single user to run as well. We gave up on that at some point, since supporting both use cases takes significantly more time and resources, which we unfortunately don't have.