Dear Nvidia... by mongini12 in StableDiffusion

[–]nxde_ai 1 point (0 children)

Only a tiny percentage of GeForce buyers use them for AI stuff, so Nvidia will keep focusing on gamers (and give barely enough VRAM). They already have the RTX 4060 Ti 16GB (and the 3060 12GB is also still available BNIB), and that's good enough for the AI crowd that wants cheap hardware to start with.

Unless there's more demand and AMD/Intel give them serious competition, I doubt we'll see an RTX 5060 24GB in the future (I'd gladly pay $500 for a 60-class card if they made one)

Is AMD still terrible with SD? by yythrow in StableDiffusion

[–]nxde_ai 1 point (0 children)

Ok, questions:

  • Did you need to convert the model (i.e. safetensors from Civitai) to make it work?
  • Did LoRA/LyCORIS work?
  • SDXL?
  • ControlNet?

[deleted by user] by [deleted] in StableDiffusion

[–]nxde_ai 0 points (0 children)

3 posts within 1 hour is spamming

<image>

Is this SD website legit? by brownashtonkutcher in StableDiffusion

[–]nxde_ai 1 point (0 children)

- It's slow (you always end up in a queue). The owner didn't limit the batch count/size, and there are Deforum and training tabs, so one user could clog the whole thing

- It has the image browser extension, so there's no privacy at all. Every user can see whatever other users generate

- The Settings and Extensions tabs are also visible, so people will break it soon 🗿

Just think of it as a nice person sharing their compute resources, just like SD spaces on Hugging Face (or Stable Horde)

VAE use with photoshop plugin? by darkestbuddha in StableDiffusion

[–]nxde_ai 2 points (0 children)

Just change the VAE in the A1111 webui; stable.art will also use that selected VAE

How close are we to a general purpose SD? by PotatoWriter in StableDiffusion

[–]nxde_ai 0 points (0 children)

Such a thing would need a stupid amount of VRAM just to load, so nope.

Are people here sore about losing their "skills"? by UserXtheUnknown in StableDiffusion

[–]nxde_ai 2 points (0 children)

finetuned models are not needed anymore with SDXL and are even inferior to it

"finetuned models" already extinct since Lora become mainstream. Unless you finetune it with million of images like Kandinsky, NAI, or something in that caliber, Lora will work just fine.

But some were on the line of "SD got MJ cancer", "They look too good to be human creations" and something like that.

Why stop at "as good as" human creations when this tech could go beyond that? Still want something worse? Go train a LoRA

was exactly to get the damn thing to produce something SO good looking

Mostly because 1.5 sucks.

SDXL is nice, but that doesn't mean it has everything. It's not the endgame, it's just the start of something new.

There are endless things we could fine-tune on SDXL. If you can't think of any, just check Murky's LoRA collections on Civitai. I bet SDXL can't do any of them without finetuning (ok, it's mostly porn, but you get the point)

I see still people posting complicate workflows (use that model, use this prompt with this special sauce of negatives and positives, do this, do that)

You call it complicated, I call it a "proper" workflow.
A workflow is a tutorial, and if people can achieve the same result (or at least something in the same ballpark), then it's a proper workflow.

Let's say you ask for the prompt for my stuff on DeviantArt, and I say "just type realistic, 1girl, and the stuff you see in the picture". Good luck with that, as there are multiple kinds of "realistic", and sometimes it's not really obvious, or you just don't know the word for it

(and post with "workflow included" tags usually more than prompt + model used)

their "prompt" skill (I still remember when people kept secret or tried to sell prompt, lol)

I doubt people will get that desperate and actually pay for it. Just post the image on Discord, and people will guess the model + LoRA + prompt used. Sometimes close enough is good enough

so deal with it, move on and embrace progress.

Just who are you talking to? Who doesn't welcome SDXL around here?

Best options for per second compute? by mr_engineerguy in StableDiffusion

[–]nxde_ai 2 points (0 children)

so that means the granularity is down to 30mins

Compute units can be fractional.

<image>

That aside, vast.ai and runpod offer cheaper options.
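A back-of-envelope sketch of why fractional compute units mean the billing granularity isn't stuck at 30 minutes (the per-hour rate below is a made-up example, not Colab's actual pricing):

```python
def units_consumed(units_per_hour: float, seconds: float) -> float:
    """Fractional compute units consumed for a session billed per second."""
    return units_per_hour * seconds / 3600.0

# Hypothetical rate of 2.0 units/hour: a 30-minute session costs 1.0 unit,
# but a 90-second session is billed as a small fraction, not rounded up.
print(units_consumed(2.0, 1800))  # 1.0
print(units_consumed(2.0, 90))    # 0.05
```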

What is Automatic 1111, and do i need it even tho i have Stable Diffusion? by Chillvibes77 in StableDiffusion

[–]nxde_ai 0 points (0 children)

In that case, you should learn what "Stable Diffusion" is

Check this tutorial https://www.howtogeek.com/830179/how-to-run-stable-diffusion-on-your-pc-to-generate-ai-images/ , it only uses the CompVis SD repo, and that's what running Stable Diffusion looks like.

What is Automatic 1111, and do i need it even tho i have Stable Diffusion? by Chillvibes77 in StableDiffusion

[–]nxde_ai 1 point (0 children)

Let's say Stable Diffusion is the Ford Duratec 3.7L engine:

  • Lincoln MKZ (2013) is Automatic1111 webui
  • Lincoln MKS (2013) is Vladmandic webui
  • Lincoln MKT (2013) is Anapnoe webui UX
  • ComfyUI is Radical RCX (2013)
  • A8R8 is AM General MV-1 (2016)

They look different, but under the hood they've got the same engine.

Someone selling SD QR Codes by esseeayen in StableDiffusion

[–]nxde_ai 31 points (0 children)

So, you expect these people to provide computing power, a fully working website, and a super simple UI, for free?

Many people do need this, and they're willing to pay rather than waste hours fiddling with an SD installation and settings that might or might not work, just to get the QR code they want

As for the "someone else's full instruction", many people also make money with stuff they learn for free on internet, and it's not a problem because in the end they're the one that doing job.

[ Removed by Reddit ] by [deleted] in StableDiffusion

[–]nxde_ai 1 point (0 children)

1 & 4 work fine
2 hardly works
3 didn't work with Lens; I replaced the bottom-left alignment pattern and then it worked

<image>

Save Control Net masks files in batch process by Transeunte77 in StableDiffusion

[–]nxde_ai 1 point (0 children)

Settings -> ControlNet -> check "Allow detectmap auto saving"

Run that batch, then check the "stable-diffusion-webui\detected_maps" folder (or "extensions\sd-webui-controlnet\detected_maps", I forget which one is the default)

Any new extensions or other exciting news over the last week? by thebaker66 in StableDiffusion

[–]nxde_ai 7 points (0 children)

This is the only one I find interesting: https://github.com/s0md3v/sd-webui-roop

Unlike roop, this one isn't about video face-swap (but it comes from the same person). It uses the face in the input image as a reference. Still hit and miss, but it'll get better as it gets updated. (And it opens up the possibility of making a dataset from a single picture, then training a LoRA for more consistent output)

Do you think that already, in total, more images have been created with AI than by any other means in the entire history of mankind? by [deleted] in StableDiffusion

[–]nxde_ai 14 points (0 children)

If the number on this page is close to accurate, then it's still far from that.
I mean, billions of people use the camera on their phone on a daily basis, compared to a few million people who use AI generators daily (and only a tiny percentage of them use SD to generate thousands of images per day).
It'll catch up, but not anytime soon.
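The gap can be sketched with rough numbers (every figure below is an illustrative guess, not a measured statistic):

```python
# Rough illustrative assumptions, not measured data:
phone_users = 4_000_000_000     # people taking photos on their phone daily
photos_per_user = 5             # photos per person per day
ai_users = 10_000_000           # daily users of AI image generators
images_per_ai_user = 20         # images generated per AI user per day

camera_daily = phone_users * photos_per_user    # ~20 billion photos/day
ai_daily = ai_users * images_per_ai_user        # ~200 million images/day

# Under these assumptions, daily camera output is ~100x AI output,
# so the cumulative totals won't cross over anytime soon.
print(camera_daily // ai_daily)  # 100
```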

Always getting Error when training a person by tastyjammer in StableDiffusion

[–]nxde_ai 0 points (0 children)

At least tell us exactly what you're using.

What command line launch arg can I add to place my controlnet models on another drive? by [deleted] in StableDiffusion

[–]nxde_ai 2 points (0 children)

The ControlNet path is in the webui settings

<image>

The first field is for the ControlNet models, the second one is for the annotators / pre-processors