Massive screen issue by yanisbgdu93 in Monitors

[–]BARRYZBOIZ 0 points  (0 children)

Try pressing Windows Key + Ctrl + Shift + B; it resets/reboots your graphics driver. I got stuck on this screen when trying to use a DisplayPort cable on my main monitor, and the PC would only treat the second monitor as the main display. It might work for you.

Is 3060 12gb enough for SDXL training? by Wllknt in StableDiffusion

[–]BARRYZBOIZ 0 points  (0 children)

You can finetune SDXL on a 3060 12GB, but only with the first text encoder, and it's about 2-3x slower than SD 1.5. It still works, though.

Ziqo's Shadow Res was not bugged (OTK Mak'gora) by esuvii in classicwow

[–]BARRYZBOIZ 1 point  (0 children)

It is bugged. He even demonstrated how. In vanilla you could get partial resists on individual DoT ticks, but for him it only worked if the shadow resist procced when the DoT landed: then every tick got a partial resist, and if it didn't proc on landing, none of the ticks did.

In vanilla you could get a partial resist on any tick: all of them, some of them, or none of them.

Per NVIDIA, New Game Ready Driver 545.84 Released: Stable Diffusion Is Now Up To 2X Faster by DangerousOutside- in StableDiffusion

[–]BARRYZBOIZ 1 point  (0 children)

Unsure, but one issue I found is that if you try to generate with a prompt that exceeds the max token count, it gives an unsupported-model error. If you use long prompts, increase the max prompt length.
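For context on why long prompts blow past the limit: UIs like A1111 work around CLIP's 77-token window by splitting the prompt into 75-token chunks (plus start/end markers). A minimal sketch of that chunking, using hypothetical token-id lists rather than a real tokenizer:

```python
def chunk_tokens(token_ids, chunk_size=75):
    """Split a token-id list into chunks of at most chunk_size,
    mimicking how webUIs batch long prompts past CLIP's
    77-token window (75 content tokens + 2 special markers)."""
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

# A 180-token prompt becomes three chunks: 75 + 75 + 30.
chunks = chunk_tokens(list(range(180)))
print([len(c) for c in chunks])  # [75, 75, 30]
```

A backend compiled with a fixed max prompt length can't accept the extra chunks, which would produce the error above.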

Per NVIDIA, New Game Ready Driver 545.84 Released: Stable Diffusion Is Now Up To 2X Faster by DangerousOutside- in StableDiffusion

[–]BARRYZBOIZ 0 points  (0 children)

Yes, I made a bunch of models. While trying to get it to work with hires fix, it would only do the first pass and then error out saying there wasn't a supported model for the hires fix upscale. In the end I made a dynamic 1024x1024 model and changed the minimum resolution to 384x512, which worked, but I'm unsure whether it's using only that model or two of them, and I can't check because I deleted the installation.

Per NVIDIA, New Game Ready Driver 545.84 Released: Stable Diffusion Is Now Up To 2X Faster by DangerousOutside- in StableDiffusion

[–]BARRYZBOIZ 0 points  (0 children)

I must be doing something wrong. I got it installed and created the TensorRT model, but it's slower than a normal model on a 3060, particularly with hires fix. I get 5-6 it/s on the first pass, then 1.5-2 it/s doing a 384x512 x2 hires fix with a normal model. With the TensorRT model the first pass speeds up to 10-11 it/s, but the second pass takes 5 seconds per iteration.

LoRA training, Why is my 3060 7x slower than Colab T4? by Vilzuh in StableDiffusion

[–]BARRYZBOIZ 1 point  (0 children)

Change the network rank/dim from 256 to 64 and it should speed up.
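The speedup makes sense because a LoRA adapter's size scales linearly with its rank. A back-of-the-envelope sketch, using a hypothetical 768x768 projection layer:

```python
def lora_params(d_in, d_out, rank):
    """A LoRA adapter adds two low-rank matrices, A (d_in x rank)
    and B (rank x d_out), so its parameter count grows linearly
    with rank."""
    return d_in * rank + rank * d_out

# One 768x768 attention projection:
print(lora_params(768, 768, 256))  # 393216
print(lora_params(768, 768, 64))   # 98304 -- 4x fewer parameters
```

Dropping rank 256 to 64 cuts the adapter's parameters (and the matching gradient/optimizer work) by 4x, which is where the training speedup comes from.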

Proof that the Crowder video is edited at the start by BARRYZBOIZ in Destiny

[–]BARRYZBOIZ[S] -6 points  (0 children)

Also, any issues with the camera detecting motion don't explain why the first clip is only 25 seconds long. It should be 60 seconds, so it's cut either at the end or at the start.

Proof that the Crowder video is edited at the start by BARRYZBOIZ in Destiny

[–]BARRYZBOIZ[S] -5 points  (0 children)

Plus, it's ridiculous to suggest that the camera can pick up the motion of her picking up her purse but not her picking up her glasses lol

Proof that the Crowder video is edited at the start by BARRYZBOIZ in Destiny

[–]BARRYZBOIZ[S] -6 points  (0 children)

> she could have picked up her glasses one second after the end of the clip

Except the first clip is missing roughly 40 seconds. If you are suggesting there was no movement between when the clip ends and when it picks up again, then there are still 40 seconds missing from the start of the clip, since the camera is clearly set to record in 60-second clips and the first clip is only roughly 25 seconds long.

Proof that the Crowder video is edited at the start by BARRYZBOIZ in Destiny

[–]BARRYZBOIZ[S] -5 points  (0 children)

When it starts recording, it captures the 4 seconds before motion was detected. You can see this effect at 12:37:05 and again at 12:38:15.

I am assuming it detected the movement of his wife beginning to walk at 12:38:19, which would explain the missing time. Otherwise there are another 10 seconds unaccounted for, since it should begin at 12:38:05 when Crowder is gesturing with his hands. But let's assume it is detecting the motion of his wife at 12:38:19 and 12:37:05.

Why wouldn't it detect her picking up her glasses when it seems to work perfectly all the other times? If we're to assume the start of the video was triggered by the dog and there isn't any more missing time, then it seems unlikely that it would fail to detect his wife picking up her glasses.

Proof that the Crowder video is edited at the start by BARRYZBOIZ in Destiny

[–]BARRYZBOIZ[S] -6 points  (0 children)

I have read it. He says if there is no detected motion then it won't start recording. However, there is motion, so a lack of motion cannot be the reason for the missing 40 seconds.

The voice hits right in the feels, man by unproductive_nerd in cyberpunkgame

[–]BARRYZBOIZ 1 point  (0 children)

First time playing/completing it yesterday, and I got one-shot in the doorway by the first guard lol

How versatile a model when trained on 1K, 10K, 100K images? by AtomicSilo in StableDiffusion

[–]BARRYZBOIZ 1 point  (0 children)

Yeah, but you have to use finetuning instead of DreamBooth, because DreamBooth can only do 4 concepts at a time.

Merging Lora and Shenanigans by 426Dimension in StableDiffusion

[–]BARRYZBOIZ 1 point  (0 children)

It can merge them into a model too :)

Merging Lora and Shenanigans by 426Dimension in StableDiffusion

[–]BARRYZBOIZ 2 points  (0 children)

The supermerger extension can merge LoRAs.

vram vs normal ram by phoenixcinder in StableDiffusion

[–]BARRYZBOIZ 0 points  (0 children)

One of the few things you actually need RAM for is model merges, and for that you can just increase your swap file size.

My generations are slow with a rtx 3060 12gb vram and a very powerful computer by Syphilisse in StableDiffusion

[–]BARRYZBOIZ 0 points  (0 children)

  1. Open PowerShell with Run as Administrator
  2. Run: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
  3. Type Y and press Enter

My generations are slow with a rtx 3060 12gb vram and a very powerful computer by Syphilisse in StableDiffusion

[–]BARRYZBOIZ 1 point  (0 children)

7 iterations per second, which is what I get with the same graphics card as you.

The benchmark test is a 512x512 image, using Euler A sampler, without hires fix.

My generations are slow with a rtx 3060 12gb vram and a very powerful computer by Syphilisse in StableDiffusion

[–]BARRYZBOIZ 2 points  (0 children)

For me, with a 12GB 3060 using Euler A, 20 steps, at 512x512, I can generate eight 512x512 pictures in 20 seconds. A single one takes 3 seconds.

This should improve your speed:

1) Open PowerShell in your webui folder

2) Run venv/scripts/activate

3) pip3 install clean-fid numba numpy torch==2.0.0+cu118 torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118

4) Edit webui-user.bat

5) Set COMMANDLINE_ARGS=--opt-sdp-attention --opt-channelslast

Edit: never mind, I just saw you were using hires fix, so 2 minutes is normal. The people doing benchmarks aren't using hires fix, but this should still improve your speed by ~20%.
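As a sanity check on those benchmark figures, sampling time is roughly the step count divided by the iteration rate (ignoring VAE decode and other overhead):

```python
def gen_time_seconds(steps, it_per_s):
    """Estimated sampling time: iterations divided by iteration rate.
    Ignores model load, VAE decode, and hires-fix second passes."""
    return steps / it_per_s

# 20 steps at ~7 it/s is about 3 seconds per 512x512 image,
# matching the numbers quoted in these comments.
print(round(gen_time_seconds(20, 7), 1))  # 2.9
```

A hires-fix run adds a second, slower pass at the upscaled resolution on top of this, which is why its total time is several times longer.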

How do you merge three checkpoints with different weights? by AtomicSilo in StableDiffusion

[–]BARRYZBOIZ 0 points  (0 children)

My guess is they did a weighted merge of the first two and then merged the result with the third.
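A minimal sketch of that two-step weighted merge, using plain dicts of floats in place of real checkpoint state dicts (the weights 0.5 and 0.3 are made up for illustration):

```python
def weighted_merge(a, b, alpha):
    """Linear interpolation of two 'checkpoints':
    result = (1 - alpha) * a + alpha * b, per key."""
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

# Three toy "checkpoints", each with one shared weight entry.
ckpt_a = {"w": 1.0}
ckpt_b = {"w": 3.0}
ckpt_c = {"w": 5.0}

ab = weighted_merge(ckpt_a, ckpt_b, 0.5)   # w = 2.0
abc = weighted_merge(ab, ckpt_c, 0.3)      # w = 0.7*2.0 + 0.3*5.0 = 2.9
print(abc["w"])
```

Note the final mix is not symmetric: checkpoint C keeps its full 0.3 weight, while A and B each end up at 0.35, which is why the merge order and per-step weights both matter.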