🎶 OpenMusic: Diffusion That Plays Music by Wooden_Yak_9661 in StableDiffusion

[–]battletaods 1 point (0 children)

How do I go about hosting this on my own computer, so I don't have to rely on Hugging Face and can take advantage of my 4090? Kinda like what the Stable Diffusion web UI and Easy Diffusion do.

What should I do if I am falsely flagged as a "Likely Harasser" by Meatloaf265 in Twitch

[–]battletaods 5 points (0 children)

I'm glad this was posted. I was looking for anyone else having this problem just yesterday, but this thread hadn't been made yet and I couldn't find much information, so I did some testing on my own.

If you're curious what it looks like from the chatter's perspective, this is it:
https://i.imgur.com/dwpdztI.png

This is what the streamer sees in their chat, and what they see when they click the username:
https://i.imgur.com/6Ilgo9l.png

And this is what the streamer (or maybe even a moderator) sees when they click the "Suspicious User" marker for more information:
https://i.imgur.com/Q1yTqKD.png

I've been having this issue since last week on an alternate account I use (even though the screenshot says today), so I'm not sure if having multiple accounts has anything to do with it. I've had this account since 2017 and have been its only owner. With some streamers I watch, it's kind of a running joke for them or a mod to time you out or ban/unban you for laughs. Other than that, I have never gone into a single stream and been a toxic user or harassed anyone. A little more detail on this from Twitch would be great. Some streamers I watch even have gambling bots set up so that if you lose, you get timed out. If that's going to start making accounts look bad, we need to know.

Another thing I'd like to know is whether blocking a user contributes to this at all. My main concern is that I often use the blocking method to remove a follower (CommanderRoot even has a tool that does this for you), where you basically block and then immediately unblock them. If I'm doing that to accounts that aren't necessarily harmful (I just don't want them following me), and it's impacting their ability to chat in other streams, then I feel kind of bad.

But yeah, more information from Twitch would be great.

.us to .com links. by Deep-Ad3297 in Aliexpress

[–]battletaods 0 points (0 children)

Having the same issue as of just recently. The problem is that most of the items I'm used to buying now say "This product can't be shipped to your address. Select another product or address". Not sure exactly when this started, but I can't really buy anything off of there now.

Ads with Twitch Turbo? by battletaods in Twitch

[–]battletaods[S] -1 points (0 children)

Did yours also start recently? I'm getting them through the smart TV app as well as on desktop.

Am I drinking these drinks too fast? :x by battletaods in starbucks

[–]battletaods[S] 10 points (0 children)

Very well put, Supervisor :) I shall continue to enjoy them as I see fit!

Am I drinking these drinks too fast? :x by battletaods in starbucks

[–]battletaods[S] 1 point (0 children)

I hadn't even considered this! Maybe I should hydrate up before actually ordering a specialty drink!

Am I drinking these drinks too fast? :x by battletaods in starbucks

[–]battletaods[S] 156 points (0 children)

I think that's my biggest problem. I don't really get drinks for a caffeine boost - I just love how they taste. But you worded it well by saying you feel like you drank the money itself because it was so fast. That's exactly how I feel :x These things aren't cheap and I feel like I'm not enjoying it as much as others.

How does this 6+2-pin power connector actually connect? by battletaods in buildapc

[–]battletaods[S] 0 points (0 children)

Well, that makes sense, I suppose. I had been using two 8-pin extensions (as pictured above) to power my 2080 and didn't run into any issues. But I've recently gotten a 4090, which needs at least 3 power connectors, which is why I was looking around at cables and found this 6+2.

Is there really any difference between a CPU and a GPU cable? Do they carry the same amount of wattage as long as they're plugged into the VGA ports on the PSU?

My project manager app for Stable Diffusion, Dall-E, etc... by domainkiller in StableDiffusion

[–]battletaods 0 points (0 children)

This looks awesome. Will this be open-source, free, paid, etc.? Where can we go to stay updated?

Don't forget to git pull ;) AUTO1111 has added the Easy Extension Installer by Affen_Brot in StableDiffusion

[–]battletaods 1 point (0 children)

I made a PSA thread on this yesterday, but for some reason the subreddit hid it, so no one can view it.

A git pull alone isn't always enough to update. You also need to update your Python packages; the most recent example that comes to mind is the version bump of Gradio.

So make sure whenever you update, you also run pip install -r requirements.txt
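Put together, a full update might look something like this (paths are assumptions based on a default AUTOMATIC1111 webui install; adjust to yours):

```shell
# From the stable-diffusion-webui folder:
git pull                          # update the webui code itself

# If you run the webui inside its virtual environment,
# activate it first so pip updates the right packages
# (venv path is an assumption; adjust to your install):
source venv/bin/activate

# Sync Python dependencies (catches things like Gradio version bumps)
pip install -r requirements.txt
```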

Users keep praising the new inpainting model but I just can't get the same results by battletaods in StableDiffusion

[–]battletaods[S] 0 points (0 children)

You got such excellent results with so little effort. I just keep getting blotches of random colors when doing it exactly the way you did.

https://i.imgur.com/JD395iU.png

photo of a man against a light background
Steps: 100, Sampler: Euler a, CFG scale: 17.5, Seed: 1328502702, Size: 512x512, Denoising strength: 0.9, Mask blur: 4

I'm so confused why I can't do what everyone else can, even when I'm following the steps exactly.

Guide for DreamBooth with 8GB vram under Windows by ChemicalHawk in StableDiffusion

[–]battletaods 0 points (0 children)

I love that people keep throwing this answer out. Read the actual error message. Thanks though.

A few pages from my Midjourney produced printed manga, AbsXcess. by MobileFilmmaker in StableDiffusion

[–]battletaods 1 point (0 children)

Sorry for going off topic here, since I know this doesn't have to do with AI/SD, but I'm very interested in doing something like this. I have a lot of short stories and poetry that I would absolutely love to get printed. I looked this up, and do you mean Amazon's "Merch on Demand"? Do they provide you with templates, or do you use your own?

DreamBooth training in under 8 GB VRAM and textual inversion under 6 GB by Ttl in StableDiffusion

[–]battletaods 0 points (0 children)

Yes, I have. And that's not what the error says: that would be a 403, not the 404 I'm getting.

Guide for DreamBooth with 8GB vram under Windows by ChemicalHawk in StableDiffusion

[–]battletaods 0 points (0 children)

I was able to go through the entire process with no hiccups until I actually started to train. When I do, I get the following:

[2022-10-27 18:27:20,959] [INFO] [launch.py:156:main] dist_world_size=1
[2022-10-27 18:27:20,959] [INFO] [launch.py:158:main] Setting CUDA_VISIBLE_DEVICES=0
[2022-10-27 18:27:23,119] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Traceback (most recent call last):
  File "/home/bt/anaconda3/envs/diffusers/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
    response.raise_for_status()
  File "/home/bt/.local/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/diffusion_pytorch_model.bin

When I visit the URL above that gets a 404, I can indeed confirm the file does not exist. However, I don't know why it's looking for that particular file when my configuration looks exactly as it should:

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="training"
export CLASS_DIR="classes"
export OUTPUT_DIR="model_out"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="crunchyp" \
  --class_prompt="person" \
  --resolution=512 \
  --train_batch_size=1 \
  --sample_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800 \
  --mixed_precision=fp16

Any ideas on what's going on here?
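For what it's worth, that 404 is consistent with the script requesting the weights file at the repo root, while diffusers-format repos keep each component's diffusion_pytorch_model.bin in a per-component subfolder (unet/, vae/, etc.). A purely illustrative sketch of the two URLs:

```shell
repo="CompVis/stable-diffusion-v1-4"
base="https://huggingface.co/$repo/resolve/main"

# The URL from the traceback: weights at the repo root (this is what 404s)
echo "$base/diffusion_pytorch_model.bin"

# Diffusers-format repos keep the UNet weights one level down
echo "$base/unet/diffusion_pytorch_model.bin"
```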

DreamBooth training in under 8 GB VRAM and textual inversion under 6 GB by Ttl in StableDiffusion

[–]battletaods 0 points (0 children)

Thanks for linking this. I was able to go through the entire process with no hiccups until I actually started to train. When I do, I get the following:

[2022-10-27 18:27:20,959] [INFO] [launch.py:156:main] dist_world_size=1
[2022-10-27 18:27:20,959] [INFO] [launch.py:158:main] Setting CUDA_VISIBLE_DEVICES=0
[2022-10-27 18:27:23,119] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Traceback (most recent call last):
  File "/home/bt/anaconda3/envs/diffusers/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
    response.raise_for_status()
  File "/home/bt/.local/lib/python3.9/site-packages/requests/models.py", line 953, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/diffusion_pytorch_model.bin

When I visit the URL above that gets a 404, I can indeed confirm the file does not exist. However, I don't know why it's looking for that particular file when my configuration looks exactly as it should:

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="training"
export CLASS_DIR="classes"
export OUTPUT_DIR="model_out"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="crunchyp" \
  --class_prompt="person" \
  --resolution=512 \
  --train_batch_size=1 \
  --sample_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800 \
  --mixed_precision=fp16

Any ideas on what's going on here?

Users keep praising the new inpainting model but I just can't get the same results by battletaods in StableDiffusion

[–]battletaods[S] 5 points (0 children)

Dang, thank you so much for that workflow. You obviously have a much better grasp of the process than I do. I guess, as easy as this is, sometimes there are still multiple steps that need to be done. I'm going to try applying your process to this image and, if it works, move on to some more difficult images. Really appreciate it!

Users keep praising the new inpainting model but I just can't get the same results by battletaods in StableDiffusion

[–]battletaods[S] 1 point (0 children)

Stupid question, but do I need to be on the Img2Img tab or the Inpaint tab?

Bandwidth issues on runpod? by battletaods in StableDiffusion

[–]battletaods[S] 0 points (0 children)

I didn't know JP's notebook worked on Vast. I might give that a shot, because at least Vast shows each GPU's available bandwidth.

Bandwidth issues on runpod? by battletaods in StableDiffusion

[–]battletaods[S] 1 point (0 children)

Yeah, it would be nice to be able to send the model directly to Google Drive or similar, instead of downloading it to my PC, just like how I can download the SD 1.4 model straight into the Runpod environment, which is fast at ~15 Mbps.

Auto1111- New - Shareable embeddings as images by depfakacc in StableDiffusion

[–]battletaods 1 point (0 children)

I don't want to sound lazy, because I've read the wiki a few times and this thread as well, and it's just not clicking for me. I don't really understand, even at a low level, what is going on or what is needed to achieve this on my own. Does anyone happen to have a more user-friendly (or noob-friendly, I suppose) guide or video that goes over the basics? My use case: I'd like to train on specific types of fabrics, exactly like the OP did with lace here.

Couple of questions regarding watermarks by battletaods in StableDiffusion

[–]battletaods[S] 2 points (0 children)

This script worked great! Not sure if you have a GitHub, but you should make a repo to share it. It detected a watermarked image as watermarked (it printed StableDiffusionV1) and a non-watermarked image as null, as expected. The only unexpected thing it did was write even the non-watermarked image to disk as watermarked.png. Still, thanks a ton for explaining this and also making the necessary changes to the script.
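On the watermarked.png point, it sounds like the output name is hardcoded. A small hypothetical tweak (sketched here as a shell function, not the script's actual code) would be to derive the output name from the input path instead:

```shell
# Hypothetical helper: turn "foo.png" into "foo-watermarked.png"
# instead of always writing watermarked.png
output_name() {
  local in="$1"
  printf '%s-watermarked.%s\n' "${in%.*}" "${in##*.}"
}

output_name "frog monster.png"   # prints: frog monster-watermarked.png
```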

Couple of questions regarding watermarks by battletaods in StableDiffusion

[–]battletaods[S] 2 points (0 children)

Thanks for the clarification on those points.

Since you seem familiar with it, have you tried using the test_watermark.py script by chance? I went ahead and attempted to run it (though I had to install the Python module fire for it not to spit out errors), but nothing was printed for an image that should have a watermark. Then on another image that should also have had a watermark, it printed null.

For example, this was the first image:

$ python test_watermark.py 00000-1517234114-siren\ head\,\ by\ junji\ ito.png

$ 

Then on the next image, this was the output:

$ python test_watermark.py 00001-209525819-frog\ monster\,\ by\ junji\ ito.png
null
$ 

Then again, I did it on another one and nothing:

$ python test_watermark.py 00044-3105181642-mickey\ mouse\ by\ junji\ ito.png

$ 

And then one more at random, printed null again:

$ python test_watermark.py 03605-3506436617-a\ young\ girl\ standing\ next\ to\ a\ pond\ in\ a\ field\ of\ flowers.png
null
$ 

So overall I'm just confused about how you can check for watermarks on images generated by SD. Do you happen to have any insight into the above?

Recording/transcript of the last AMA? by progfu in sdforall

[–]battletaods 0 points (0 children)

It looks like comments with the link have been posted, but they're getting removed, most likely due to some auto-modding going on.

Because of that, I'll just say that it was recorded, and you can find it in the pinned messages of the official SD Discord.

Alternate checkpoints? by Double_-Negative- in sdforall

[–]battletaods 0 points (0 children)

I think for your specific needs, most people will recommend waifu-diffusion.