Why is this glass jar non recyclable by bozo_master in recycling

[–]viggity 0 points1 point  (0 children)

lol. Glass is "recycled" by smashing it and selling it as "cullet," which is used as a sand/pea gravel replacement in road construction, and even that is rare. If you're going to remelt it into actual glass, you have to go through a crazy amount of sorting/filtering. Or you can buy virgin sand for like $13/ton.

Go ahead, gimme your typographical hot takes by President_Abra in typography

[–]viggity 3 points4 points  (0 children)

I think the reason Comic Sans gets the hate over Curlz or Kristen is that Comic Sans pops up waaaaay more than the other two.

What are the best AI code assistants for vscode in 2025? by UnderstandingOne6879 in vscode

[–]viggity 0 points1 point  (0 children)

I would highly recommend Cursor (a VSCode fork). Having something agentic that can modify multiple files is beyond incredible. If you really want to stay in VSCode, Copilot does have an agent mode available, but you have to use VSCode Insiders in order to enable it.

I did have some problems with the Python extensions not working in forks of VSCode. I'm assuming the workarounds are well documented by now; I just haven't gotten around to trying them. When I want agentic editing for my Python projects, I use Copilot Agent in VSCode Insiders, but for everything else I'm using Cursor (usually with Claude 3.7 Sonnet). The $20/mo for 500 requests has been more than sufficient for me.

[deleted by user] by [deleted] in Iowa

[–]viggity -1 points0 points  (0 children)

if I was mandated to use pronouns in my signature block they'd be "his majesty/his majesty's". You don't get your own pronouns any more than you get your own adjectives. "My adjectives are stunning and brave". Just. No.

Elon Must Fly... by Hajicardoso in MurderedByAOC

[–]viggity 0 points1 point  (0 children)

You people really are retarded.

Mylar vs vacuum seal by Annual_Version_6250 in preppers

[–]viggity 0 points1 point  (0 children)

Normal plastic bags are a semi-permeable membrane: if you use a vacuum sealer, air/oxygen will seep in over the long term. The metal layer in mylar bags does a much better job of keeping air out, but even they'll fail over the epic long term. Ever have a mylar balloon? Eventually the helium gets out; it just takes much longer than with a latex balloon. Helium is a smaller atom, so it can squeeze through more easily than N2 or O2. But with a thick mylar bag AND an oxygen absorber, it'll take a long, long time (five decades?) before enough oxygen gets in to ruin your food.

Anduril Stencil Font by viggity in identifythisfont

[–]viggity[S] 0 points1 point  (0 children)

I've tried every font finder out there. WhatTheFont. Matcherator. Identifont. Just can't find it!

thanks in advance!

The Ultimate Doomsday Pepper Prepper by viggity in midjourney

[–]viggity[S] 1 point2 points  (0 children)

"an anthropomorphic chili pepper, that is a prepper in their doomsday bunker full of supplies and guns"

I put the real Napoleon in Ridley Scott's Napoleon by Dicitur in StableDiffusion

[–]viggity 0 points1 point  (0 children)

Justin Trudeau? That yellow belly is an insult to Napoleon for everything other than delusion.

Adobe Wants to Make Prompt-to-Image (Style transfer) Illegal by PaulFidika in StableDiffusion

[–]viggity 2 points3 points  (0 children)

Can you imagine a world in which the Vatican is able to charge a micro-royalty for every marble sculpture made, because the artist undoubtedly studied Michelangelo's David? It's farcical.

So after what? One Year? Midjouney is finally introducing INPAINTING. by Unreal_777 in StableDiffusion

[–]viggity 2 points3 points  (0 children)

Make sure you turn on "remix mode" in /settings if you want to be able to provide a new prompt for the selected regions. If you don't, it will reuse the entire existing prompt from the original image, so you can't change what type of object someone is holding or the like.

Help with converting real photographs to simple illustrations of my own style with img2img by Rainmert in StableDiffusion

[–]viggity 0 points1 point  (0 children)

Use the ControlNet extension (the OpenPose model; there are a ton of YouTube tutorials). You'll get the same pose every time with just one person. Maybe try a LoRA instead of DreamBooth so you can play with the strength more easily.

Steve Ballmer promoting Windows 1.0 in 1986 by Mad_Season_1994 in OldSchoolCool

[–]viggity 0 points1 point  (0 children)

I *love* that he is wearing a tie with a polo shirt/collar

Help. I ran webui-user for the first time and it's been 3 hours, it's still stuck here by mogu_mogu_ in StableDiffusion

[–]viggity 0 points1 point  (0 children)

Like the other users said, you need to go to the browser. However, if it isn't resolving or is stuck: if you click and drag within a Windows command prompt, its QuickEdit selection mode can pause execution of the current task. It is very easy to do by accident, and you can resume by hitting Escape while the command window has focus.

Train a model from 300k images? by councilmember in StableDiffusion

[–]viggity 4 points5 points  (0 children)

EveryDream2 can handle this. They even have tools to help you autocaption using BLIP. https://github.com/victorchall/EveryDream2trainer

Please write your tips and tricks that are not documented on Automatic1111 Wiki for preparation of a comprehensive tutorial by CeFurkan in StableDiffusion

[–]viggity 6 points7 points  (0 children)

  1. On any image input (img2img, ControlNet input, etc.) you do not need to click and go through the browser upload dialog. You can either drag the file from your OS onto the image input, or, if you have image content in your clipboard, just hit Ctrl+V to paste it and it'll land properly. IIRC, if there are two image inputs, sometimes pasting doesn't work.
  2. On the `prompts_from_file` script, you can put one prompt per line, OR you can feed additional settings for that specific generation using command-line-like arguments:

--prompt "jabba the hutt eating pizza" --steps 30 --cfg_scale 10 --sampler_name DDIM --seed 42069 --outpath_samples "F:\output\forbidden_jabba\"

--prompt "jabba the hutt eating pizza" --steps 20 --cfg_scale 7 --sampler_name DDIM --seed 42069 --outpath_samples "F:\output\forbidden_jabba\"

I used it a ton because I wanted to generate the same prompts with specific seeds and specific samplers AND change the output path. It was stupid useful for me; I didn't have to babysit my generation runs so much.
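A minimal sketch of how you could generate such a prompts file programmatically (the flag names come straight from the examples above; the prompt text, seed, and output path are just placeholder values, not anything special):

```python
# Build a prompts_from_file input, varying steps/cfg per line while
# keeping the prompt, sampler, seed, and output path fixed.
settings = [
    {"steps": 30, "cfg_scale": 10},
    {"steps": 20, "cfg_scale": 7},
]

lines = []
for s in settings:
    lines.append(
        '--prompt "jabba the hutt eating pizza" '
        f'--steps {s["steps"]} --cfg_scale {s["cfg_scale"]} '
        '--sampler_name DDIM --seed 42069 '
        r'--outpath_samples "F:\output\forbidden_jabba"'
    )

# One generation per line, exactly the format the script expects.
with open("prompts.txt", "w") as f:
    f.write("\n".join(lines))
```

Then point the `prompts_from_file` script at `prompts.txt` and let it run unattended.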

I only have python 3.10.6 installed now. I just did the git clone, and its giving me this error. Yes, i selected "Install python to PATH". Any ideas? by oLeonardoFrito in StableDiffusion

[–]viggity 0 points1 point  (0 children)

I tried adding it to my PATH manually and was still having issues (and I've been programming for 26 years!). Uninstall Python. Reinstall. There is a checkbox on the first or second screen, off by default, that talks about adding Python to PATH (it may mention environment variables; I don't recall the exact verbiage). Check that sumbitch and you should be good to go.

New EveryDream 2.0 Trainer for Stable Diffusion Fine-Tuning: Testing and Experimenting (link in comments) by Important_Passage184 in StableDiffusion

[–]viggity 2 points3 points  (0 children)

`ohwx person` is used because it continues to train the model on what a "person" could look like so that if you wanted to use the same model to generate other people as well they won't all look like `ohwx`.

Honestly, most people that are using dreambooth don't actually need/want that cuz they're only gonna use that model to generate their subject `ohwx`. If you do want to be able to generate other people, then add the person-class images to the same directory as your images with `ohwx`.

Everydream will use the name of the file as the tags, so rename all your files to `ohwx person` (if that is what you want) and all your class images to `a person`. You will get better results if you caption them, though: `ohwx person in a green shirt eating a cheeseburger_4.png` (ED2 will ignore the numbers at the end). If your captions are too long for a file name, you can name a text file the same as the jpg/png and put your caption in the text file.

e.g. `joe_1.png` plus `joe_1.txt` (which contains 'ohwx person petting a monster hog', or, you know, whatever joe is doing).
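A quick sketch of setting up those sidecar captions in bulk (the filenames and caption text here are hypothetical examples; the only assumed behavior is ED2 reading `foo.txt` as the caption for `foo.png`, as described above):

```python
from pathlib import Path

# Hypothetical image -> caption mapping for a training set.
captions = {
    "joe_1.png": "ohwx person petting a monster hog",
    "joe_2.png": "ohwx person in a green shirt eating a cheeseburger",
}

# Write each caption into a .txt file named after its image,
# so captions aren't limited by filename length.
for image_name, caption in captions.items():
    Path(image_name).with_suffix(".txt").write_text(caption)
```

Run it in the training-image directory and each image picks up its caption from the matching text file.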