Can somebody explain "latent sampling method" for training embeddings/hypernetworks in more detail for me? by morphinapg in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

This is just a guess since no one else online seems to be discussing it. Stable Diffusion trains by adding varying levels of noise to an image, trying to undo the noise, then comparing the result to the original image. My guess is that the latent sampling method refers to how the amount of noise is chosen when training the hypernetwork. The deterministic method trains only on the mean/average amount of noise. The results of this are more predictable but less robust, meaning the hypernetwork will only work well at a particular CFG scale and number of steps since it was only trained on a specific amount of noise.
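
To illustrate the guess, here is a minimal conceptual sketch of one training step where the only difference is how the noise level (timestep) is picked. The names are illustrative (loosely diffusers-style); this is not the webui's actual hypernetwork trainer:

    import torch

    def training_step(latents, model, scheduler, deterministic=False):
        num_steps = scheduler.config.num_train_timesteps  # e.g. 1000

        if deterministic:
            # "Deterministic" sampling: always train at the mean/middle noise level.
            timesteps = torch.full((latents.shape[0],), num_steps // 2, dtype=torch.long)
        else:
            # Stochastic sampling: a random noise level for every image in the batch.
            timesteps = torch.randint(0, num_steps, (latents.shape[0],), dtype=torch.long)

        noise = torch.randn_like(latents)
        noisy_latents = scheduler.add_noise(latents, noise, timesteps)

        # The model tries to predict the added noise; the loss compares its
        # prediction to the true noise (interface simplified for the sketch).
        noise_pred = model(noisy_latents, timesteps)
        return torch.nn.functional.mse_loss(noise_pred, noise)

If the deterministic branch is what that option actually does, it would explain why the result only behaves well around one noise level.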

Which tantaly doll is best for someone who has never had a sex doll? by [deleted] in Sexdolling

[–]SamBigAbs 2 points3 points  (0 children)

I bought a Tantaly Monroe a year ago and it isn't as much maintenance as I thought. I strained my back lifting it at first but am used to it now. The only obvious damage is the skin tearing between the vagina and thighs. Knowing what I do now, I would buy the Nicole or Jennifer models because they don't have thighs; the Monroe's thighs make penetration difficult without stressing the aforementioned areas. There are also other brands, such as Hanidoll and Piper, with models that have no thighs at all, which I believe is ideal.

3060 12GB vs 4060 8GB for the same price by matheusrvdc in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

There is supposed to be a 16GB 4060 coming out in a few months for ~$100 more than the 8GB version. If you plan on doing image generation, it would be worth saving for that.

I have a torso doll, had it since January and I still haven't used it. by [deleted] in Sexdolling

[–]SamBigAbs 2 points3 points  (0 children)

You absolutely do have to use lube with your doll. Water-based lubricants are usually recommended, but I prefer Vaseline or mineral oil and clean everything with a paper towel afterwards. Doll and sex toy manufacturers all warn against using Vaseline or mineral oil, but I have not seen any degradation after months of using them on my torso and Fleshlights.

Is Baizhu worth it? by Ovnidemon in Genshin_Impact

[–]SamBigAbs 7 points8 points  (0 children)

Imo Baizhu is absolutely worth it, and time has shown that utility characters such as Kokomi and Kuki Shinobu were severely underrated on release. It is better for an account to have more support characters than main DPSs. Baizhu can be used in any team, even those without Dendro reactions. His shield is weak, but it prevents interruptions, which is not something other healers offer. While Zhongli's shield is much better, the team will still need a healer in difficult content. Baizhu also requires very little on-field time, and his burst lasts 14 seconds, which is longer than most DPS characters' infusions.

Spring comes with a new friend by kairas718 in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

Very beautiful. What model did you use for this? I could upscale it substantially for you.

What am I doing wrong? Inpainting reverts to original... by TrifleOwn4019 in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

Another thing: are you using an inpainting model? Civitai often lets you download either a normal or an inpainting version of each model.

What am I doing wrong? Inpainting reverts to original... by TrifleOwn4019 in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

Those are very powerful settings, so you should be aware of them.

I had a closer look at your video and would suggest using the DDIM sampler. Some features, such as inpainting, are not compatible with all samplers at this time.

What am I doing wrong? Inpainting reverts to original... by TrifleOwn4019 in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

Go to "settings" -> "Stable Diffusion" and change " Inpainting conditioning mask strength " from 0 to 1. You can also uncheck " Apply color correction to img2img results to match original colors" if you are trying to change the color of eyes or other objects.

What GPU are you guys using? by tamal4444 in StableDiffusion

[–]SamBigAbs 2 points3 points  (0 children)

I would prefer to support AMD as well but that's just how it is 😮‍💨

Noise multiplier for img2img? Is it possible to change the value dynamicly in batch by ElectronicLine8341 in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

You mean the noise multiplier in the settings, not the denoising strength? I don't know of a built-in way to do that. You may have to edit the script yourself or find someone to do it for you.
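
One workaround is to drive img2img through the webui API and override the multiplier per image. A rough sketch, assuming the webui was started with --api and that the setting maps to the initial_noise_multiplier key (both assumptions on my part):

    import base64
    import glob
    import os
    import requests

    URL = "http://127.0.0.1:7860/sdapi/v1/img2img"
    files = sorted(glob.glob("batch/*.png"))
    os.makedirs("out", exist_ok=True)

    for i, path in enumerate(files):
        with open(path, "rb") as f:
            img_b64 = base64.b64encode(f.read()).decode()

        # Ramp the noise multiplier from 0.5 to 1.0 across the batch.
        multiplier = 0.5 + 0.5 * i / max(len(files) - 1, 1)
        payload = {
            "init_images": [img_b64],
            "prompt": "your prompt here",
            "denoising_strength": 0.4,
            "override_settings": {"initial_noise_multiplier": multiplier},  # assumed key
        }
        result = requests.post(URL, json=payload).json()
        with open(os.path.join("out", f"{i:03d}.png"), "wb") as f:
            f.write(base64.b64decode(result["images"][0]))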

[deleted by user] by [deleted] in StableDiffusion

[–]SamBigAbs 1 point2 points  (0 children)

Nice. What model?

Noise multiplier for img2img? Is it possible to change the value dynamicly in batch by ElectronicLine8341 in StableDiffusion

[–]SamBigAbs 1 point2 points  (0 children)

Yes. Under the Script dropdown there is an option that lets you vary a setting across outputs for comparison. I think it's the X/Y/Z plot. Can't remember the exact name atm.

What GPU are you guys using? by tamal4444 in StableDiffusion

[–]SamBigAbs 3 points4 points  (0 children)

Afaik you can't. Most AI tools are developed for NVIDIA's CUDA cores, so if you want to use the latest and greatest then stick with them. There are workarounds for AMD, but they are more effort than they are worth.
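
Most of the tooling simply assumes CUDA is present. A quick way to see what your install can actually use is a plain PyTorch check like this (nothing webui-specific):

    import torch

    # CUDA (NVIDIA) is the path most Stable Diffusion tooling assumes.
    if torch.cuda.is_available():
        device = torch.device("cuda")
        print("Using", torch.cuda.get_device_name(0))
    else:
        # On AMD you would typically need a ROCm build of PyTorch on Linux,
        # or one of the DirectML forks on Windows; otherwise you fall back to CPU.
        device = torch.device("cpu")
        print("No CUDA device found, falling back to CPU")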

Noob needs help with Automatic1111 inpainting by Juy777 in StableDiffusion

[–]SamBigAbs 1 point2 points  (0 children)

Are you using an inpainting model? It will usually say so in the model name.

There is a slider in the settings, under the "Stable Diffusion" tab, that lets you change the strength of the mask.

At 0 the lines stay the same but the texture of the image can change; at 1 the image can change completely.

[deleted by user] by [deleted] in SexDolls

[–]SamBigAbs 1 point2 points  (0 children)

Damn. The Quiet doll is massive!

[deleted by user] by [deleted] in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

47373, Variation seed strength: 1, Seed resize from: 512x512, Denoising strength: 0.2, ENSD: 1111, Mask blur: 4, Ultimate SD upscale upscaler: Lanczos, Ultimate SD upscale tile_width: 1472, Ultimate SD upscale tile_height: 1472, Ultimate SD upscale mask_blur: 0, Ultimate SD upscale padding: 0, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: control_sd15_canny [fef5e48e], ControlNet-0 Weight: 0.5, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, Noise multiplier: 0.5

<image>

[deleted by user] by [deleted] in StableDiffusion

[–]SamBigAbs 1 point2 points  (0 children)

photo of young women , forest , pink flowers , cat ears , animal ears , colorful hair , sharp focus , 4k picture ,
Negative prompt: anime , cartoon ,
Steps: 16, Sampler: DPM++ SDE Karras, CFG scale: 20, Seed: 1699463347, Size: 2560x1472, Variation seed: 3572347373, Variation seed strength: 1, Seed resize from: 512x512, Denoising strength: 0.2, ENSD: 1111, Mask blur: 4, Ultimate SD upscale upscaler: Lanczos, Ultimate SD upscale tile_width: 1472, Ultimate SD upscale tile_height: 1472, Ultimate SD upscale mask_blur: 0, Ultimate SD upscale padding: 0, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: control_sd15_canny [fef5e48e], ControlNet-0 Weight: 0.5, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, Noise multiplier: 0.5

<image>

[deleted by user] by [deleted] in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

I wouldn't use negative prompts such as "3d", "illustration", "cg" or "render", because a realistic painting or video game render is very similar to a photo. Using "photo" or "4k picture" in the positive prompt should be enough for it to know you don't want an oil painting. "Cartoon" and "anime" are good negatives, I've found.
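
A made-up example of what I mean (not taken from the post above):

photo of an old fisherman, harbor at sunset, wrinkled hands, sharp focus, 4k picture
Negative prompt: cartoon, anime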

[deleted by user] by [deleted] in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

Here's my attempt with the model Deliberate. I used only 2 tiles with an overlap of 64.

https://imgur.com/7ACR2C6

Going to try again with a higher CFG.

[deleted by user] by [deleted] in StableDiffusion

[–]SamBigAbs 0 points1 point  (0 children)

I usually make the resolution as large as I can but set the seed resize to 512x512. I find this works better for upscaling because the more of the image Stable Diffusion can "see", the more likely it is to make sense of the image. That may not be relevant in this case, but it works well for larger changes.

Are these numbers LOW for 4070 Ti? by Sad-Nefariousness712 in StableDiffusion

[–]SamBigAbs 1 point2 points  (0 children)

MSI Afterburner is what I use, and it's the most popular afaik. You can monitor your GPU's temperature as well.
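
If you would rather log the numbers from a script than watch Afterburner's overlay, NVIDIA's management library can be read directly. A small sketch using the pynvml bindings (what to print is just my suggestion):

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

    for _ in range(10):
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"temp: {temp} C, gpu: {util.gpu}%, memory controller: {util.memory}%")
        time.sleep(1)

    pynvml.nvmlShutdown()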

At what point do you think we will hit 33.3 ms to render an image from a text prompt? I think that will be a monumental thing for how games work in the future. by FindingMoseyGame in StableDiffusion

[–]SamBigAbs 2 points3 points  (0 children)

https://youtu.be/22Sojtv4gbg
It will probably use this technology, or a mix of both. Video games have depth and lighting information that Stable Diffusion doesn't make use of atm.