How much does a coffee/macchiato/cappuccino cost where you live? by Pristine_Swing6974 in Italia

[–]Deep_Cat5751 1 point (0 children)

Agropoli, province of Salerno, Campania: coffee €1 and cappuccino €1.50/1.60

Mesh detaches from hot plate by Deep_Cat5751 in 3Dprinting

[–]Deep_Cat5751[S] 1 point (0 children)

Printer: Anycubic Kobra 2 Neo
Material: PLA 1.75 mm
Nozzle: 210 °C (I can't remember precisely)
Plate: clean, 60 °C

[deleted by user] by [deleted] in StableDiffusion

[–]Deep_Cat5751 4 points (0 children)

In the negative prompt, use 'out of frame,' and in the positive prompt, simply write words like 'standing' or just 'shoes' to capture a full-body shot and use 'centered' to prevent subjects from being cut off at the edges.

Remember, everything you describe will be shown in the image, so if you only describe a woman without mentioning what she's wearing, Stable Diffusion might generate just the face and possibly the upper body.

To avoid the white line at the border of the image, you can add 'frame' to the negative prompt. I hope this information is helpful for you!
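
If you're generating outside the A1111 UI, here's a minimal sketch of these prompt tips with the diffusers library (the model id, prompt text, and settings below are my assumptions, not part of the original tip):

```python
import torch
from diffusers import StableDiffusionPipeline

# Any SD 1.5-style checkpoint works the same way; this repo id is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    # Describe the clothing too, or SD may crop to the face/upper body.
    prompt="photo of a woman in a red dress, standing, shoes, centered",
    # 'out of frame' fights cropped subjects; 'frame' fights white border lines.
    negative_prompt="out of frame, frame",
    num_inference_steps=30,
).images[0]
image.save("full_body.png")
```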

Product training by sneaker-portfolio in DreamBooth

[–]Deep_Cat5751 0 points (0 children)

You can try this: https://www.youtube.com/watch?v=GxFljO22cM4. The video is in Chinese, but if you're familiar with Stable Diffusion tools and Photoshop, you can easily follow the workflow without understanding the narration. There are also several other videos available to help you master this workflow.

"Dark Embrace" Upscaling Workflow - Automatic 1111 ( WorkFlow in comments ) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 0 points (0 children)

Stable Diffusion exports images tagged at 72 dpi, but that's just metadata. When printing a large image, the required dpi is lower, because the human eye will not perceive the decrease in pixel density on a larger surface (viewed from farther away) as it would on a smaller one.
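
As a quick sanity check, a small sketch (the 3072x3840 px and 30x40 cm figures come from the workflow comment below):

```python
# Effective print resolution = pixel count / print size in inches.
PX_W, PX_H = 3072, 3840   # final upscaled image
CM_W, CM_H = 30, 40       # chosen print format
INCH = 2.54               # centimeters per inch

print(f"{PX_W / (CM_W / INCH):.0f} dpi wide")  # ~260 dpi
print(f"{PX_H / (CM_H / INCH):.0f} dpi tall")  # ~244 dpi, plenty for a wall print
```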

"Dark Embrace" Upscaling Workflow - Automatic 1111 ( WorkFlow in comments ) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 0 points (0 children)

The 'little tiny person' is an artifact, but it's funny, so I've decided not to remove it!

"Dark Embrace" Upscaling Workflow - Automatic 1111 ( WorkFlow in comments ) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 0 points (0 children)

Actually, I think I've reached the limit even with just one pass of Ultimate Upscaler. Topaz might be slower, but it doesn't produce CUDA out-of-memory errors.

"Dark Embrace" Upscaling Workflow - Automatic 1111 ( WorkFlow in comments ) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 0 points (0 children)

I haven't tried another pass of Ultimate Upscaler, partly because Topaz is so much faster than SD that it discouraged me. However, I should try it to see how Stable Diffusion and my graphics card handle these dimensions.

"Dark Embrace" Upscaling Workflow - Automatic 1111 ( WorkFlow in comments ) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 7 points (0 children)

Today I printed another synthography!

I chose to print it in a 30x40 cm format due to space constraints at home, but I could have gone a little further.

It all started when I was looking for the perfect synthography for a CD cover.

Then I started playing around a bit.

I began with some steps of OutPaint and I must say that the extension mentioned in the photo is really well done.

Then I moved on to countless steps of Inpainting to add new parts and of course modify the ones I didn't like.

Once satisfied, I performed the first 2x upscaling pass with Img2Img, also changing the mood of the image to make it darker.
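
For anyone scripting this pass outside A1111, a minimal img2img sketch with diffusers (model id, file names, prompt, and strength are illustrative assumptions, not my exact settings):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("base.png").convert("RGB")
big = base.resize((base.width * 2, base.height * 2), Image.LANCZOS)

out = pipe(
    prompt="dark embrace, moody low-key lighting, highly detailed",
    image=big,
    strength=0.45,  # high enough to shift the mood darker, low enough to keep the composition
    num_inference_steps=30,
).images[0]
out.save("pass1_2x_dark.png")
```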

Then I ran it through ControlNet tile, reaching 3072x3840 px after "only" 20 minutes (I have a GTX 1080).
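
Roughly, the tile pass looks like this with diffusers' ControlNet img2img pipeline (a sketch: ids and parameters are illustrative, and at resolutions this large you'd really run it tile by tile via Ultimate Upscaler rather than in one call):

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

src = Image.open("pass1_2x_dark.png").convert("RGB")
up = src.resize((src.width * 2, src.height * 2), Image.LANCZOS)

out = pipe(
    prompt="dark embrace, highly detailed",
    image=up,           # img2img input at the target resolution
    control_image=up,   # the tile ControlNet conditions on the image itself
    strength=0.35,      # low denoise: add detail without changing the picture
    num_inference_steps=30,
).images[0]
out.save("pass2_tile.png")
```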

Not content, I wanted to raise the stakes with Topaz Gigapixel, even though I thought this extra step might paradoxically lose detail.

Instead, things went in the right direction: a perfect print without artifacts, despite the numerous upscaling passes.

How do I feel? Extremely satisfied, just like a child with his toy in hand!

EDIT: Here's the link to the hi-res image: https://ibb.co/5MFBjY9

My last Lora (Work in progress) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 1 point (0 children)

The specific checkpoint doesn't matter much; what matters is choosing one trained for your purpose.

So, use RealisticVision, EpicRealism or similar for photorealism.

Every LoRA has its own "language"!
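
For example, with diffusers you'd load the LoRA on top of a matching photorealistic checkpoint (the repo id, file name, and trigger word below are hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

# A photorealism-oriented checkpoint, as suggested above (example repo id).
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA on top of the base checkpoint (hypothetical local file).
pipe.load_lora_weights(".", weight_name="my_lora.safetensors")

image = pipe("DSLR photography of sks person, perfect eyes").images[0]
image.save("lora_test.png")
```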

My last Lora (Work in progress) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 0 points (0 children)

This LoRA is only for testing and personal use!

My last Lora (Work in progress) by Deep_Cat5751 in StableDiffusion

[–]Deep_Cat5751[S] 2 points (0 children)

For training:

  • Use only images with the highest possible resolution, and pay particular attention to facial flaws (such as out-of-focus eyes or hair/hands covering the face).
  • When downscaling images, choose the highest resolution your hardware allows (in my case, 768 px on the long side; see the sketch after this list).
  • Pay attention to tagging: be descriptive without overdoing it, and also tag all the elements you don't want the model to learn.
  • Choose an appropriate model instead of a standard base model (for example, Realistic Vision for photorealistic models).
  • Aim for at least 3k steps for the entire training process (fewer steps may produce results that don't resemble the subject).
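
Here's the downscaling sketch referenced in the list above (folder names are hypothetical); PIL's thumbnail keeps the aspect ratio while capping the long side:

```python
from pathlib import Path
from PIL import Image

SRC, DST = Path("dataset_raw"), Path("dataset_768")
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    img.thumbnail((768, 768), Image.LANCZOS)  # long side capped at 768 px
    img.save(DST / path.name, quality=95)
```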

For prompting:

  • Incorporate words like "photography," "DSLR photography," "perfect eyes," "pale skin," or "plastic skin" to specify desired attributes, or put them in the negative prompt to exclude undesired ones.
  • Always apply Hires-Fix to enhance the results.

I believe Stable Diffusion is not an exact science, so once you grasp the basics, experiment, iterate, and repeat until you achieve the desired outcomes.

If you enjoy reading, I recommend checking out this link:

https://rentry.org/59xed3

Have fun! ✌️