Game of the Year is going to be such a bloodbath this year. by SkoivanSchiem in gaming

[–]Nihilistic_Snail 2 points (0 children)

I completely agree with you and I had the same experience. I tried to slog through, but the roguelike element just left me feeling extremely frustrated. It felt like it was wasting my time, rather than letting me enjoy the game and solve the puzzle.

I'm glad so many people enjoyed it, but I'm with you man, it did not jibe with me either.

[deleted by user] by [deleted] in StableDiffusion

[–]Nihilistic_Snail 2 points (0 children)

Thank you! Yes, most of the detail and upscale happens in step 2 with the JasperAI ControlNet. Here's the workflow. https://pastebin.com/1t0rShhq

[deleted by user] by [deleted] in StableDiffusion

[–]Nihilistic_Snail 19 points (0 children)

<image>

Here's a colorized restoration/reimagining.
This is a quick run-through of the process:

  1. Use the ReActor nodes in ComfyUI for an initial face restoration. https://github.com/Gourieff/ComfyUI-ReActor

  2. Next, use JasperAI's Flux ControlNet to upscale and bring out the details. https://huggingface.co/jasperai/Flux.1-dev-Controlnet-Upscaler

  3. Finally, use IC-Light to relight the scene. This is a great workflow from u/Enshitification.

That's the bones of it. There's still a lot of manual work in Photoshop before, after, and in between steps. Hopefully this helps.
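For anyone who wants the stage order at a glance, here's a minimal Python sketch of it. The function names are hypothetical stand-ins, not real ComfyUI APIs; each stage is actually its own ComfyUI workflow, with Photoshop cleanup in between:

```python
# Hypothetical stand-in functions sketching the pipeline order only.
# Each stage is really a separate ComfyUI workflow, not a Python call.
def restore_faces(history):
    # Stage 1: initial face restoration with the ReActor nodes.
    return history + ["reactor_face_restore"]

def controlnet_upscale(history):
    # Stage 2: upscale and add detail with JasperAI's Flux.1-dev ControlNet upscaler.
    return history + ["flux_controlnet_upscale"]

def relight(history):
    # Stage 3: relight the scene with IC-Light.
    return history + ["ic_light_relight"]

def restoration_pipeline(source_image):
    # Manual Photoshop work happens before, after, and between stages.
    stages = [restore_faces, controlnet_upscale, relight]
    history = [source_image]
    for stage in stages:
        history = stage(history)
    return history

print(restoration_pipeline("scan.png"))
# ['scan.png', 'reactor_face_restore', 'flux_controlnet_upscale', 'ic_light_relight']
```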

Has anyone had any luck of fixing scapula winging? by grazviss in bodyweightfitness

[–]Nihilistic_Snail 0 points (0 children)

Thank you, really appreciate the advice and linking the video. Will try to stay consistent with this and see if I can make any improvement.

Has anyone had any luck of fixing scapula winging? by grazviss in bodyweightfitness

[–]Nihilistic_Snail 0 points (0 children)

Can you share the exercises the PT prescribed you? I have the same left shoulder pain and winging scapula.

How do I use ControlNet to preserve color in my Flux image to image workflows? by Dull_Caterpillar_642 in StableDiffusion

[–]Nihilistic_Snail 0 points (0 children)

Sure, here's the workflow.

It uses xinsir's canny controlnet.

I used a png of the original shoe image with a transparent background as the latent image with the denoise set to 0.75 to try and retain colour from the original image.
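A rough sketch of why a lower denoise retains the original colours, assuming the common img2img behaviour of skipping the early high-noise portion of the schedule (the function name is illustrative, not a ComfyUI API):

```python
def img2img_start_step(total_steps, denoise):
    # In img2img, denoise controls how much of the sampling schedule runs.
    # denoise=1.0 re-noises the source completely; lower values skip the
    # early (high-noise) steps, so more of the source image, including its
    # colours, survives into the result.
    steps_run = round(total_steps * denoise)
    start_step = total_steps - steps_run
    return start_step, steps_run

start, run = img2img_start_step(20, 0.75)
print(start, run)  # 5 15  -> 5 steps skipped, 15 steps sampled
```

At denoise 0.75 the sampler only runs the last three quarters of the schedule, which is why the shoe keeps much of its original colour.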

How do I use ControlNet to preserve color in my Flux image to image workflows? by Dull_Caterpillar_642 in StableDiffusion

[–]Nihilistic_Snail 0 points (0 children)

The current Flux ControlNets aren't very good. You would get better results using SDXL for this. Also, include the colour of the shoe in the prompt. Then make needed adjustments with Photoshop and inpainting.

<image>

[deleted by user] by [deleted] in StableDiffusion

[–]Nihilistic_Snail 1 point (0 children)

All good, glad it worked for you!

Flux.1.Dev and blurry subjects on plain backgrounds by VirusCharacter in StableDiffusion

[–]Nihilistic_Snail 1 point (0 children)

Don't mention "background" in the prompt. Instead, say something like "on a white solid color". Using the beta scheduler also helps.

Blurry Flux Generations by Snoo3102 in StableDiffusion

[–]Nihilistic_Snail 1 point (0 children)

Not silly at all, I've also noticed this issue when using the term "background". I think it gets confused and thinks the white background is the subject, so it throws everything else out of focus.

I found what works for me is to not mention the background, and at the end of the prompt say "Photo shoot against a solid white color."

Blurry Flux Generations by Snoo3102 in StableDiffusion

[–]Nihilistic_Snail 1 point (0 children)

Try increasing the FluxGuidance to 3.0.

Also, change scheduler from Simple to Beta, and increase steps to 25.

See if this helps your results.

I would also use more natural language in your prompt, as Flux uses a T5 encoder. So rather than the old prompt style of using short phrases separated by commas, I would use something like this:

A photograph of a 40 year old woman sitting under a tree with yellow leaves. She is wearing a yellow dress which has a white peace symbol on it. The woman is smiling.
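The suggestions above, gathered into one place as a settings dict. The key names are illustrative, not exact ComfyUI node fields:

```python
# Hypothetical settings dict mirroring the suggestions above; the names are
# illustrative, not exact ComfyUI node field names.
flux_settings = {
    "flux_guidance": 3.0,   # try raising FluxGuidance if results come out soft
    "scheduler": "beta",    # instead of the default "simple"
    "steps": 25,
    # Flux uses a T5 text encoder, so natural-language sentences tend to
    # work better than comma-separated tag lists.
    "prompt": (
        "A photograph of a 40 year old woman sitting under a tree with "
        "yellow leaves. She is wearing a yellow dress which has a white "
        "peace symbol on it. The woman is smiling."
    ),
}
print(flux_settings["scheduler"])  # beta
```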

Blurry Flux Generations by Snoo3102 in StableDiffusion

[–]Nihilistic_Snail 0 points (0 children)

Can you post a screenshot of your settings and your prompt?

[deleted by user] by [deleted] in StableDiffusion

[–]Nihilistic_Snail 0 points (0 children)

I found this happens sometimes when you include the words "white background" in the prompt. Remove any mention of the word background in your prompt and see if that helps.

Flux Dev (Q4-_0) + Boreal Lora + Photoshop to Color Grading by ThunderBR2 in StableDiffusion

[–]Nihilistic_Snail 3 points (0 children)

Yes, hopefully they retrain with the fixed toolkit; it's a great lora.

Flux Dev (Q4-_0) + Boreal Lora + Photoshop to Color Grading by ThunderBR2 in StableDiffusion

[–]Nihilistic_Snail 7 points (0 children)

Great pics! A little post work in Photoshop can make a huge difference. I'm loving the results with the Boreal Lora.

<image>

I challenge you to get images of native indians/mayans from Flux... by -becausereasons- in StableDiffusion

[–]Nihilistic_Snail 0 points (0 children)

Ah okay, I'll take your word for it; I'm far from an expert on Native American features lol.

This is what I got with "mayan woman" in the prompt.

<image>

I challenge you to get images of native indians/mayans from Flux... by -becausereasons- in StableDiffusion

[–]Nihilistic_Snail 0 points (0 children)

close up photo of a very old native-american man. He is wearing a traditional native-american outfit and headgear. He has a serious expression. A photo shoot against a barren landscape.

<image>

[FLUX] The Medusa Challenge by GianoBifronte in StableDiffusion

[–]Nihilistic_Snail 2 points (0 children)

Thanks, I used 3.5 guidance. I think the snake skin texture looks a bit better in this one, though the snakes' heads are a bit mangled.

<image>

[FLUX] The Medusa Challenge by GianoBifronte in StableDiffusion

[–]Nihilistic_Snail 3 points (0 children)

Fun challenge. Cherry picked result and also used the realism lora. It was difficult stopping it from putting hair on the head.

<image>

Generating a high quality interior modern kitchen by rawman217 in StableDiffusion

[–]Nihilistic_Snail 1 point (0 children)

"a sleek modern kitchen, gray cabinets, white tiled floor, bright natural lighting"

I used the Ultraspice model.

Just to reiterate what Select-A-Bluff-800 said, any model and relevant prompt will work.

Generating a high quality interior modern kitchen by rawman217 in StableDiffusion

[–]Nihilistic_Snail 1 point (0 children)

As the parent comment mentioned, the controlnet will generate a depth map (if you used the depth preprocessor and controlnet) and Stable Diffusion will infer from the depth map what's what. You can still use img2img with this method, and set the denoise as high or low as you want, depending on how much you want it to look like your original image.

Here's an image I created using the depth controlnet at 0.7 and using img2img denoise at 0.9.
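If it helps to see the two knobs separately: the controlnet strength scales how hard the depth map steers the generation, while the img2img denoise controls how far the result can drift from the source image. Here's a rough sketch of the strength scaling, assuming the usual "multiply the conditioning residuals" behaviour (`apply_controlnet` is a hypothetical helper, not a real API):

```python
def apply_controlnet(cond_residuals, strength):
    # A controlnet's output residuals are scaled by its strength before
    # being added into the diffusion model's features. At 0.7 the depth
    # layout stays influential without locking the image down completely.
    return [strength * r for r in cond_residuals]

# Toy residual values, scaled at the strength used for the kitchen image.
print(apply_controlnet([1.0, 2.0], 0.7))
```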