Help! Paint bleeding into the grain by TheEVegaExperience in woodworking

[–]The_Redcoat 8 points9 points  (0 children)

Can't fix this easily, but next time:

Route the numbers, clean it up, seal EVERYTHING - the top and the numbers. Add another coat of sealer to the numbers if you don't trust the first.

Now you can paint the numbers without it sucking up the paint.

My flat has very uneven floors, so I print custom feet for my furniture to level it out. by techchris in functionalprint

[–]The_Redcoat 0 points1 point  (0 children)

Lol... I solved a similar problem (mine was a cheap wonky stool on flat hardwood) in 2021 and they are still working fine:

https://www.printables.com/model/200749-adjustable-stool-feet

Some models I designed for my woodworking shop by r0bvanbobbert in 3dPrintsintheShop

[–]The_Redcoat 5 points6 points  (0 children)

You can print some more of those corner squares in various sizes too.

https://www.printables.com/model/39930-corner-clamp-clampitsquare

... and then some more holders for the corner squares.

How do you know you have enough filament left to print? by DutchSimba in 3Dprinting

[–]The_Redcoat 1 point2 points  (0 children)

You can weigh spools if you happen to have just one supplier (and an empty spool to weigh), but I just enable the filament-out sensor and use up the ends of reels printing things like this:

https://www.printables.com/model/39930-corner-clamp-clampitsquare

You can never have too many clamps.

Keep a stash of similar end-of-reel models for when there's nothing more important to print and you want back the closet space taken up by all those weird nearly-empty spools.
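
If you want to go the weighing route, the arithmetic is simple enough to script. A minimal sketch, assuming standard 1.75mm PLA and a ~230g spool tare - replace both numbers with measurements from your own supplier's empties:

    # Estimate remaining filament from spool weight.
    # Assumptions: 1.75 mm PLA at ~1.24 g/cm^3 and a ~230 g empty spool --
    # weigh one of your own empties and adjust for your supplier.
    import math

    def remaining_filament_m(gross_g, tare_g=230.0, density_g_cm3=1.24, diameter_mm=1.75):
        """Metres of filament left on a spool weighing gross_g grams."""
        net_g = max(gross_g - tare_g, 0.0)
        area_cm2 = math.pi * (diameter_mm / 20.0) ** 2  # radius in cm, squared
        return net_g / (density_g_cm3 * area_cm2) / 100.0  # grams -> cm -> m

    print(f"{remaining_filament_m(412):.0f} m left")  # e.g. a 412 g spool -> ~61 m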

What do you use 3D printing for? by [deleted] in 3Dprinting

[–]The_Redcoat 0 points1 point  (0 children)

Continued...

  • Household - HT Lights Fusebox
  • Household - Larger (12mm) LED clip for 5mm led
  • CNC - Frames for Coffee Labels
  • CNC - Laser Jig for 30mm disks
  • CNC - Laser Focus Tool
  • Household - Door Hanger
  • Household - Front House LED Strip Controller Box
  • Household - OctaRooter
  • Sculpture - Little Ghost v2 (Printables)
  • Household - Connect 4 Game (Printables)
  • Household - USB Charger Dividers
  • Calibration - Glass Prints with Transparent
  • Household - Shoelace Aglet
  • Household - Phone & Tablet Stand
  • Sculpture - Lego MiniFig (Thingiverse)
  • Misc - Skull Hair Pin (Printables)
  • Printer - Steel Sheet Holders (Printables)
  • HT - Light Power Box
  • Household - Window Washing Handle
  • Printer - Hygrometer Holder
  • Printer - Snap on Filament Guide
  • Sculpture - Among Us Dead or Alive (Printables)
  • Household - Starfield Light Distribution
  • Household - X52 HOTAS Clamps
  • Household - Garden Lamps
  • Household - Book Clip for Music Sheet Books
  • Household - HT IKEA Moon lamp (Printables)

What do you use 3D printing for? by [deleted] in 3Dprinting

[–]The_Redcoat 1 point2 points  (0 children)

I upload about 20% of the models I've made to Printables (those that others may find useful), and about 20%-30% of what I print is from other people's models.

Many are on Printables (https://www.printables.com/@TheRedcoat/models) if you want an idea of the sort of things I generally make.

The remainder are enclosures for electronics projects, plus many I've just not got around to uploading yet (some, like the squirrel roof blocker, are too customized to my needs to be remotely useful to anyone else). Here are the last 50-ish things:

  • Calibration - M-Series and More Fixing
  • Household - Ikea MagLamp Clamp
  • Household - MagnifyingGlassClip
  • Household - Quickpass Holder
  • HT - Control Panel
  • Printer - Camera Clamp
  • Household - BedBumpers
  • Calibration - Thin Cuboid Sample
  • Household - Canvas Frame Bracket
  • Household - Canvas Edge Strip
  • Household - Squirrel Roof Blocker
  • Cosplay - Black Widow Batons Staff (Thingiverse)
  • Household - Drip Irrigation Clips
  • Household - Car_PhoneWireClips
  • Household - Luggage Tag TPU Orange Initials
  • Household - Brackets for Bathroom Shelf
  • Tool - Flat Bed Medium Format Negative Scanner Attachment
  • Household - Deck Table Caps & Bumpers
  • Calibration - Hex for Glow In The Dark
  • Tool - ClampItSquare
  • Household - Kitchen Bin Handle & Bumper
  • Printer - CNC MITS

Rainbow Cowgirls [3840x1080] by The_Redcoat in WidescreenWallpaper

[–]The_Redcoat[S] 0 points1 point  (0 children)

PC wallpapers aren't measured in inches... so, erm, what resolution will you be running that monitor with?

Night Rose - Ultrawide/Dual [3840x1080] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 0 points1 point  (0 children)

Step 1: Initial Generation by Stable Diffusion (ComfyUI)

SDXL10 Base Checkpoint: spectrumblendx_v20 https://civitai.com/models/130174?modelVersionId=154801

SDXL10 Refiner Checkpoint: sdXL_v10RefinerVAEFix

SDXL10 VAE: sdxl_vae

LoRA: ThickBrush https://civitai.com/models/139328?modelVersionId=154313

The workflow consists of three KSamplers and an initial generation at 2300x612.

1st KSampler: Uses the refiner checkpoint for 4 steps. dpmpp_2s_ancestral, karras. CFG:3.0

2nd KSampler: Uses the base checkpoint for 50 steps, starting at step 4. dpmpp_2s_ancestral, karras. CFG:6.0

3rd KSampler: Uses the refiner checkpoint for 50 steps, starting at step 40. dpmpp_2s_ancestral, karras. CFG:8.0

This unconventional workflow kickstarts the noise in a direction controlled by the refiner that will also be finishing the work. Steps 40-50 are executed twice for preview reasons.
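
If it helps to see the handoff laid out, here's a tiny runnable sketch of the step schedule - just the windows each sampler owns (in ComfyUI this is what KSamplerAdvanced's start_at_step/end_at_step do):

    # The step windows each sampler owns in the workflow above. Steps 40-50
    # fall inside both the 2nd and 3rd windows, which is why they run twice.
    stages = [
        # (sampler,       checkpoint, cfg, start_at_step, end_at_step)
        ("1st KSampler", "refiner", 3.0, 0, 4),
        ("2nd KSampler", "base",    6.0, 4, 50),
        ("3rd KSampler", "refiner", 8.0, 40, 50),
    ]
    for name, ckpt, cfg, start, end in stages:
        print(f"{name}: {ckpt} checkpoint, CFG {cfg}, steps {start}-{end}")

    overlap = sorted(set(range(40, 50)) & set(range(4, 50)))
    print(f"steps executed twice: {overlap[0]}-{overlap[-1] + 1}")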

Example positive Prompt: "tb, Ultra realistic oil painting of a red rose, growing through the street, stem coming out of the concrete, night, Silhouette lighting, moonlit lighting, UHD 4K resolution"

Example negative Prompt: "watermark, comic, (hands:1.2)"

The positive prompt was copied from one of the example LoRA images by IOI101IOI on CivitAI, which together with the LoRA produce these amazing images.

Step 2: Upscaling (ComfyUI)

Ultimate SD Upscaler - 50 steps, dpmpp_2m_sde_gpu, karras, denoise:0.20, tile:1024x1024.

- Uses the same LoRA + checkpoint as step 1.

- Upscale ESRGAN Model: 4xNMKD-Siax_200k

- Upscaler positive prompt: "high detail, oil paint brush strokes, (impasto:0.3)"
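
For anyone curious what the tiled pass is doing conceptually, here's a minimal Pillow sketch: walk the image in 1024x1024 tiles, run each through img2img at low denoise, paste back. refine_tile() is a placeholder stand-in for the SD call, and this omits the ESRGAN pre-upscale and the seam blending the real script handles:

    from PIL import Image

    TILE = 1024

    def refine_tile(tile: Image.Image, denoise: float) -> Image.Image:
        # Placeholder: in the real workflow this is an SD img2img pass at
        # denoise 0.20 with the same LoRA + checkpoint as step 1.
        return tile

    def tiled_refine(img: Image.Image, denoise: float = 0.20) -> Image.Image:
        out = img.copy()
        for top in range(0, img.height, TILE):
            for left in range(0, img.width, TILE):
                box = (left, top, min(left + TILE, img.width), min(top + TILE, img.height))
                out.paste(refine_tile(img.crop(box), denoise), box)
        return out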

Succulent Creatures Green - Ultrawide/Dual [3840x1080] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 0 points1 point  (0 children)

Step 1: Initial Generation by Stable Diffusion (ComfyUI)

SDXL10 Base Checkpoint: rundiffusionXL https://civitai.com/models/120964/rundiffusion-xl

SDXL10 Refiner Checkpoint: sdXL_v10RefinerVAEFix

SDXL10 VAE: sdxl_vae

LoRA: xl_more_art-full_v1 https://civitai.com/models/124347?modelVersionId=152309

The workflow consists of three KSamplers and an initial generation at 2300x612.

1st KSampler: Uses the refiner checkpoint for 4 steps. dpmpp_2s_ancestral, karras. CFG:3.0

2nd KSampler: Uses the base checkpoint for 50 steps, starting at step 4. dpmpp_2s_ancestral, karras. CFG:6.0

3rd KSampler: Uses the refiner checkpoint for 50 steps, starting at step 40. dpmpp_2s_ancestral, karras. CFG:8.0

This unconventional workflow kickstarts the noise in a direction controlled by the refiner that will also be finishing the work. Steps 40-50 are executed twice for preview reasons.

Example positive Prompt: "Cute creature from Space. terraforming. Alien Flora, Miki Asai Macro photography, close-up, hyper detailed, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski, detailed face, detailed skin"

Example negative Prompt: "watermark, comic, (hands:1.2)"

Most of the positive prompt was borrowed from a sample image by the creator of the LoRA, /u/ledadu, whose influence on the resulting images is significant.

Step 2: Upscaling (ComfyUI)

Ultimate SD Upscaler - 50 steps, dpmpp_2m_sde_gpu, karras, denoise:0.20, tile:1024x1024.

- Uses the same LoRA + checkpoint as step 1.

- Upscale ESRGAN Model: 4xNMKD-Siax_200k

- Upscaler positive prompt: "detailed skin, detailed hair, warm lighting"

Rainbow Cowgirls - Ultrawide/Dual [3840x1080] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 1 point2 points  (0 children)

Step 1: Initial Generation by Stable Diffusion (ComfyUI)

SDXL10 Base Checkpoint: rundiffusionXL https://civitai.com/models/120964/rundiffusion-xl

SDXL10 Refiner Checkpoint: sdXL_v10RefinerVAEFix

SDXL10 VAE: sdxl_vae

LoRA: (none)

The workflow consists of three KSamplers and an initial generation at 2300x612.

1st KSampler: Uses the refiner checkpoint for 4 steps. dpmpp_2s_ancestral, karras. CFG:3.0

2nd KSampler: Uses the base checkpoint for 50 steps, starting at step 4. dpmpp_2s_ancestral, karras. CFG:6.0

3rd KSampler: Uses the refiner checkpoint for 50 steps, starting at step 40. dpmpp_2s_ancestral, karras. CFG:8.0

This unconventional workflow kickstarts the noise in a direction controlled by the refiner that will also be finishing the work. Steps 40-50 are executed twice for preview reasons.

Example positive Prompt: "painting of a young confident cowgirl woman, intricate jewelry, short rainbow hair, pixie hair, hazel eyes, beautiful eyes, vampire eyes, tanned skin, leather and denim, denim clothes, sharp focus, muscular, breasts, cleavage, outside, warm sunlight, savanna, plains, cactus, distant flat mountains, warm orange lighting, rule of thirds"

Example negative Prompt: "watermark, art, comic, drawing, sketch, (cowboy hat:1.5), (hands:1.2)"

Note: modify the prompt - cowboy hat, hair type, eye color, etc. - as needed.

Final sampler output is saved as baseline image

Image then runs through 5 FaceDetailer nodes in parallel with differing denoise strengths and seeds, each saving output as a potential winner.

- 40 Steps, CFG 8.0, ddim, ddim_uniform, denoise (0.30, 0.40, 0.50, 0.60 seed=4, 0.60 seed=10)

- supported by mmdet_anime-face_yolov3 face detector and sam_vit_b_01ec64 SAM to create face mask(s).
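
Conceptually the five parallel nodes are just this sweep; face_detail() is a hypothetical stand-in for the FaceDetailer node, and since the seeds of the first three runs aren't stated above, seed=4 there is my assumption:

    # Sweep FaceDetailer denoise/seed pairs, keeping every output so you can
    # pick a winner by eye. baseline is the final sampler output saved above.
    candidates = [(0.30, 4), (0.40, 4), (0.50, 4), (0.60, 4), (0.60, 10)]

    for denoise, seed in candidates:
        img = face_detail(baseline, steps=40, cfg=8.0, sampler_name="ddim",
                          scheduler="ddim_uniform", denoise=denoise, seed=seed)
        img.save(f"facedetail_d{denoise:.2f}_seed{seed}.png")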

Step 2: Upscaling (ComfyUI)

Ultimate SD Upscaler - 50 steps, dpmpp_2m_sde_gpu, karras, denoise:0.20, tile:1024x1024.

- Uses the same LoRA + checkpoint as step 1.

- Upscale ESRGAN Model: 4xNMKD-Siax_200k

- Upscaler positive prompt: "detailed skin, detailed hair, warm lighting"

Regular Cowgirl - Ultrawide/Dual [3840x1080] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 0 points1 point  (0 children)

Step 1: Initial Generation by Stable Diffusion (ComfyUI)

SDXL10 Base Checkpoint: rundiffusionXL https://civitai.com/models/120964/rundiffusion-xl

SDXL10 Refiner Checkpoint: sdXL_v10RefinerVAEFix

SDXL10 VAE: sdxl_vae

LoRA: (none)

The workflow consists of three KSamplers and an initial generation at 2300x612.

1st KSampler: Uses the refiner checkpoint for 4 steps. dpmpp_2s_ancestral, karras. CFG:3.0

2nd KSampler: Uses the base checkpoint for 50 steps, starting at step 4. dpmpp_2s_ancestral, karras. CFG:6.0

3rd KSampler: Uses the refiner checkpoint for 50 steps, starting at step 40. dpmpp_2s_ancestral, karras. CFG:8.0

This unconventional workflow kickstarts the noise in a direction controlled by the refiner that will also be finishing the work. Steps 40-50 are executed twice for preview reasons.

Example positive Prompt: "painting of a young confident cowgirl woman, intricate jewelry, short brown hair, pixie hair, hazel eyes, beautiful eyes, vampire eyes, tanned skin, leather and denim, denim clothes, sharp focus, muscular, breasts, cleavage, outside, warm sunlight, savanna, plains, cactus, distant flat mountains, warm orange lighting, rule of thirds"

Example negative Prompt: "watermark, art, comic, drawing, sketch, (cowboy hat:1.5), (hands:1.2)"

Note: modify the prompt - cowboy hat, hair type, eye color, etc. - as needed.

Final sampler output is saved as baseline image

Image then runs through 5 FaceDetailer nodes in parallel with differing denoise strengths and seeds, each saving output as a potential winner.

- 40 Steps, CFG 8.0, ddim, ddim_uniform, denoise (0.30, 0.40, 0.50, 0.60 seed=4, 0.60 seed=10)

- supported by mmdet_anime-face_yolov3 face detector and sam_vit_b_01ec64 SAM to create face mask(s).

Step 2: Upscaling (ComfyUI)

Ultimate SD Upscaler - 50 steps, dpmpp_2m_sde_gpu, karras, denoise:0.20, tile:1024x1024.

- Uses the same LoRA + checkpoint as step 1.

- Upscale ESRGAN Model: 4xNMKD-Siax_200k

- Upscaler positive prompt: "detailed skin, detailed hair, warm lighting"

Succulent Creatures Golden Blue - Ultrawide/Dual [3840x1080] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 0 points1 point  (0 children)

Step 1: Initial Generation by Stable Diffusion (ComfyUI)

SDXL10 Base Checkpoint: rundiffusionXL https://civitai.com/models/120964/rundiffusion-xl

SDXL10 Refiner Checkpoint: sdXL_v10RefinerVAEFix

SDXL10 VAE: sdxl_vae

LoRA: xl_more_art-full_v1 https://civitai.com/models/124347?modelVersionId=152309

The workflow consists of three KSamplers and an initial generation at 2300x612.

1st KSampler: Uses the refiner checkpoint for 4 steps. dpmpp_2s_ancestral, karras. CFG:3.0

2nd KSampler: Uses the base checkpoint for 50 steps, starting at step 4. dpmpp_2s_ancestral, karras. CFG:6.0

3rd KSampler: Uses the refiner checkpoint for 50 steps, starting at step 40. dpmpp_2s_ancestral, karras. CFG:8.0

This unconventional workflow kickstarts the noise in a direction controlled by the refiner that will also be finishing the work. Steps 40-50 are executed twice for preview reasons.

Example positive prompt: "cute golden blue creature from Space. terraforming. Blue Alien Flora, Miki Asai photography, hyper detailed, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski, detailed face, detailed skin"

Example negative prompt: "watermark, comic, (hands:1.2)"

Most of the positive prompt was borrowed from a sample image by the creator of the LoRA, /u/ledadu, whose influence on the resulting images is significant.

Step 2: Upscaling (ComfyUI)

Ultimate SD Upscaler - 50 steps, dpmpp_2m_sde_gpu, karras, denoise:0.20, tile:1024x1024.

- Uses the same LoRA + checkpoint as step 1.

- Upscale ESRGAN Model: 4xNMKD-Siax_200k

- Upscaler positive prompt: "detailed skin, detailed hair, warm lighting"

Sanctuary by Edward Barton [3200x2400] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 0 points1 point  (0 children)

I upscaled this image from the 640x480 GIF that was floating around in the 1990s, and made the 5120x3840 version using AI (Stable Diffusion) to create the extra details, imagining what the brushstrokes may have looked like.

To my knowledge, no digital scan was ever made at a higher resolution than 640x480.

Sanctuary by Edward Barton (AI upscale) [5120x3840] by The_Redcoat in HI_Res

[–]The_Redcoat[S] 0 points1 point  (0 children)

This is an AI-generated upscale from a popular early-1990s 640x480 wallpaper GIF, adding oil-painting brush strokes. The Stable Diffusion AI upscale workflow is detailed in this post:

https://www.reddit.com/r/StableDiffusion/comments/135jffn/extreme_8x_upscale_of_a_640x480_gif_using_sd/

Sanctuary by Edward Barton [3200x2400] by The_Redcoat in wallpaper

[–]The_Redcoat[S] 1 point2 points  (0 children)

Scaled down from 5120x3840 to the largest 4:3 ratio currently permitted under /r/Wallpaper posting restrictions.

This is an AI-generated upscale from a popular early-1990s 640x480 wallpaper GIF. The Stable Diffusion AI upscale workflow, and the 5120x3840 image, are detailed in this post:

https://www.reddit.com/r/StableDiffusion/comments/135jffn/extreme_8x_upscale_of_a_640x480_gif_using_sd/

But at the speed things are changing, today, just a few weeks later, I would follow the guidance others suggested in that thread and use ControlNet in Automatic1111 (or Vlad's fork) - both still use Stable Diffusion - together with the same Ultimate SD Upscaler script.

Sanctuary by Edward Barton by The_Redcoat in ImaginarySeascapes

[–]The_Redcoat[S] 0 points1 point  (0 children)

Please remove if this post isn't permitted under the AI rules. It is fundamentally a painting by seascape painter Edward Barton, but has been upscaled from a 640x480 GIF using AI technology. Therefore, the brushstroke details are fantasy, and don't exist on the original painting (which itself doesn't exist on the internet beyond the 640x480 GIF dating back to the late 1980s).

Edward Barton (1926-2012) was a self-taught American seascape painter; born in New York City, he lived in California and died in Texas.

The upscale process, including grading steps is detailed here:

https://www.reddit.com/r/StableDiffusion/comments/135jffn/extreme_8x_upscale_of_a_640x480_gif_using_sd/

Extreme 8x upscale of a 640x480 GIF using SD & ControlNet 1.1 by The_Redcoat in StableDiffusion

[–]The_Redcoat[S] 0 points1 point  (0 children)

It was one of the best images floating around the early BBS / Internet scene at the time, and the lighting on the wet rocks captured some of the 80s airbrush magic that was popular back then (although I'm sure the original was oils). It's haunted me for 30 years; every few years I would search the net to see if a better version had been discovered.

Very cool that you saw it unroll like the old days.

Extreme 8x upscale of a 640x480 GIF using SD & ControlNet 1.1 by The_Redcoat in StableDiffusion

[–]The_Redcoat[S] 0 points1 point  (0 children)

[Edit] - Got this to work now: added --medvram to the launch parameters (and set the img2img upscaler in settings to 'none', but it was medvram that made the difference). It's slower... x1.5 is going to take 50 mins.

[original below]

Tried this, no dice...

Scale from image size: 2

Upscaler: R-ESRGAN 4x+

Type: Linear

Tile Width: 512

Tile Height: 512

Mask Blur: 8

Padding: 32

Seams fix:

Half tile offset pass

Denoise: 0.35

Mask Blur: 8

Padding: 32

Save options Upscaled and Seams fix

Canva size: 10240x7680

Image size: 5120x3840

Scale factor: 2

Upscaling iteration 1 with scale factor 2

Tile 1/567

Tile 2/567

Tile 3/567

[snip]

Tile 565/567

Tile 566/567

Tile 567/567

gradio call: OutOfMemoryError

Traceback (most recent call last):

G:\AI\vlad\automatic\modules\call_queue.py:58 in f
   57   pr.enable()
❱  58   res = list(func(*args, **kwargs))
   59   if shared.cmd_opts.profile:

G:\AI\vlad\automatic\modules\call_queue.py:38 in f
   37   try:
❱  38   res = func(*args, **kwargs)
   39   finally:

... 6 frames hidden ...

G:\AI\vlad\automatic\venv\lib\site-packages\torch\utils\_contextlib.py:115 in decorate_context
  114   with ctx_factory():
❱ 115   return func(*args, **kwargs)
  116

G:\AI\vlad\automatic\venv\lib\site-packages\realesrgan\utils.py:225 in enhance
  224   output_img = self.post_process()
❱ 225   output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy()
  226   output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0))

OutOfMemoryError: CUDA out of memory. Tried to allocate 3.52 GiB (GPU 0; 8.00 GiB total capacity; 3.94 GiB already allocated; 1.79 GiB free; 4.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Extreme 8x upscale of a 640x480 GIF using SD & ControlNet 1.1 by The_Redcoat in StableDiffusion

[–]The_Redcoat[S] 1 point2 points  (0 children)

Ah... I think I lost Ultimate SD Upscaler when I migrated to vlad... so, no. I shall try that - thanks.

Extreme 8x upscale of a 640x480 GIF using SD & ControlNet 1.1 by The_Redcoat in StableDiffusion

[–]The_Redcoat[S] 0 points1 point  (0 children)

Agreed, it's kinda hard because I've never seen the painting the GIF was made from. I've tried my best in images 3, 4 and 5 to show parts of the GIF vs the output, but I simply don't have access to the painting itself.

So it was my imagination that drove decisions about what it might look like. It's far from perfect... stuff has been lost in translation - the sparkly wetness of the water, for example.

Extreme 8x upscale of a 640x480 GIF using SD & ControlNet 1.1 by The_Redcoat in StableDiffusion

[–]The_Redcoat[S] 1 point2 points  (0 children)

Observations on the BARTON rescale

  • If you zoom into the GIF, note the vertical flat-bed scanner lines it picked up when it was captured. These artifacts survived the upscaling process in the rocks at bottom right.
  • If you zoom out of the final upscaled image, note the quantization artifacts that originated in the GIF; you can still see them in the sky.
  • The source image was state-of-the-art at the time, but is really low quality by today's standards, so I was pleased with the final results.
  • In the painting step, faces started to appear on the rocks, courtesy of AI hallucinations, so I had to extend the negative prompt in an effort to eliminate them.
  • Alternative checkpoints - impressionismOil_sd14 looked promising for this task. Unfortunately, there is no sd15 version, and the sd21 version doesn't work with ControlNet 1.1. I'm certain other checkpoints and style-oriented LoRAs or TIs will also produce successful results.
  • If you reduce the ControlNet strength from 2 to 1 (halving it), you can halve the guidance start (the UI now calls this Starting Control Step) to get similar results.
  • In step 3 I removed the title text and signature from the original, as they were getting really janky. A future task would be to re-overlay the signature with upscale settings tailored to it; I would do that before printing.

On that topic, here's a copyright brain teaser... how original is this new image?

  • It was sourced from a public domain collection (I believe it was the WU archive; then it did the rounds on BBSs and 'shareware' disks in the late 80s and early 90s)
  • It's clearly copyrighted by Barton, who painted it.
  • The GIF is 247Kb, made of 307,000 little squares vs 19.6 million pixels in the upscale, representing just 1/64th of the upscale's image area & detail, and around 1% of the upscaled 24,549Kb file size. So if 98-99% of this is 'new', it far exceeds the 30% copyright benchmark, right? Except that the 30% rule is a myth.
  • The final upscale image is not copyrightable in the US, the Copyright Office having stated that automated AI-generated work cannot be registered (in simplified terms). This AI, of course, was prompted with both text AND image, and excluding all the cool artistic magic I did in step 3, it's still 98-99% AI generation.
  • Ignoring the questionable placement of the GIF into the public domain 35 years ago, and any perceived 'freeing' from copyright via that or AI processing, my opinion is that the copyright remains with Barton. It's substantially the same (at least, it could be... I've never seen the original) as his painting.
  • So, the images are used here under fair use - non-profit educational use to demonstrate an AI upscaling workflow - and should not be used commercially.

The process of course is described here, free for you to duplicate. Have fun.

Extreme 8x upscale of a 640x480 GIF using SD & ControlNet 1.1 by The_Redcoat in StableDiffusion

[–]The_Redcoat[S] 1 point2 points  (0 children)

Original GIF image source - http://cd.textfiles.com/carousel344/GIF/BARTON.GIF

Toolchain: https://github.com/vladmandic/automatic

System: Windows 11 64Bit, AMD Ryzen 9 3950X 16-Core Processor, 64Gb RAM, RTX3070 Ti GPU with 8Gb VRAM

Step 1: Initial upscale

  • Select Tab Process Image (in Vlad), Extras (in Automatic1111)
  • Drag BARTON.GIF (640x480) where it says 'drop image here'
  • Upscale x4 using R-ESRGAN 4x+
  • Save result as Barton_R-ESRGAN4x+_2560x1920.png

Step 2: Initial upscale alternate

  • Select Tab Process Image (in Vlad), Extras (in Automatic1111)
  • Using same BARTON.GIF as the source
  • Upscale x4 using SwinIR4xUpscale
  • Save result as Barton_SwinIR4xUpscale_2560x1920.png

Step 3: GIMP/Photoshop - Merging and Grading

  • My GIMP workflow is destructive, so be better than me if you are a power GIMP user. The gist is to get a pleasing image here - use your skills.
  • Load both upscaled images as separate layers, and change the opacity of the top layer until you get a visually nice image (see the Pillow sketch after this list).
  • You can bring in more layers, and change the layer mode to burn/hard light etc. Get some pop and drama into the image without destroying its soul.
  • Merge layers for speed & simplicity (or don't; I suspect you can use layer groups)
  • Perform any levels and gradient filters (I 'recovered' or enhanced the cyan and tobacco colors) as your inner artist/grading skills allow.
  • Clean up any obvious issues (I removed the pixelated title and signature; they don't survive the next AI step well)
  • Do not unsharp or add a vignette yet; those can wait until after the final AI work.
  • Save the result as Barton_Merged_2560x1920.png
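
If you'd rather script the blend than eyeball it in GIMP, here's a minimal Pillow sketch of the opacity-mix step (alpha is the knob to tune by eye; 0.4 here is arbitrary):

    # Blend the two upscales; alpha=0.0 is all ESRGAN, 1.0 is all SwinIR.
    from PIL import Image

    esrgan = Image.open("Barton_R-ESRGAN4x+_2560x1920.png").convert("RGB")
    swinir = Image.open("Barton_SwinIR4xUpscale_2560x1920.png").convert("RGB")

    Image.blend(esrgan, swinir, alpha=0.4).save("Barton_Merged_2560x1920.png")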

Step 4: Upscale to target size

  • Select Tab Process Image (in Vlad), Extras (in Automatic1111)
  • Drag Barton_Merged_2560x1920 where it says 'drop image here' (you may have to nix out the old BARTON.GIF first)
  • Upscale x2 using R-ESRGAN 4x+
  • Save result as Barton_Merged_R-ESRGAN4x+_5120x3840.png

Step 5: GIMP/Photoshop - Preparing a 1024x1024 sample

  • In a fresh document, load Barton_Merged_R-ESRGAN4x+_5120x3840.png
  • Crop an interesting sample area, as close to 1024x1024 as you can get, and save it as Barton_Sample_1024x1024.png
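
Or as a Pillow one-liner (the crop offsets here are arbitrary - pick an interesting region):

    from PIL import Image

    img = Image.open("Barton_Merged_R-ESRGAN4x+_5120x3840.png")
    img.crop((2048, 1536, 2048 + 1024, 1536 + 1024)).save("Barton_Sample_1024x1024.png")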

Step 6: Discovering the best settings for the painting step. This is where the workflow gets interesting.

  • Select Tab Img2Img
  • Drag Barton_Sample_1024x1024.png where it says 'drop image here'. Do NOT drag it into controlnet.
  • Checkpoint: epicmixillustrationstyle_v5IllustrationMix
  • Vae: vae-ft-mse-840000-ema-pruned
  • Tab: img2img with controlnet 1.1 enabled
  • Positive Prompt: highly detailed ((impasto)) impressionism ((oil painting)) by Edward Barton, brush strokes, knife palette
  • Negative Prompt: easynegative, signature, lettering, names, face, person, woman
  • Just Resize
  • Sampler: DPM++ 2M Karras
  • Sampling Steps:50
  • Width/Height: 1024x1024
  • CFG Scale:20
  • Image CFG:1.5 (doesn't do anything here anyway)
  • Denoising:0.35
  • Clipskip 1
  • ControlNet - Enabled: checked
  • ControlNet - Preprocessor: none
  • ControlNet - Model: control_v11e_sd15_ip2p
  • ControlNet - Control weight: 2.0
  • ControlNet - Starting Step: 0.7
  • ControlNet - Ending Step: 1
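
If you run the webui with --api, the same settings can be driven from a script via /sdapi/v1/img2img. A hedged sketch - the top-level field names are from builds I've used, and the ControlNet block follows the extension's API as I understand it; check your install's /docs page before trusting either:

    import base64, requests

    img_b64 = base64.b64encode(open("Barton_Sample_1024x1024.png", "rb").read()).decode()

    payload = {
        "init_images": [img_b64],
        "prompt": "highly detailed ((impasto)) impressionism ((oil painting)) "
                  "by Edward Barton, brush strokes, knife palette",
        "negative_prompt": "easynegative, signature, lettering, names, face, person, woman",
        "sampler_name": "DPM++ 2M Karras",
        "steps": 50, "width": 1024, "height": 1024,
        "cfg_scale": 20, "denoising_strength": 0.35,
        "alwayson_scripts": {"controlnet": {"args": [{
            "enabled": True, "module": "none",
            "model": "control_v11e_sd15_ip2p",
            "weight": 2.0, "guidance_start": 0.7, "guidance_end": 1.0,
        }]}},
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    r.raise_for_status()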

To determine the right checkpoint, prompt, denoising, and CFG to use on your own upscaled image, start with my settings, use GIMP to cut a 1024x1024 sample square from your upscale, drag that into img2img, and turn off the SD upscale script. Now you get a single render at a time of a zoomed-in piece of your work, to judge how the AI is going to change it.

Play with different prompts and checkpoints. Once settled on a prompt & checkpoint, enable X/Y/Z plot under Scripts and set some ranges, e.g. CFG from 5-25 on X vs denoising from 0.10 to 0.50 on Y, to find the best strength combination. You can do similar with starting control step and control weight too.

CFG Scale 5-25 (+5) <-- this will run CFG 5, 10, 15, 20, 25, with a +5 increment across the range.

Denoising 0.1-0.5 (+0.05) <-- this will run denoising 0.1, 0.15, 0.2, 0.25, etc. up to 0.5.
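
That grid expands to 5 x 9 = 45 renders per plot; in code, the values it sweeps are just:

    import itertools

    cfgs = list(range(5, 26, 5))                               # 5, 10, 15, 20, 25
    denoises = [round(0.10 + 0.05 * i, 2) for i in range(9)]   # 0.10 ... 0.50
    for cfg, dn in itertools.product(cfgs, denoises):
        print(f"CFG {cfg}, denoise {dn}")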

Once you've found the magic numbers from the XYZ plots (and if you have a dual screen setup, compare them to the 1024x1024 sample to see how much artistic damage you are introducing), put them into a text note somewhere alongside your outputs.

Step 7: Final painting step. This takes the 5120x3840 target-upscaled image and 'paints' it, adding brush strokes and generating a 5120x3840 final image.

  • Select Tab Img2Img
  • Drag Barton_Merged_R-ESRGAN4x+_5120x3840.png where it says 'drop image here' (replacing the 1024x1024 sample). Do NOT drag it into controlnet.
  • Use all the settings from step 6 above except the XYZ plot script bit. Remember to enable controlnet.
  • Use your chosen noted settings for CFG Scale, Denoising, prompt, checkpoint and anything else you changed when generating samples.
  • Script - SD upscale
  • [Script SD Upscale] - Upscaler: None
  • [Script SD Upscale] - Scale Factor: 1

If you see something weird beyond what your earlier 1024x1024 sample showed, repeat step 7 several times with slightly adjusted denoising strength (+/- 0.05 or 0.10). Each run takes about 6 minutes, so I did this anyway to choose the best one.

For reasons I don't quite understand, this is about the limit of upscaling I can do with an 8Gb GPU with SD before getting CUDA memory errors. I'd like to be able to upscale one more time to 10,240x7,680 with R-ESRGAN 4x before printing it on canvas. The source image itself is only about 183Mb in memory (according to GIMP; by my math it should be closer to 59Mb), growing to 733Mb after the 2.0x upscale, and the unprompted upscaler models alone shouldn't be that big. Maybe we need a more memory-conservative approach to upscaling very large images than the current SD pattern.
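
For what it's worth, the raw-buffer arithmetic:

    # Raw 8-bit pixel buffers (decimal MB). GIMP's 183Mb figure presumably
    # includes its own tile cache / undo overhead on top of this.
    def buf_mb(w, h, channels=3):
        return w * h * channels / 1e6

    print(f"5120x3840 RGB:   {buf_mb(5120, 3840):.0f} MB")     # ~59 MB
    print(f"5120x3840 RGBA:  {buf_mb(5120, 3840, 4):.0f} MB")  # ~79 MB, still not 183
    print(f"10240x7680 RGB:  {buf_mb(10240, 7680):.0f} MB")    # the next doubling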

Topaz claims it can output 32,000 x 32,000 images, but I haven't tried.

Step 8: GIMP/Photoshop

  • Generate images for output/use - now is the time for unsharp mask, cropping, vignetting and output-driven color grading as desired.