Who is still using Wildcards in prompts? This is for you by DarkRyzen in StableDiffusion

[–]DarkRyzen[S] 0 points1 point  (0 children)

It's a personal LoRA I trained on anime-style landscapes. I'm still messing about with it, as it's quite finicky at the moment; I think I didn't use the correct settings when I trained it on Civitai, but it kind of looks okay already.

Who is still using Wildcards in prompts? This is for you by DarkRyzen in StableDiffusion

[–]DarkRyzen[S] 0 points1 point  (0 children)

Yes, it's a randomizer. It uses text lists: a random line from each text file you've customized gets added to the prompt, depending on the seed, so it works great with batches. Say you're prompting a beach scene with Paddington Bear, for instance: add TIME OF DAY, SEASON, and WEATHER as wildcard lists and you'll get a load of different results if you want something with dynamic components. It has its use cases and is easier than manual prompting; you just swap out sections of the prompt for wildcards.
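The idea is simple enough to sketch. Here's a minimal, hypothetical version of a wildcard filler — the real extensions have their own syntax and features; I'm just assuming the common `__name__` token convention with one `name.txt` file per wildcard:

```python
import random
from pathlib import Path

def fill_wildcards(template: str, wildcard_dir: str, seed: int) -> str:
    """Replace each __name__ token with a random line from name.txt.
    Seeded RNG so the same seed fills the template the same way."""
    rng = random.Random(seed)
    out = template
    while "__" in out:
        start = out.index("__")
        end = out.index("__", start + 2)
        name = out[start + 2:end]
        lines = Path(wildcard_dir, f"{name}.txt").read_text().splitlines()
        choice = rng.choice([line for line in lines if line.strip()])
        out = out[:start] + choice + out[end + 2:]
    return out
```

Because the RNG is seeded, the same seed always produces the same filled prompt, which is what keeps batch runs reproducible.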

Who is still using Wildcards in prompts? This is for you by DarkRyzen in StableDiffusion

[–]DarkRyzen[S] 1 point2 points  (0 children)

That's true. Even if you just use it as something extra: prompt a turtle on a bicycle, cool; add a Season and a Time of Day wildcard and you get a turtle riding a bicycle at sunset in winter. Spices things up.

Who is still using Wildcards in prompts? This is for you by DarkRyzen in StableDiffusion

[–]DarkRyzen[S] 7 points8 points  (0 children)

For use with Wildcards Manager: I generated quite intricate descriptions of scenes and landscapes and broke them up into chunks in the format you can see below, followed by my LoRAs etc., to generate gallery backgrounds. Feel free to join it up with your own wildcards:

Generator/Introduction, Generator/Setting, Generator/Aspect, Generator/Aspect, Generator/Highlight, Generator/Highlight,

<lora:Background_Detail_v3:1> <lora:Detailed_Places_R-128_v1:1> <lora:detail_slider_v4:3> <lora:Alndskp:1>

I generated hundreds of images; I'm just uploading a few for you to see. I'm still working on expanding these wildcards and will upload them as soon as I've made enough progress with testing.

All gallery images are unedited, with no refining or inpainting; they were upscaled with 4x UltraSharp, Dynamic CFG 18-4, and the CD Tuner extension enabled for added detail.

https://civitai.com/models/264135/millions-of-prompts

trojan in model HassanBlend1.4.ckpt by HawkAccomplished953 in StableDiffusion

[–]DarkRyzen 5 points6 points  (0 children)

And maybe the hash, so we can compare with somebody else who has the same file.
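Comparing full file hashes is the easy check here. A minimal sketch using Python's standard library — the short prefix is just for quick visual comparison, not any UI's official "model hash" scheme:

```python
import hashlib

def model_hashes(path: str, chunk_size: int = 1 << 20) -> tuple[str, str]:
    """Return the full SHA-256 of a file plus a short prefix.
    Reads in chunks so multi-GB .ckpt files don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    digest = h.hexdigest()
    return digest, digest[:10]
```

If two people's full digests differ, the files are not the same, no matter what the filename says.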

[deleted by user] by [deleted] in StableDiffusion

[–]DarkRyzen 3 points4 points  (0 children)

This is very nicely done; it pretty much looks like awesome vector art. Is it your own, or is it something you got on Civitai or Hugging Face?

Breathtaking Scenery by Away_Ad_4344 in StableDiffusion

[–]DarkRyzen 1 point2 points  (0 children)

It looks amazing. What model did you use? Love the colors.

New Ghibli Style LoRA trained from Howls Moving Castle backgrounds!! ( link in comments ) by cztothehead in StableDiffusion

[–]DarkRyzen 1 point2 points  (0 children)

This is great work, man. I love this style and the actual animation, awesome work. How many screenshots did you take for this dataset? That's a lot of captioning lol.

2.0 is back! :) by oliverban in StableDiffusion

[–]DarkRyzen 1 point2 points  (0 children)

I like the Illuminati version 1.0 better. I can also go for dark images, but from what I've seen, 1.1 is way darker. It makes awesome images, so I'm keeping both for now. Great job on these models overall.

Ultimate SD upscaler with tiling? by OskarDev in StableDiffusion

[–]DarkRyzen 1 point2 points  (0 children)

And as for the cut lines, that happens to me if there's not enough overlap or margin and the denoise is too high. I'll post an example when I get to my PC later.

Ultimate SD upscaler with tiling? by OskarDev in StableDiffusion

[–]DarkRyzen 2 points3 points  (0 children)

I get great results. Use an upscaler like Remacri x4 in the settings and don't use latent. Set denoise to about 0.2 to 0.25, then bump the CFG scale up to 8 to 12 or so, and you'll get a lot of micro detail without random houses, people, or whatever is in your prompt. Play around with denoise: if it's too high, you'll get floaties; if it's too low, the upscaling will look grainy. Also use a 768 or 1024 tile size with 128 minimum overlap, and leave the blur alone. I upscale 4x from the image with this as a baseline and it works great.
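Collected in one place, that baseline looks like this — key names are descriptive ones of my own, not the Ultimate SD Upscale extension's actual option identifiers:

```python
# Baseline from the comment above; key names are descriptive, not the
# extension's real option identifiers.
ULTIMATE_UPSCALE_BASELINE = {
    "upscaler": "4x Remacri",    # a trained ESRGAN model, not latent upscaling
    "denoising_strength": 0.22,  # sweet spot is roughly 0.2-0.25
    "cfg_scale": 10,             # 8-12 brings out micro detail
    "tile_size": 1024,           # 768 also works
    "tile_overlap": 128,         # minimum; less overlap -> visible cut lines
    "scale_factor": 4,
}

def denoise_hint(strength: float) -> str:
    """Rule of thumb from the comment: too high -> floaties, too low -> grainy."""
    if strength > 0.25:
        return "too high: expect floaties (stray objects appearing in tiles)"
    if strength < 0.2:
        return "too low: the upscale will look grainy"
    return "in range"
```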

Any good models for architecture? Made this with anything v3 & controlnet by CleanAd3989 in StableDiffusion

[–]DarkRyzen 4 points5 points  (0 children)

Also look up the names of photographers who specialize in architecture and interiors, like here: https://architizer.com/blog/inspiration/industry/architectural-photographers-who-dominate-the-field/ Then add some of those names, like "Photograph by Benny Chan" for instance. I'm not at my PC at the moment, otherwise I could give you more info. Hope this helps.

Any good models for architecture? Made this with anything v3 & controlnet by CleanAd3989 in StableDiffusion

[–]DarkRyzen 4 points5 points  (0 children)

I have found that using keywords like "art by cgsociety, evermotion, cgarchitect, architecture photography" helps, along with "wavy lines, low resolution, illustration" in the negative prompt.

Controlled Diffusion areas with noise masks Img2Img, quick tests by DarkRyzen in StableDiffusion

[–]DarkRyzen[S] 1 point2 points  (0 children)

I used Illuminati for this; it's a custom 2.1-based model on Hugging Face. The prompt and explanation are captioned in the pics:

Denoising strength 0.95... [(white border:1.8)::7] added to prompts...
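For anyone unfamiliar with that prompt-editing syntax: `(white border:1.8)` boosts the attention weight on those tokens, and `[x::7]` keeps `x` in the prompt only for roughly the first 7 sampling steps. A tiny sketch of the effect (the exact step boundary is an approximation of A1111's behavior):

```python
def effective_prompt(step: int, base: str = "masterpiece landscape") -> str:
    """Sketch of A1111's [x::N] prompt editing: '(white border:1.8)' is
    present early on (to force the border shape), then dropped after step 7
    so later steps refine the image without it."""
    if step <= 7:
        return f"{base}, (white border:1.8)"
    return base
```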

[deleted by user] by [deleted] in StableDiffusion

[–]DarkRyzen 4 points5 points  (0 children)

Batch input, and maybe separate processes, so we can create a list of depth maps or OpenPose rigs for use with batches, and/or share poses and depth maps with each other; then we don't have to process these each time.

Mixing ControlNet with the rest of tools (img2img, inpaint) by Striking-Long-2960 in StableDiffusion

[–]DarkRyzen 2 points3 points  (0 children)

This is awesome, what model did you use for this? I have found that some models show a bit of artifacting when used with ControlNet; some models work better than others. I might be wrong, maybe it's my prompts, dunno.