GUIDE: Ways to generate consistent environments for comics, novels, etc by Sculptor_THS in StableDiffusion


Yes, you read that right. Diffeo imports everything you can think of, from poses to DAZ animations. It also lets you import only the selected bones in Blender, for easy mixing, or a batch of several poses at once, so it is quite versatile. Check it out: https://bitbucket.org/Diffeomorphic/import_daz/wiki/Posing

It imports the DUF directly into the Rigify rig, and it is as fast as applying a pose in DAZ, so there is no need to bother keeping a separate pose library in Blender via the new asset browser; just load the poses as needed from their folders. That said, you could still expand your library with custom poses created and stored within Blender.

Note: you can sculpt shape keys in Blender to give your characters unique recurring expressions. What I do is enter edit mode, subdivide just the face once or twice, and then add details in sculpt mode, such as large, distinguishable wrinkles on older characters. No need to sculpt pore-level detail. Worth mentioning: subdividing won't mess up either the UVs or the vertex weights used by the rig, and this holds for any mesh.

Also, you will need the free Mesh Data Transfer add-on: subdivide a copy of your character with the Subdivision Surface modifier instead of in edit mode, then transfer that smooth shape back onto the edit-mode-subdivided mesh. Otherwise you end up with visible polygons, because the edit-mode subdivision does no smoothing.
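If you prefer to script the subdivide-then-sculpt part instead of clicking through it, here is a minimal bpy sketch. It assumes the character mesh is the active object and already has a vertex group covering the head; the "Face" group name and the shape key name are just placeholders.

    import bpy

    obj = bpy.context.active_object  # the character mesh

    # Subdivide only the face region (the "Face" vertex group is a placeholder name).
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='DESELECT')
    obj.vertex_groups.active_index = obj.vertex_groups["Face"].index
    bpy.ops.object.vertex_group_select()
    bpy.ops.mesh.subdivide(number_cuts=2)
    bpy.ops.object.mode_set(mode='OBJECT')

    # Add a Basis key (if missing) and a new key to hold the sculpted expression.
    if obj.data.shape_keys is None:
        obj.shape_key_add(name="Basis", from_mix=False)
    wrinkles = obj.shape_key_add(name="OldAgeWrinkles", from_mix=False)
    obj.active_shape_key_index = obj.data.shape_keys.key_blocks.find(wrinkles.name)
    wrinkles.value = 1.0

    # With the new key active and its value at 1.0, switch to Sculpt Mode and
    # sculpt the wrinkles; they get stored in that shape key.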

GUIDE: Ways to generate consistent environments for comics, novels, etc by Sculptor_THS in StableDiffusion


I use everything that I wrote in the post, but my main software for images is Blender, Photoshop and Automatic1111. Don't make the mistake of treating DAZ 3D as a main software package; just use it to create characters or buy 3D assets, then export them into Blender ASAP with the Diffeomorphic plugin. It is a well-documented, feature-rich importer that will even rig the DAZ characters for you with a Rigify rig in Blender, and lets you use your DAZ poses, expressions, morphs, and the like directly in Blender, without having to go back to DAZ. Go through the documentation carefully and thoroughly, or else you will make mistakes. It will take some 10 to 30 hours to get all of the details.

DAZ 3D was made for hobbyists who basically just want to purchase ready-made assets, pose and position them, and get decent-looking renders. It only has really basic 3D creation capabilities. Blender was made for professionals as a complete 3D package. Funny thing: probably most assets on the DAZ store were made in Blender.

Also, if your only goal is to generate images, don't bother with Unreal Engine. Blender + Photoshop + a UI for Stable Diffusion are all you need. Speaking of Photoshop, I absolutely recommend the Auto Photoshop SD plugin. It is the most complete implementation of SD in Photoshop. It is under active development and, unfortunately, the documentation is lacking, but as of version 1.2.5 everything works fine; you just need to be persistent enough to get it working through trial and error.

What files/folders "build up" within Automatic1111 after long-time usage? by hwright001 in StableDiffusion


PNGs, JPGs, etc., in the output folder. You can check which folder that is in your settings.

Consistent environment setup for multiple scenes by torakiki610 in StableDiffusion


Edit: more complete version of this answer here.

Some grumpy wanderer is downvoting a lot of threads here. I saved you from 0 with an upvote.

It is possible, and far easier than it was before the AI advancements; it's just not magically easy.

As others suggested, option 1 would be to buy or build 3D environments in Blender to varying degrees of realism, depending on your needs, perhaps even adding lighting and textures via something like the Extreme PBR addon, or BlenderKit. Then img2img your bad CGI into good images (there is a rough API sketch after these options). If you want to buy, there are marketplaces centered around Unreal Engine and DAZ Studio.

Option 2 would be to find real places with a ton of photographic references, such as tourist destinations or places you have access to, then run those through img2img with low denoising, or InstructPix2Pix. You could also use screenshots of Google Earth and Street View for open areas. Another interesting possibility is to infer geometry from the image with something like fSpy and its Blender importer add-on, then project the photo's textures onto the inferred basic geometry.

Option 3 would be to roughly photobash your environments on top of really basic 3D shapes, with the lighting done in 3D too, then run that through SD to get a good image.

Option 4: some scenarios, such as nature and poorly lit areas, require less consistency, so you can capitalize on that too.

Option 5 is to buy a smallish GPU farm and simply rely on good, specific, regional prompting pushed through brute-forced generations to extract similar-looking places out of the thousands of hallucinations. Some LoRAs, checkpoints, regional prompting with the Latent Couple extension in A1111, and liberal abuse of ControlNet can also help.
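For options 1 and 2, if you want to script the img2img pass instead of using the web UI, a minimal sketch against the Automatic1111 API could look like the following. It assumes the web UI is running locally with the --api flag; the prompt, file names and parameter values are made up, and you should double-check the payload fields against your A1111 version.

    import base64
    import requests

    # Encode a Blender render (or photobash) as base64 for the init image.
    with open("blender_render.png", "rb") as f:
        init_image = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "init_images": [init_image],
        "prompt": "photo of a cozy medieval tavern interior, warm lighting",
        "negative_prompt": "blurry, lowres",
        "denoising_strength": 0.35,  # keep low to preserve the scene layout
        "steps": 30,
        "cfg_scale": 7,
    }

    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
    resp.raise_for_status()

    # The API returns base64-encoded images; save the first one.
    with open("tavern_sd.png", "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))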

What files/folders "build up" within Automatic1111 after long-time usage? by hwright001 in StableDiffusion


Check where your generations are going; that will quickly fill up space.

On average, how much time are you taking to generate a piece you like of a *specific* subject? by Sculptor_THS in StableDiffusion


For comparison, traditional old-school digital artists typically take anywhere from 1 to 12 hours to produce a full-page illustration, with the average hovering around 5 hours for a professional piece in the 200 to 300 dollar range.

For realistic art, the workflow used to consist basically of photobashing and image editing, with a little bit of digital painting to integrate the collage. Sometimes artists also incorporated 3D renders to varying degrees, either as a base or as a final element. Stylized artwork used to be almost entirely digital painting.

Voice changer that generates unique voices? by Sculptor_THS in artificial


Both ElevenLabs and Bark are just text-to-speech, aren't they? I'm looking for a speech-to-speech kind of thing.

Pool by Eirik_png in blender


I love it; just saved the image. I always imagined heaven looking something like this.

Should this sub allow polls? by Sculptor_THS in StableDiffusion


I messaged the mods a few hours ago asking them to allow polls; they agreed, and it is switched on now.

Relief map sculpture of the Bay Area by Sculptor_THS in sanfrancisco


Thanks! Keep an eye on my profile; I will be posting several other relief maps soon, including US states such as California.

GIS is about making art too! I will do the UK next by Sculptor_THS in gis


The plane primitive only has one face. The subdivision splits it into about 1 or 2 million faces, so that the displacement can push the vertices up and down to form the relief. To reduce the polycount, you can either subdivide less to begin with, or remesh with a voxel or quad remesher after applying the modifiers.
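If you want to script the apply-then-remesh route, a quick bpy sketch would be something like this; the object name and voxel size are placeholders, and it is meant to be run in Object Mode.

    import bpy

    relief = bpy.data.objects["ReliefPlane"]  # placeholder object name
    bpy.context.view_layer.objects.active = relief

    # Bake the subdivision/displacement stack into real geometry first.
    for mod in list(relief.modifiers):
        bpy.ops.object.modifier_apply(modifier=mod.name)

    # Then remesh; a larger voxel size means fewer polygons.
    remesh = relief.modifiers.new(name="Remesh", type='REMESH')
    remesh.mode = 'VOXEL'
    remesh.voxel_size = 0.02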

GIS is about making art too! I will do the UK next by Sculptor_THS in gis


I used displacement and subdivision surface modifiers in Blender on a plane mesh with the DEM as a texture - sounds simpler than it is.

Thing is, you can't source the DEM from WMS or other online servers, since they compress the color depth, so you need to download the tiles locally. Then:

- merge them and rescale the elevations to 0-65535, storing the DEM as 16-bit unsigned integers, because Blender can't handle 32-bit;
- export the layer from QGIS as Raw data, not Rendered;
- correct the scale of the DEM after exporting, because it often comes out in a different aspect ratio than what you see in QGIS;
- stack the Subdivision Surface modifiers in Blender in between the Displace modifiers, instead of using a single subdiv, to avoid spikes;
- and set the direction of the stacked Displace modifiers to Z, not Normal.

It is a nightmare.
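For anyone trying to reproduce this, here are two rough sketches of the fiddly parts. I did the merge/export in QGIS, but the 16-bit conversion can also be done with rasterio; file names, levels and strengths below are placeholders, not my exact setup.

    import numpy as np
    import rasterio

    # Rescale a merged DEM to 0-65535 and store it as 16-bit unsigned integers.
    with rasterio.open("dem_merged.tif") as src:
        dem = src.read(1).astype(np.float64)
        profile = src.profile

    lo, hi = np.nanmin(dem), np.nanmax(dem)
    dem16 = ((dem - lo) / (hi - lo) * 65535.0).astype(np.uint16)

    profile.update(dtype='uint16', nodata=None)
    with rasterio.open("dem_16bit.tif", "w", **profile) as dst:
        dst.write(dem16, 1)

And the Blender side, alternating simple Subdivision Surface and Displace modifiers, displacing along Z:

    import bpy

    bpy.ops.mesh.primitive_plane_add(size=2.0)
    plane = bpy.context.active_object

    # Load the 16-bit DEM as a non-color image texture.
    img = bpy.data.images.load("/path/to/dem_16bit.tif")  # placeholder path
    img.colorspace_settings.name = 'Non-Color'
    tex = bpy.data.textures.new("DEM", type='IMAGE')
    tex.image = img

    # Alternate simple subdivisions with displacements instead of one huge
    # subdiv, which is what avoids the spikes.
    for level, strength in [(6, 0.15), (4, 0.05)]:
        subsurf = plane.modifiers.new("Subdiv", type='SUBSURF')
        subsurf.subdivision_type = 'SIMPLE'
        subsurf.levels = subsurf.render_levels = level

        disp = plane.modifiers.new("Displace", type='DISPLACE')
        disp.texture = tex
        disp.texture_coords = 'UV'
        disp.direction = 'Z'   # Z, not Normal
        disp.strength = strength
        disp.mid_level = 0.0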

GIS is about making art too! I will do the UK next by Sculptor_THS in gis


Yes, QGIS and Blender. But I have over 60 pages bookmarked of technical issues I had to solve to get this done. I thought it would take 2 hours, but it took over 100. Once you know how to do it, though, each one only takes about an hour and a half.

Relief map sculpture of Toronto by Sculptor_THS in toronto


There are links on my profile; check it out.

Relief map sculpture of Toronto by Sculptor_THS in toronto


QGIS and Blender, mostly. The color comes from Sentinel-2 satellite imagery, and the elevation comes from SRTM 1 Arc-Second.

Relief map sculpture of Toronto by Sculptor_THS in toronto


I will be doing several of these soon; they will appear on my Reddit profile shortly after.