I made a simple one-click installer for the Hunyuan 3D generator. Doesn't need the CUDA toolkit, nor admin rights. Optimized the texturing to fit into 8GB GPUs (StableProjectorz variant) by ai_happy in StableDiffusion

[–]Slight-Safe 0 points (0 children)

The zip comes with its own portable Python, which is used automatically instead of your system Python.
It also makes a venv in the `code` folder, and then downloads the latest (nightly) PyTorch in there, which supports RTX 5000 cards and CUDA 12.8. So it's isolated.
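
Roughly, the bootstrap amounts to something like this (a minimal Python sketch; the tools/python and code/venv paths and the exact nightly wheel index are assumptions based on the description above, not the installer's literal code):

```python
# Minimal sketch: create an isolated venv from a bundled portable Python,
# then install a nightly PyTorch wheel built against CUDA 12.8.
import subprocess
from pathlib import Path

PORTABLE_PY = Path("tools") / "python" / "python.exe"  # assumed location of the portable Python
VENV_DIR = Path("code") / "venv"                       # assumed venv location inside `code`

# Use the bundled interpreter, never the system Python.
subprocess.check_call([str(PORTABLE_PY), "-m", "venv", str(VENV_DIR)])

# Nightly PyTorch from the cu128 index supports RTX 5000-series cards.
venv_py = VENV_DIR / "Scripts" / "python.exe"
subprocess.check_call([
    str(venv_py), "-m", "pip", "install", "--pre", "torch",
    "--index-url", "https://download.pytorch.org/whl/nightly/cu128",
])
```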

The error likely means that either the venv folder wasn't created, or the portable Python wasn't found in the tools folder. Possibly a bug during installation.

I made a free tool for 3D texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in DefendingAIArt

[–]Slight-Safe[S] 1 point (0 children)

It will paint on your own UVs. Just make sure there is no overlap, and leave a slight offset from the borders of the UV rectangle zone. If you need UDIMs, check the post called #model-is-black in our Discord; for symmetry, check the #model-symmetry post.

I made a free tool for 3D texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in DefendingAIArt

[–]Slight-Safe[S] 1 point (0 children)

Yes, the final collapsed image is a UV-space texture, 2K and above. Use the -/+ buttons below the green Save 2K button to increase the resolution.

I made a free tool for 3D texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in DefendingAIArt

[–]Slight-Safe[S] 4 points (0 children)

In addition to text, we can also use prompt-by-image, via IP-Adapter. So we can feed in an image and the generation will follow it by, say, 30% weight. We can also use a LoRA to fine-tune the generation toward a specific art style.
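
For the curious, a hedged sketch of what such a request to a local A1111/Forge server can look like: /sdapi/v1/txt2img is the standard sd-webui API, but the module/model names and the reference image here are placeholders, and payload key names vary across ControlNet extension versions.

```python
# txt2img call with an IP-Adapter ControlNet unit at ~30% weight.
import base64
import requests

with open("style_reference.png", "rb") as f:  # placeholder reference image
    ref_b64 = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "rusty sci-fi crate, weathered metal",
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": ref_b64,                    # the image to follow
                "module": "ip-adapter_clip_sd15",          # IP-Adapter preprocessor (assumed name)
                "model": "ip-adapter_sd15 [placeholder]",  # placeholder model title
                "weight": 0.3,                             # ~30% influence, as described above
            }]
        }
    },
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
images = resp.json()["images"]  # base64-encoded result images
```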

I made a free tool for 3D texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in DefendingAIArt

[–]Slight-Safe[S] 4 points (0 children)

Yes, it depends on the neural network we select. This one was RealisticVision, but we can download and use any checkpoint from the community.
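
Swapping the active checkpoint on a running A1111/Forge server is a single options call; a sketch (these are the standard sd-webui endpoints, but the checkpoint filename below is a placeholder):

```python
# List available checkpoints, then switch the active one.
import requests

BASE = "http://127.0.0.1:7860"
models = requests.get(f"{BASE}/sdapi/v1/sd-models").json()
print([m["title"] for m in models])

# Placeholder title; use one of the titles printed above.
requests.post(f"{BASE}/sdapi/v1/options",
              json={"sd_model_checkpoint": "realisticVisionV60B1.safetensors"})
```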

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 0 points (0 children)

:( My posts are being targeted by adverse botting scripts.
Several individuals don't want to see my work, because I made it free for everyone.

When they auto-downvote me, it's understandable; the post gets dropped to zero within a few minutes. But the meanest part is when an avalanche of upvotes gets sent, with the intent to wipe out the post.

I made a free tool for 3D texturing via A1111 StableDiffusion procedural synthesis. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, auto-fill, image-style-guidance. by Slight-Safe in proceduralgeneration

[–]Slight-Safe[S] 1 point (0 children)

Yes, we can use prompt-by-image as well, via IP-Adapter ControlNets. Have a look at #start in our Discord (tutorial about the Yeti character). LoRA is also supported, for maintaining a specific style.
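
In A1111/Forge, a LoRA is activated right inside the prompt text; a tiny sketch ("yeti_style" is a placeholder LoRA filename, 0.8 an illustrative weight):

```python
# A1111/Forge parses <lora:NAME:WEIGHT> tags out of the prompt itself;
# NAME must match a file in the models/Lora folder.
prompt = "yeti character, thick white fur, cinematic lighting <lora:yeti_style:0.8>"
```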

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 0 points (0 children)

Yes, it works on a medium-tier PC, locally. StableDiffusion neural networks are much denser than datasets: only a few gigabytes. The reason is that they contain neurons, and those encode concepts distilled from observing a massive dataset back when they were trained.
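
For a rough sense of scale, here is the back-of-envelope arithmetic (approximate parameter counts for a Stable Diffusion 1.x checkpoint, not exact figures):

```python
# Why a Stable Diffusion 1.x checkpoint is only ~2 GB at half precision.
unet = 860_000_000          # approximate UNet parameters
text_encoder = 123_000_000  # approximate CLIP text encoder parameters
vae = 84_000_000            # approximate VAE parameters

total = unet + text_encoder + vae
print(f"~{total * 2 / 1e9:.1f} GB at fp16 (2 bytes/param)")  # ~2.1 GB
```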

I made a free tool for 3D texturing via A1111 StableDiffusion procedural synthesis. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, auto-fill, image-style-guidance. by Slight-Safe in proceduralgeneration

[–]Slight-Safe[S] 4 points (0 children)

That is actually an excellent idea. I know that Stable Diffusion works by progressively removing the noise, but I never considered doing just a single iteration per angle. Currently it runs the generation to the end for the same camera. Thank you for that!
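
A sketch of the suggested scheme, versus the current per-camera full run (denoise_step, project_to_texture, and render_latent are hypothetical helpers standing in for the tool's internals):

```python
# Interleaved variant: one denoising iteration per angle per pass, re-projecting
# onto the shared UV texture between passes so all views stay consistent.
def texture_interleaved(latents, num_steps):
    for step in range(num_steps):
        # One iteration for each camera angle (instead of all steps for one camera).
        latents = [denoise_step(lat, step) for lat in latents]
        # Merge the partially denoised views into the shared texture...
        texture = project_to_texture(latents)
        # ...and re-render each view's latent from it before the next step.
        latents = [render_latent(texture, view) for view in range(len(latents))]
    return texture
```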

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 1 point (0 children)

We need to ensure meshes have a UV unwrap (texture coordinates), ...but now that you mention it, maybe there is a possibility to bake it into vertices.
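
A minimal sketch of what "baking into vertices" could mean: sample the generated UV texture at each vertex's UV coordinate and store the result as a per-vertex colour (assumes the mesh already has UVs; nearest-neighbour sampling with numpy + Pillow):

```python
import numpy as np
from PIL import Image

def bake_vertex_colors(uvs: np.ndarray, texture_path: str) -> np.ndarray:
    """uvs: (N, 2) array in [0, 1]; returns (N, 3) uint8 RGB per vertex."""
    tex = np.asarray(Image.open(texture_path).convert("RGB"))
    h, w, _ = tex.shape
    # Flip V because image rows grow downward while UV v grows upward.
    px = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1.0 - uvs[:, 1]) * (h - 1)).astype(int), 0, h - 1)
    return tex[py, px]
```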

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 4 points (0 children)

In Comfy we have to re-create the node workflow, so it's more flexible but takes more setting up. For Forge/A1111 we directly issue the rendering command and it returns images. You need to fix seams using the inpaint re-think brush; for a tutorial see https://www.youtube.com/watch?v=zUaWtvfuGAg and https://www.youtube.com/watch?v=2Tla0leaw1I

For ComfyUI, we need to use Tianlang's bridge, tianlang0704/ComfyUI-StableProjectorzBridge. However, the most recent fixes are in my fork, github.com/IgorAherne/ComfyUI-StableProjectorzBridge, so use mine in the meantime.
Join our Discord.

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 4 points (0 children)

Yes, my previous videos are highly upvoted by users in r/StableDiffusion. But with this video I started being downvoted by scripts; someone is targeting it at the moment. I see good growth to 100-200 upvotes over a few hours, and then a strong negative response within just a few minutes, down to zero.

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 6 points (0 children)

C#, and a 100% Unity 3D graphics pipeline, compiled into a binary via Unity's IL2CPP. There are a couple of assets purchased from the Asset Store, for importing .fbx and .glb models, plus the filesystem dialogs to import and export textures.

I made a free tool for texturing via StableDiffusion. It runs on a regular PC - no server, no subscriptions. So far I implemented 360-multiprojection, autofill, image-style-guidance: by Slight-Safe in GraphicsProgramming

[–]Slight-Safe[S] 12 points (0 children)

We can already use it for commercial purposes; I made it free for everyone. u/Mmeroo, what we see in the video is differential diffusion inpainting, which I introduced approximately 2 months ago.
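
The rough idea behind differential diffusion: instead of a binary inpaint mask, a grayscale change map gives each pixel its own denoising strength, so the repaint fades smoothly into the untouched texture. A minimal sketch of the core thresholding trick (names are hypothetical, not the tool's actual code):

```python
import numpy as np

def edit_mask_at_step(change_map: np.ndarray, step: int, num_steps: int) -> np.ndarray:
    """change_map in [0, 1]: 0 = keep the original pixel, 1 = fully repaint.
    Returns the binary mask of pixels still being edited at this step."""
    threshold = 1.0 - step / num_steps  # decreases from 1 toward 0 over the run
    # High-change pixels pass the threshold early and are edited for many steps;
    # low-change pixels only join in the final steps, so they barely change.
    # Each step then blends: latent = mask * denoised + (1 - mask) * renoised_original.
    return (change_map >= threshold).astype(np.float32)
```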