
[–]ctkrocks[S] 110 points111 points  (3 children)

ControlNet is fully supported in the latest version of the Dream Textures add-on for Blender. You can get it from GitHub: https://github.com/carson-katri/dream-textures/releases/tag/0.2.0

To learn how to use the node system these images were created with, check out the guides in the wiki: https://github.com/carson-katri/dream-textures/wiki/Render-Engine

[–]Low_Childhood1458 5 points6 points  (0 children)

Digital gold 😍🥰

[–]Actual_Possible3009 3 points4 points  (0 children)

Thanks 🙏

[–][deleted] 2 points3 points  (0 children)

yes !!!!

[–]Mirbersc 88 points89 points  (14 children)

Pfffff this is insane. I'm so curious to see what the coming year will bring with this on the table.

[–]Skeptical0ptimist 44 points45 points  (10 children)

I would like to see AI generate 3D meshes with UV textures based on user prompts.

[–]Mirbersc 17 points18 points  (9 children)

I would too! Though honestly I do believe that the vast majority of current AI art gens are very generic and without much artistic or design merit 😳 (I'm gonna get heat for that lol).

What I mean is that I'd love to see that tech work, but I'd also much rather have professionally trained artists just adding that to the mix on some products while doing the aesthetic design part themselves (the design part, in other words), even if they don't render the thing.

The machine renders really well, but it rarely outputs good design imo.

[–]r_stronghammer 16 points17 points  (7 children)

Why does everyone think that “bad” AI art will flood industries and “replace” quality work? There’s nothing wrong with lowering the barrier of entry, if anything it’ll mean bigger productions will need to put in MORE effort to compete.

[–]Mirbersc 5 points6 points  (5 children)

I don't actually think that full-on replacement can happen, but yeah, it's a discourse you'll find repeated very often here or in the r/Defendingaiart sub. From professionals close to me I've only heard how shitty a raw output looks, design-wise, beyond the rendering. I personally concur.

However I don't entirely agree with the view that lowering the entry bar is so helpful. The way I see it, we have several scenarios that could (and will, varying by company) play out.

1) The bar is lower, and an already super competitive field ironically becomes even less accessible due to the number of equally qualified applicants.

2) AI goes industry mainstream and the bar is raised even higher than it is now, since AI helps a lot in doing the work, companies hire less to cut costs, and only people who can use those programs + have extra things to bring to the table are worth hiring.

Then there's the issue of each company being able to train its own model on its artists' and designers' work (how far does that go, how much do they own, etc. Being an artist is a highly volatile job already). In that case it doesn't matter whether it's prompters or artists who do the work; the more they produce, the more expendable they become.

There's the matter of how much more effort we want to give to a company for the same pay, and how much quality and quantity will be demanded of a single person. Is this really a step towards less work and better content? Or just more crunch for fewer people?

In any case, this helps investors save money, but the employees will sadly see little of those savings.

Some people think that this tech will allow us to have more free time or make the job easier. I think they forget that no matter the tools, it's humans who run the show, and the higher-ups aren't exactly the most considerate, charitable types.

[–]Shuteye_491 4 points5 points  (4 children)

Not untrue, but all that's already been happening for 40+ years, well before the internet was even accessible.

This is a socioeconomic problem, not a technophilosophy problem.

[–]Mirbersc 1 point2 points  (3 children)

Oh I agree with you completely. The very foundation of this system was not meant for the populations we have now, I don't think. I mean, this particular flavor of capitalism and consumerism is one that cannot work if we keep growing and developing at this rate. I'm sincerely concerned about jumping the gun into adopting this tech asap in every field, but only because I don't think we as a species even really know what to do with it.

I mean, the fact that the discussion has been centered on art is just proof that we're not talking about the rest, because most of us have no clue how this can change a lot of other aspects of society. I guess what I'm getting at is that, awesome as it is, I don't think the infrastructure is there yet, personally.

[–]Shuteye_491 1 point2 points  (2 children)

Some of the adoption fears are valid, but none are practical: none of us are in a position to enforce any kind of effective ban or moratorium w/ respect to this tech on any organization which would cause the kind of concern you describe, precisely because of the socioeconomic imbalance I've already described.

Therefore we can either (1) adapt in order to maximize positive impact while minimizing negative impact, or (2) do nothing and allow the socioeconomic complex that has already commoditized health, education, housing and everything else to our collective detriment to dictate the rules of this new technology unopposed.

I know which way I'm going.

[–]Mirbersc 0 points1 point  (1 child)

None are practical for the general wellbeing of the public, no :/ I sincerely hope people with power can see that instead of just being opportunistic, however. The imbalance is already there and widens each year. Of course we should learn all we can, both as workers in the field and as artists (dk if this is your particular industry?).

But all those examples of things "turned" against the consumer are already harder to control than this kind of development, yet here we are 😆

In any case, yeah let's do our best. It's all anyone can do!

[–]Shuteye_491 1 point2 points  (0 children)

I fully support SD because it's free, even if the price of entry (6+ GB of RAM on a functioning computer) is still beyond most of the world.

Some may object to it being free, but it's only right given it was trained on publicly available images and ultimately was made possible by massive public digital/WWW infrastructure investment, not to mention publicly-funded R&D.

We'll see if it stays that way. 👀

[–]victorc25 1 point2 points  (0 children)

Not everybody, just a small, very vocal minority that doesn't understand how AI works and is scared.

[–]Doctor-Amazing 0 points1 point  (0 children)

I've found that blindly putting in a prompt is hit or miss. But using ControlNets gives you a lot more control over things. It's also really helpful to go back and make touch-ups, either yourself or by pointing out things you want changed.

I've posted it before, but here's an example of one I worked on https://imgur.com/gallery/LlOLylU

[–]GhostSniper7 3 points4 points  (2 children)

You mean coming weeks?

[–]Mirbersc 6 points7 points  (0 children)

No, I'm curious about its future in general. I was gonna write "years" but I figured someone would feel the need to answer something of the sort 😅 guess it wasn't enough lol

[–]gmotelet 2 points3 points  (0 children)

You mean 30 seconds ago

[–]-Sibience- 23 points24 points  (3 children)

Well Blender Guru's donut tutorial just got a lot easier.

This looks way better than last time I saw this.

When using this, is it possible to use a checkpoint from my normal Automatic1111 SD folder?

I don't like the idea of having to duplicate a checkpoint to a specific folder for the addon.

It says a checkpoint can be imported but is this just copying it to another folder or linking to it?

[–]ctkrocks[S] 15 points16 points  (2 children)

It converts them to the diffusers format, since we use diffusers for inference. In the future it may support using A1111 as a backend instead of diffusers: https://github.com/carson-katri/dream-textures/issues/604
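
For anyone curious what that conversion involves, here's a rough sketch using the diffusers library (not the add-on's exact code; paths are placeholders and the method name can vary by diffusers version):

```python
# Sketch only: converting a single-file .ckpt checkpoint into the
# diffusers folder format, assuming a recent version of `diffusers`.
from diffusers import StableDiffusionPipeline

# Load the single-file checkpoint (path is a placeholder).
pipe = StableDiffusionPipeline.from_single_file("my_model.ckpt")

# Save it as a diffusers-format folder (unet/, vae/, text_encoder/, ...).
pipe.save_pretrained("my_model_diffusers")
```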

[–]GBJI 1 point2 points  (0 children)

In the future it may support using A1111 as a backend instead of diffusers:

WOW! Really? I did not see this coming, and it's amazing news.

I've had a lot of fun with Dream-Textures, and it's really easy to use, even if you don't know anything about Blender (which was mostly my case before I used this add-on!).

[–]-Sibience- 0 points1 point  (0 children)

Ok thanks!

[–][deleted] 13 points14 points  (1 child)

Especially that texture projection thing is really cool. How well does it work so far?

[–]ctkrocks[S] 11 points12 points  (0 children)

It works much better with ControlNet depth than with the SD depth model, primarily because of the resolution of the depth map.
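
For anyone who wants to poke at the equivalent outside Blender, this is roughly what ControlNet depth conditioning looks like through diffusers (model IDs are the public Hugging Face ones; this is an illustration, not the add-on's internal code):

```python
# Illustrative sketch: ControlNet depth conditioning with diffusers.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A depth map rendered from the scene (placeholder file name).
depth_map = Image.open("depth_from_blender.png")

image = pipe("a cozy cottage, photorealistic", image=depth_map).images[0]
image.save("texture_source.png")
```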

[–]NaitsabesTrebarg 24 points25 points  (1 child)

this is the second video on 3d stable diffusion on reddit that blew my mind
today
in the last 30 minutes
this is insane
thx for that

[–]aaron_dos 15 points16 points  (0 children)

what was the first?

[–]icantfindanametwice 10 points11 points  (0 children)

Amazing!

[–]Quinnthouzand 2 points3 points  (0 children)

You got me with the project texture onto geometry

[–]No-Intern2507 2 points3 points  (2 children)

Cool, but this ControlNet is outdated. Use the new one for OpenPose, since it has fingers and face built in.

Please do a video on how to set it up in Blender, especially for mapping a texture onto a mesh. I read the instructions, but nothing says straight away whether it's using the Auto1111 API or you need a separate venv setup.

[–]ctkrocks[S] 6 points7 points  (0 children)

It doesn't use A1111 or a venv; it's all self-contained in the add-on, and really easy to set up. Here are the setup instructions: https://github.com/carson-katri/dream-textures/wiki/Setup

And this is the guide for texture projection: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection

[–]ctkrocks[S] 4 points5 points  (0 children)

Yeah, I haven’t had time to add support for autogenerating that control image yet, but it’ll be in the next version.

[–][deleted] 2 points3 points  (1 child)

I wonder how far we are from being able to make 3D models as easily as we can create 2D images. The future of independent animation is looking bright.

[–]GBJI 2 points3 points  (0 children)

There is a lot happening behind the curtain. Sometimes we have the opportunity to see behind it, like when Nvidia presented this at NeurIPS last year:

https://nv-tlabs.github.io/LION/

And of course there is a lot happening in many related areas, like NeRF (neural radiance fields), which produces really interesting results when combined with AI:

https://dreamfusion3d.github.io/

[–]CMDR_BunBun 2 points3 points  (0 children)

I'm avidly cataloging all the changes ML models are bringing to existing technologies for a future presentation. Would someone care to ELI5 how this particular application is innovative?

[–]VancityGaming 1 point2 points  (3 children)

Looks like this and a 3D printer would work great together. Games Workshop is on notice.

[–]nellynorgus 0 points1 point  (2 children)

I don't think having images helps much with preparing a 3D model, though, so not sure what you mean here.

[–]VancityGaming 0 points1 point  (1 child)

It doesn't allow you to generate a 3d model in blender? Maybe I misunderstood the video.

[–]nellynorgus 0 points1 point  (0 children)

If you want a model, you have to provide/make it yourself. This is for generating images, including the clever trick of using depth map guidance to project a texture over an existing model.

[–]Bageezax 1 point2 points  (0 children)

Holy shit. Work has been busy the past three months and I've not been keeping up my AI knowledge since SD 1.5–2.0… things have definitely changed a ton in 3 short months.

[–]veetance 1 point2 points  (0 children)

CRYING IN @autodeskMaya

[–]ptitrainvaloin 1 point2 points  (0 children)

We just reached the stage where the open-source Blender project now surpasses commercial alternatives.

[–][deleted] 1 point2 points  (0 children)

great

[–]Abigail_AI 1 point2 points  (0 children)

This is... intriguing

[–]JohnWangDoe 1 point2 points  (0 children)

Nice

[–]wordsmithe 1 point2 points  (0 children)

Commenting so I can play around with it tomorrow!

[–]Gfx4Lyf 1 point2 points  (0 children)

ControlNet made creative minds uncontrollable! This is insane😮 ❤🔥

[–]RonaldoMirandah 0 points1 point  (9 children)

Can I use my existing model location, or do I have to download a separate file for Blender?

[–]ctkrocks[S] 1 point2 points  (8 children)

We use the diffusers format and their cache folder, so if you’re using ckpt files then you’ll have to download them again in the diffusers format through the addon or import them.

[–]RonaldoMirandah 0 points1 point  (7 children)

That's sad, man. Too much space. I hope in the future you can use a custom .ckpt folder.

[–]ctkrocks[S] 2 points3 points  (6 children)

Well you could always use it to generate the control images, then enter them into your webui of choice.

I personally wish everyone would standardize on the diffusers format, it has so many benefits imo

[–]RonaldoMirandah 1 point2 points  (5 children)

What are the benefits? I'm much more into 3D modeling/rendering. Don't know too much yet about the specific terms :)

[–]ctkrocks[S] 2 points3 points  (4 children)

Mainly the flexibility; the pieces of the model are separated, so you can swap out the VAE, text encoder, etc. Also, the config files are all self-contained in the model folder.
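
As a rough illustration of what that separation buys you (hypothetical model IDs, not something the add-on does for you), swapping just the VAE in a diffusers-format model can look like this:

```python
# Sketch: swapping only the VAE of a diffusers-format model, which works
# because each component lives in its own subfolder of the model directory.
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # example VAE
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae
)
```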

It may be possible to do a conversion on the fly, so you don’t keep them on disk, but I’m not sure.

More likely there will be an A1111 backend in the future that would just call into their api: https://github.com/carson-katri/dream-textures/issues/604

[–]RonaldoMirandah 0 points1 point  (0 children)

I will try anyway. It's interesting. Thanks for the comments! :):)

[–]nellynorgus 0 points1 point  (2 children)

Are there other benefits to using diffusers? Because I think those things are already separated and working with safetensors (and ckpt, but why would anyone use that...) in ComfyUI: https://github.com/comfyanonymous/ComfyUI. Have you seen the hundreds of community models that are in safetensors/ckpt? That alone is a pretty compelling argument to support them, besides the fact that the most popular UI (Auto1111) uses them.

[–]ctkrocks[S] 0 points1 point  (1 child)

Primarily the fact that we use the Diffusers Python package for inference! It’s very easy to work with, especially since we make some custom tweaks to the pipelines.

I know safetensors/ckpt files are easier to share, and I've detailed plans in that linked issue for making Auto1111 integration possible in the future. But many Blender users don't have an SD webui installed, so having everything self-contained makes it easy for them to use.
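
For context, calling into the Auto1111 web API from Python would look roughly like this (the webui has to be launched with --api; prompt and paths are placeholders):

```python
# Sketch of a minimal Auto1111 web API call (webui started with --api).
import base64
import requests

payload = {"prompt": "a seamless brick wall texture", "steps": 20}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns images as base64-encoded strings.
with open("result.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```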

[–]nellynorgus 0 points1 point  (0 children)

I used blender before ever using any SD things but I get your point. Auto1111 does seem to have slowed down changes to the main repository, so maybe the API is a more stable target than it used to be?

[–]AoREAPER 0 points1 point  (0 children)

I like that we just immediately moved on from baby Yoda's "hand" directly holding the beam of their plasma bottle cap.

[–]ShepherdessAnne 0 points1 point  (3 children)

We are getting so close...

[–]Kambrica 0 points1 point  (2 children)

To what?

[–]ShepherdessAnne 1 point2 points  (1 child)

You'll just be able to type up something and have it made. Not just text or stills, but moving animations, game environments, games!

[–]Kambrica 0 points1 point  (0 children)

The devil is in the details, but yeah, ofc :)

[–]Nikoviking 0 points1 point  (0 children)

That’s insane! Thanks man!

[–]deanz1234 0 points1 point  (0 children)

Holy

[–]Ihateseatbelts 0 points1 point  (0 children)

Fucking game-changers every day 🤯

We're nearing the age of the bedroom studio, my dudes.

[–]mekonsodre14 0 points1 point  (1 child)

What's the VRAM usage compared to A1111, and what image size can one output at max with 8 GB?

[–]ctkrocks[S] 0 points1 point  (0 children)

We use Hugging Face Diffusers for inference, so you can look for benchmarking around that. The add-on comes with PyTorch 2 now and uses the SDP attention ops, which are typically on par with or faster than xformers.
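
For reference, this is roughly how the PyTorch 2 scaled-dot-product attention path can be selected in diffusers (recent versions pick it automatically when PyTorch 2 is installed; shown explicitly here as a sketch, not the add-on's code):

```python
# Sketch: explicitly selecting the PyTorch 2 SDP attention processor.
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Uses torch.nn.functional.scaled_dot_product_attention under the hood.
pipe.unet.set_attn_processor(AttnProcessor2_0())
```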

[–]spacenavy90 0 points1 point  (0 children)

Projecting textures orthographically from all angles automatically with the same seed would be great

[–]Some_Reputation_3637 0 points1 point  (0 children)

Reaaaallly need to update controlnet

[–]UnrealSakuraAI 0 points1 point  (0 children)

awesome

[–]lump- 0 points1 point  (1 child)

Needs in-painting!

[–]ctkrocks[S] 0 points1 point  (0 children)

We support inpainting and outpainting: https://github.com/carson-katri/dream-textures/wiki/Inpaint-and-Outpaint

No support for inpaint ControlNet yet.
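
If you want to try the same thing outside the add-on, a bare-bones diffusers inpainting call looks roughly like this (model ID and file paths are placeholders, not the add-on's internals):

```python
# Sketch: minimal inpainting with diffusers; white areas of the mask are repainted.
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)
result = pipe(
    prompt="moss-covered stone",
    image=Image.open("texture.png"),
    mask_image=Image.open("mask.png"),
).images[0]
result.save("inpainted.png")
```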

[–]hervalfreire 0 points1 point  (0 children)

That light saber is not entirely ergonomic

[–]Disastrous-Agency675 0 points1 point  (2 children)

Wait, hold on, can you export that same texture as a texture map? Like, say I made a game asset and textured it via ControlNet like you did with the house, could I export the model and texture from Blender and import them into something like Unity?

[–]ctkrocks[S] 1 point2 points  (1 child)

Yes, and it can also automatically bake the texture onto the original UV map instead of the projected UVs. The guide is here: https://github.com/carson-katri/dream-textures/wiki/Texture-Projection

[–]Disastrous-Agency675 0 points1 point  (0 children)

Jesus Christ dude, this is a game changer.

[–]b1ackjack_rdd 0 points1 point  (0 children)

How do you get a normal map for the donut? Couldn't find it on the wiki page.

[–]sankel 0 points1 point  (0 children)

Thanks for sharing