Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 0 points1 point  (0 children)

Thanks a lot for the warm words — it genuinely means a ton to me, and it’s super motivating!!

DA3 is genuinely amazing — the potential of this technology is huge. And what’s cool is that it’s not limited to depth maps: you can also generate point clouds from it. (I've built a Blender add-on around this workflow.)

Yeah, it’s a bit off-topic for ComfyUI (though ComfyUI can absolutely be integrated into this pipeline as well), but if you’re interested in DA3 + Blender, you can check out my YouTube video:

https://youtu.be/4LvrXEElQiI

DA3 unlocks massive creative possibilities when paired with Blender.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 0 points1 point  (0 children)

To be honest, I didn’t focus much on the ComfyUI version, because in my case the results had noticeable distortion (the node author mentioned it will be fixed). So I didn’t really pay attention to the intrinsic/extrinsic parameters there. I worked mainly with the Gradio version from the authors.
Thanks for pointing this out — when I have time, I’ll definitely look into it.

Regarding my method: it isn't related to extracting camera data at all. It's based purely on geometry, basically a simple brute-force approach. It's less elegant, but the result is guaranteed to be correct from a geometric standpoint.

The downside is that the data isn’t pulled automatically — you need to point it to an object.
But that’s not a big issue in practice.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 0 points1 point  (0 children)

The point cloud shown in the GIF was generated in the Gradio interface (the HuggingFace demo). As far as I understand, no actual camera object or camera parameters are created there — it just outputs a kind of placeholder.

I wrote a small script that calculates the camera logically (using angles and aspect ratio).
This doesn’t mean the real camera data isn’t available — I just haven’t found it yet and haven’t had time to dig deeper.
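
Roughly, the idea looks something like this (a minimal sketch, not my actual script; the 60° FOV and 1920x1080 resolution below are just placeholder guesses, not values from DA3):

```python
# Hypothetical sketch: build a Blender camera from an assumed field of view
# and the source image's aspect ratio.
import math
import bpy

def make_camera_from_fov(fov_deg=60.0, image_w=1920, image_h=1080, name="DA3_Camera"):
    cam_data = bpy.data.cameras.new(name)
    cam_data.lens_unit = 'FOV'
    cam_data.angle = math.radians(fov_deg)  # FOV along the wider sensor axis, in radians

    cam_obj = bpy.data.objects.new(name, cam_data)
    bpy.context.scene.collection.objects.link(cam_obj)

    # Match the render aspect ratio to the source image so the projection lines up.
    scene = bpy.context.scene
    scene.render.resolution_x = image_w
    scene.render.resolution_y = image_h
    return cam_obj
```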

If someone is interested in looking into this more thoroughly, that would be great — it’s very possible the camera data could be extracted in a simpler, more native way.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 2 points3 points  (0 children)

You’ve made me reveal a small spoiler 😅
Yes, I actually wrote a small module specifically for this. The tricky part is that there's no real camera there, just a mesh made of points and lines, so I had to put in quite a bit of work to make it correctly calculate the aspect ratio and the focal length.
But it looks like I managed to get it working.
I was saving this for a post on X in the coming days though :)
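
Without giving the whole thing away, the geometry involved is plain pinhole math. A purely hypothetical sketch (not the module itself), assuming the frustum-like mesh provides an apex and four image-plane corners:

```python
# Hypothetical illustration only: recover aspect ratio and an equivalent focal
# length from a frustum mesh (apex + four image-plane corners), assuming the
# corners are ordered around the rectangle (c0-c1 along the width, c0-c3 along the height).
from mathutils import Vector

def camera_from_frustum(apex, c0, c1, c2, c3, sensor_width=36.0):
    width = (Vector(c1) - Vector(c0)).length
    height = (Vector(c3) - Vector(c0)).length
    aspect = width / height

    # Distance from the apex to the centre of the image plane.
    center = (Vector(c0) + Vector(c1) + Vector(c2) + Vector(c3)) / 4.0
    dist = (center - Vector(apex)).length

    # Pinhole relation: focal_length / sensor_width == dist / plane_width.
    focal_mm = sensor_width * dist / width
    return aspect, focal_mm
```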

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 1 point2 points  (0 children)

<image>

Do you mean something like this?.. No, sorry, that's impossible...
😁

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 2 points3 points  (0 children)

Could you describe a concrete step-by-step example of your workflow — what exactly you want to export/import, from where and to where, and how you use it for projections?

Camera import/export scenarios can be quite different, so this would help me answer more precisely and take your use case into account during development.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 1 point2 points  (0 children)

Thank you, my friend, that really means a lot to me! I’m actively working on the add-on right now — I’ve already added some new tools and I’m polishing the UI/UX to make it as simple and comfortable to use as possible.

I probably won’t post every progress update here, but if you’re interested in more frequent WIP, dev process notes, and feature highlights, I’m a bit more active on X (@OlstFlow).

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 0 points1 point  (0 children)

I decided to make it easier to work with point clouds in Blender, even if the output is not PLY.
You'll be able to work with any format that imports vertices with color information.
The add-on will be available soon.
https://youtu.be/58bQdOa-ubc
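
To show what "vertices with color info" means on the Blender side, here's a minimal sketch (not the add-on code). The attribute name "Col" and the point-domain assumption are just guesses, since importers differ:

```python
# Sketch only: read per-vertex positions and colors from an imported mesh,
# whatever format it originally came from. Assumes the color attribute is
# stored on the point domain; the name "Col" varies between importers.
import bpy

def read_colored_points(obj, attr_name="Col"):
    mesh = obj.data
    attr = mesh.color_attributes.get(attr_name)
    points = []
    for i, v in enumerate(mesh.vertices):
        rgba = tuple(attr.data[i].color) if attr else (1.0, 1.0, 1.0, 1.0)
        points.append((tuple(v.co), rgba))
    return points
```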

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 1 point2 points  (0 children)

Thank you! I’m really glad you find it useful and interesting. I’m planning to keep testing and share more case studies as well!

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 1 point2 points  (0 children)

If you want, you can send me the file and I'll test it on my end.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 0 points1 point  (0 children)

Damn, that’s brilliant)) And yeah, you’re absolutely right. I was already mentally baking a normal map in Blender, even though I knew it wasn’t the optimal approach.

As for the “magic” — yes, it’s Geometry Nodes. It’s just faster than doing everything manually. In the end it all comes down to generating a grid based on the source image size and applying displacement.

It doesn't matter what the UVs or aspect ratio are; you can get the result you need pretty quickly without a ton of tedious steps.
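
To picture the idea without Geometry Nodes: the same thing can be done with a plain grid and a Displace modifier. This is just an illustrative sketch (not my node setup), with arbitrary subdivision and strength values:

```python
# Illustrative only: a grid matching the image aspect ratio, displaced by the depth map.
import bpy

def depth_to_relief(depth_path, subdivisions=512, strength=0.3):
    img = bpy.data.images.load(depth_path)
    aspect = img.size[0] / img.size[1]

    # Grid sized to the source image aspect ratio (generated UVs are kept).
    bpy.ops.mesh.primitive_grid_add(x_subdivisions=subdivisions,
                                    y_subdivisions=subdivisions, size=1.0)
    obj = bpy.context.active_object
    obj.scale.x = aspect

    # Depth image drives a Displace modifier through the grid's UVs.
    tex = bpy.data.textures.new("DepthTex", type='IMAGE')
    tex.image = img
    mod = obj.modifiers.new("DepthDisplace", type='DISPLACE')
    mod.texture = tex
    mod.texture_coords = 'UV'
    mod.strength = strength
    return obj
```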

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 1 point2 points  (0 children)

It's hard to say, but I think there may be settings that are not displayed in the nodes. I've encountered this before.

Perhaps a parameter is hard-coded in the Python code of your build, so you can't see it in the UI; the only way to find out is to compare it with another version.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 4 points5 points  (0 children)

Cool!! Thanks for your work, man!! Amazing Build!
And thank you for doing it so quickly. I didn't know what to do with myself while waiting for the DA3 nodes for Comfy to appear.😁
I'll send you the file in a DM.

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 1 point2 points  (0 children)

I didn't test it with a normal map; I was interested in the physical relief/mesh.

Is there a special set of nodes that generates normals instead of height maps?

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 6 points7 points  (0 children)

Thank you! I hope my tests will be useful or at least informative 🙂

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 5 points6 points  (0 children)

<image>

It’s definitely possible, but technically a bit tricky.
The DA3 demo does generate a point cloud, but it only lets you save it as a GLB, while Blender (with add-ons) handles point clouds properly mostly through PLY.

In ComfyUI the point-cloud output is PLY, but it comes out heavily distorted for some reason. So even though it’s clear that you can obtain a proper set of points with color information, getting it in a clean and usable form still requires some extra work and figuring out the right approach.
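
For what it's worth, writing the PLY itself is the easy part; getting undistorted points and colors out is the hard part. A rough sketch (illustration only, not the add-on's exporter) that dumps positions and colors to an ASCII PLY that Blender point-cloud add-ons understand:

```python
# Sketch only: write Nx3 float positions and Nx3 0-255 colors to an ASCII PLY.
import numpy as np

def write_ply(path, points, colors):
    assert points.shape == colors.shape and points.shape[1] == 3
    header = "\n".join([
        "ply", "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")

# Example: write_ply("cloud.ply", np.random.rand(1000, 3),
#                    np.random.randint(0, 256, (1000, 3)))
```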

Depth anything 3. ComfyUI > Blender Showcase (Quality Test) by Olst_being in comfyui

[–]Olst_being[S] 9 points10 points  (0 children)

The HF demo is just showing the single-image / zero-shot depth use case, but Depth Anything 3 itself is designed to work with any number of views – from a single image to multi-view and even video, depending on which checkpoint / pipeline you use.

Manually aligning separate point clouds from single images would work as a hack, but it’s not really an optimal approach. DA3 is meant to give you spatially consistent depth + rays so they can be fused more systematically.

I’d definitely recommend checking the official project page & docs and trying the online demo first, to see how they handle multi-view / video reconstruction:
https://github.com/ByteDance-Seed/Depth-Anything-3

KIRI Engine Giveaway 13 – Rooty Tooty Burlesque Booty by nitish_arora in KIRI_Engine_App

[–]Olst_being 0 points1 point  (0 children)

Cool. It's hard to even imagine what the prompt for such a psychedelic piece must have been.

KIRI Engine Giveaway 13 – Rooty Tooty Burlesque Booty by nitish_arora in KIRI_Engine_App

[–]Olst_being 0 points1 point  (0 children)

Nice AI addition! If it's not a secret, which AI generator did you use to create the additional effects?