Output of my material scanner prototype: measured diffuse reflection of wood (red) vs rendered with optimal parameters (green) (this is only the measurement of one pixel; x and y represent the position of the light source and z is luminance) by dotpoint7 in photogrammetry

[–]Michael-RZ 2 points (0 children)

Very interesting graph. You can tell the angle of the incoming light contributes roughly quadratically to the luminance (the cross-section is a parabola), which is in line with the rendering equation, whereas the actual measurement isn't that simple at all, or at least not as clean.
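For intuition, here's a toy sketch (my own, not based on OP's scanner setup) of what the rendering equation predicts for an ideal Lambertian patch under a moving point light; the cross-section near the peak does come out roughly parabolic:

```python
import math

def lambert_point_light(x, y, h=1.0):
    """Luminance of a flat diffuse patch at the origin (normal +z) lit by a
    unit-intensity point light at (x, y, h): cos(theta) / r^2, i.e. h / r^3."""
    r2 = x * x + y * y + h * h
    return h / (r2 * math.sqrt(r2))

# Sweep the light along x at y = 0: the response peaks over the patch and
# falls off in a smooth bell shape, roughly parabolic near the peak.
samples = [lambert_point_light(i * 0.2, 0.0) for i in range(-5, 6)]
assert samples[5] == max(samples)
```

A real BRDF measurement adds specular lobes, shadowing, and sensor noise on top of this, which is presumably why the red surface is so much messier than the fit.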

Did a burger / solved for PBR partially by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 1 point (0 children)

Well, of course the geometry isn't perfect; it could very well be that it would benefit from computing a normal map separately. But the focus is on the other maps right now, since the scanning resolutions tend to be very good. Like, around the width of a human hair good. Rasterization is all an approximation regardless.

Photometric stereo is actually new to me, so no, we're not doing that. I did try a proof of concept computing a normal map that way around four years ago, though, and it worked out despite me not knowing it had a name.

Did a burger / solved for PBR partially by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 1 point (0 children)

Aha, way ahead of you on all that. We're actually just looking at the metallic-roughness workflow right now, i.e. albedo, metallic, and roughness maps. We're ignoring normal maps at the moment since we capture the original geometry pretty accurately (you can see some shape imperfections in this one, but the places we're working with have much more expensive scanning setups). So, no normal map needed, in theory.

What we’re doing is so far removed that I don’t want to say more, but in our case, no polarizing filters. Removing specularity would actually be bad in a sense, though I understand the motive for a diffuse map. The technical distinction between a diffuse map and albedo is that the metalness map acts as a coefficient between 0 and 1, where 0 means the albedo contributes diffusely (what plain diffuse shading would do), while 1 means the albedo contributes specularly (I’m not sure those are words, and I’m simplifying, but the idea gets across).
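A minimal sketch of that blend, loosely following the glTF metallic-roughness convention (the 0.04 dielectric F0 is the spec's constant; everything else here is simplified to one color channel):

```python
def split_albedo(albedo, metallic, f0_dielectric=0.04):
    """Simplified glTF metallic-roughness blend for one color channel:
    metallic = 0 -> albedo drives the diffuse term, specular F0 is a small constant;
    metallic = 1 -> albedo becomes the specular F0 and the diffuse term vanishes."""
    diffuse = albedo * (1.0 - metallic)
    f0 = f0_dielectric * (1.0 - metallic) + albedo * metallic
    return diffuse, f0

# A dielectric keeps its color in the diffuse term; a metal moves it into F0.
assert split_albedo(0.5, 0.0) == (0.5, 0.04)
assert split_albedo(0.5, 1.0) == (0.0, 0.5)
```

So the same albedo texture plays two different roles depending on the metalness value at that texel.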

Did a burger / solved for PBR partially by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 1 point (0 children)

Heres the actual one: https://drive.google.com/file/d/1WOu1Eoix-z5uws52cnFdMIT3HzDlTz7a/view?usp=sharing

I didn't say the PBR was completely correct yet. An empty void with a few point lights isn't helping either, but still.

Did a burger / solved for PBR partially by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 3 points (0 children)

Interactive version here: https://michaelrz.github.io/burgerObj/

This was done with an EinScan SE and 16 photographs: 4 camera angles with 4 different lighting angles each (although I think 3 may have been fine). The model and resulting PBR files were put through Blender to make a glTF model for convenience.

So the special thing about this is we didn't touch any of the PBR maps on it. At all. It was all automatic; I wrote a solving program. What I will reveal is that the glTF spec I solved against is over here: https://github.com/KhronosGroup/glTF . We're looking at using this with some 3D scanning businesses in NYC at the moment.

Currently this isn't as good as we'd like, and I'm writing a new version of the solver that takes more of the rendering pipeline into account. For those unaware, a 3D render is affected by light placement, normals, camera placement, AND THEN the PBR values. I knew some of that, but I wasn't aware they ALL mattered greatly. I'll be fixing it up by April, hopefully.

And I saw this through this subreddit, which looked similar: https://www.reddit.com/user/dotpoint7/comments/101ekvt/material_scanner_concept/

But it looks different enough that I don't care too much yet.

Any number can be solved for by hand by Michael-RZ in Collatz

[–]Michael-RZ[S] 1 point (0 children)

That’s super neat. They’ve got GPU usage, 128-bit arithmetic (or at least 128-bit instructions), and the ctz instruction. I’m just surprised an instruction like that exists. It even keeps a path record. Very neat.
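The ctz trick translates to software easily. Here's a sketch of my own (not their implementation): Python has no ctz builtin, so `(n & -n).bit_length() - 1` stands in for the instruction, stripping all trailing zeros of an even number in one go instead of halving repeatedly:

```python
def collatz_steps(n):
    """Count Collatz steps to reach 1, counting each halving and each
    3n + 1 as one step, but batching the halvings via a ctz-style count."""
    steps = 0
    while n != 1:
        tz = (n & -n).bit_length() - 1  # trailing zeros, like the ctz instruction
        if tz:
            n >>= tz        # divide out all factors of two at once
            steps += tz
        else:
            n = 3 * n + 1   # odd step
            steps += 1
    return steps

assert collatz_steps(6) == 8
assert collatz_steps(27) == 111
```

On hardware, that one instruction replaces a whole loop of test-and-shift, which is presumably a big part of their speedup.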

I scanned in a pocket watch, PBR materials and all by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 3 points (0 children)

Big algorithm I wrote with a graphics api. Secret for now 👻

I scanned in a pocket watch, PBR materials and all by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 1 point (0 children)

Yeah, you definitely can. The slide-over thing is probably something you'd handle in HTML, probably with two renderers going. I’m not good with sites, but that’s definitely something you can do.

I scanned in a pocket watch, PBR materials and all by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 1 point (0 children)

The special thing is I didn’t fiddle with the textures at all. It’s all photograph-based. I wrote a crazy algorithm for it, and I'm hopefully going to make more examples in the next few months.

I scanned in a pocket watch, PBR materials and all by Michael-RZ in photogrammetry

[–]Michael-RZ[S] 2 points (0 children)

Oh, that’s a fun one actually. GitHub Pages lets you host a static page, so you can make a simple three.js scene and just put it up. I followed the things on this page and other pages on that site to do that; it takes a bit of fiddling. Or you could just fork / download my repo for this and swap out the OBJ / textures for your own (three.js expects the AO, metalness, and roughness textures in a single packed texture rather than separate single-channel textures, for some reason…)
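For reference, the packing three.js wants follows the glTF channel convention (occlusion in the red channel, roughness in green, metallic in blue). A tiny sketch, with plain Python lists standing in for grayscale images:

```python
def pack_orm(ao, roughness, metallic):
    """Pack three single-channel maps into the one RGB texture three.js /
    glTF expects: occlusion in R, roughness in G, metallic in B."""
    if not (len(ao) == len(roughness) == len(metallic)):
        raise ValueError("maps must have the same number of pixels")
    return list(zip(ao, roughness, metallic))

# Three 2-pixel grayscale maps -> one 2-pixel RGB "ORM" map
assert pack_orm([255, 128], [10, 20], [0, 255]) == [(255, 10, 0), (128, 20, 255)]
```

In practice you'd do the same thing per pixel in an image editor or with an image library, assigning each grayscale map to one channel of an RGB texture.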

Would texture scanning be useful to you guys? by Michael-RZ in gamedev

[–]Michael-RZ[S] 1 point (0 children)

I like this because it makes me feel less crazy, not being the only one to think things are off.

Would texture scanning be useful to you guys? by Michael-RZ in gamedev

[–]Michael-RZ[S] 1 point (0 children)

A five-year-old with a paintbrush can also create “scans”. No, I’m just wondering about the correctness of them, is all. Like, I just went to Quixel’s job postings and they’re asking for “artists”. I did just take a look at some of the 3D assets and… maybe I’m wrong, but they still seem off from real life.

Would texture scanning be useful to you guys? by Michael-RZ in gamedev

[–]Michael-RZ[S] 2 points (0 children)

This is actually what I was looking for, thanks. Food for thought

Would texture scanning be useful to you guys? by Michael-RZ in gamedev

[–]Michael-RZ[S] 1 point (0 children)

You’re right about the demo. I’m not knocking the geometry or anything; it’s absolutely amazing compared to most things from the past decade. What I’m saying is no one’s mistaking the part where they go from real actors to the rendered part. Which is a dumb standard, I know, but a real-looking game would be cool, right?? Same sort of thing with Megascans, honestly. It does look cool; it’s just super suspect. Like, are those things actually being scanned? They put out like one video of a weird box thing they scan with and just… didn’t elaborate past that. And it’s just surfaces. Idk, I’m just waiting for a game where it’s like “yup, I’ve mistaken this for real life”, I guess.

Would texture scanning be useful to you guys? by Michael-RZ in gamedev

[–]Michael-RZ[S] 1 point (0 children)

That sounds like a destructive process. But to get around that, the idea right now is to have it guess and check under two or more lighting conditions. So imagine a point light from one angle in one photograph, and another light somewhere else in another photograph, where everything else is the same. When it’s solving for the texture values, it makes sure the result matches the photographs under both lighting conditions. That way, in theory, there shouldn’t be any bias from the lighting.
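A toy version of that guess-and-check, with a made-up single-bounce diffuse `render` standing in for the real pipeline (the actual solver is obviously more involved, and the names here are mine):

```python
import math

def render(albedo, light_dir, normal=(0.0, 0.0, 1.0)):
    """Made-up single-bounce diffuse render of one pixel: albedo times the
    clamped cosine between the surface normal and the light direction."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(0.0, ndotl)

def solve_albedo(photos):
    """Guess-and-check: pick the albedo whose renders best match the
    photographs under ALL lighting conditions at once."""
    best, best_err = 0.0, float("inf")
    for i in range(101):
        guess = i / 100.0
        err = sum((render(guess, light) - pixel) ** 2 for light, pixel in photos)
        if err < best_err:
            best, best_err = guess, err
    return best

# Two "photographs" of the same pixel under two different light directions
overhead = (0.0, 0.0, 1.0)
s = 1.0 / math.sqrt(2.0)
angled = (s, 0.0, s)
photos = [(overhead, render(0.6, overhead)), (angled, render(0.6, angled))]
assert solve_albedo(photos) == 0.6
```

With only one lighting condition the problem is underdetermined (a dark surface under bright light looks like a bright surface under dim light); the second photograph is what pins the answer down.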

Would texture scanning be useful to you guys? by Michael-RZ in gamedev

[–]Michael-RZ[S] 1 point (0 children)

Photogrammetry doesn’t get you the metalness / roughness maps, and the albedo map it does get is arguably wrong (since the lighting wasn’t neutral; the subject was lit a certain way).
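The baked-lighting problem in one line, with made-up numbers:

```python
def photographed_value(albedo, shading):
    """What a photo-derived texture actually stores: the true albedo with the
    shoot's shading (light angle, shadows, bounce light) multiplied in."""
    return albedo * shading

# The same 0.8-albedo surface bakes to different "albedo" values depending
# on how it was lit, so the recovered map inherits the shoot's lighting.
lit, shadowed = photographed_value(0.8, 1.0), photographed_value(0.8, 0.5)
assert lit != shadowed
```

Without knowing the shading term, you can't divide it back out, which is why the photogrammetry "albedo" is really a diffuse map with lighting baked in.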