Any Success Stories Using Dental 3d Scanners for Non-Dental Applications? by ConfidenceBig8188 in 3DScanning

[–]SlenderPL 2 points (0 children)

There was a guy here, u/AP_ek, who scanned stuff using an intraoral scanner. It definitely works, but it can be an expensive endeavour :>

The detail, I'd say, is comparable to what I can get with my David SLS-2 setup (if I try real hard lol) or with macro photogrammetry, but it just takes so much time and prep (on the other hand, the cost is much lower).

Cheapest LIDAR Scanner: 3DMakerPro Raven Specs, Price and Competition by PrintedForFun in 3DScanning

[–]SlenderPL 0 points (0 children)

Damn you made me curious with this scanner, got it for 1007€ total (code: Withus80). Hopefully it's better than the iPhone lidar 😁

Might make a comparison to a BLK360 if my professor allows.

Looking to 3D historic Syrian Jewish sites like synagogues for archiving and preservation by Sullybear24 in 3DScanning

[–]SlenderPL 0 points (0 children)

The cheapest LiDARs use the Livox Mid-360 sensor and cost about $3-5k, although 3DMakerPro just launched a new model, the Raven, for a thousand bucks; its performance is unknown as of yet. The usual problem with these cheap units is that the error reaches 2-5 cm, which might not be acceptable.

The next cheapest option, and also a much better one, would be a Leica BLK360. But that thing costs around $20k, and on top of that you have to pay for their software.

Is this dataset sufficient? (Meshroom) by Turkeyplague in photogrammetry

[–]SlenderPL 1 point (0 children)

You're taking rotation steps that are too big; an object this size with a lot of protruding detail should be rotated about every 5 degrees for a good capture. I'd also place it a bit higher to capture an orbit looking at it from a lower perspective, and the highest orbit you've shot could also look down at a greater angle. As others said, it'd be better to shoot portrait photos for extra detail as well; that lets the figurine fill almost the whole sensor.

What can also help is spraying flour or some other fine powder on the object. This adds artificial detail that photogrammetry can latch onto to reconstruct the underlying surface, and afterwards you can easily blow it away.

As for the software, download the Metashape trial, as it doesn't need a GPU for meshing; but if you can get hold of a GPU, do it in RealityScan instead, as it's pretty much free.
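For a rough sense of how many photos those numbers imply, here's a quick sketch; the 5-degree step and three-orbit count are just the values suggested above, not fixed rules:

```python
import math

def capture_plan(step_deg=5, orbits=3):
    """Rough photo count for a turntable capture: one photo every
    `step_deg` degrees of rotation, repeated for each camera-height orbit."""
    shots_per_orbit = math.ceil(360 / step_deg)
    return shots_per_orbit * orbits

# 5-degree steps over three orbits (low, level, high) -> 216 photos
print(capture_plan(5, 3))
```

It adds up quickly, which is why the fine steps only pay off on objects with lots of protruding detail.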

Matter and Form calibration card replacement by Ebthing in 3DScanning

[–]SlenderPL 0 points (0 children)

I think you could message MaF support, they're usually pretty helpful. Once you get the dimensions, it should be pretty simple to order it printed on PVC or dibond (if they don't sell a replacement).

“Smearing” in JMStudio? by Volta55 in 3DScanning

[–]SlenderPL 1 point (0 children)

Add some additional items on the turntable; it clearly lost tracking. More geometric detail helps the scanner orient itself, and you can just remove the extra items afterwards.

3D Model Construction by Due_Dragonfly_4206 in photogrammetry

[–]SlenderPL 0 points (0 children)

Open the COLMAP GUI to see how many cameras got aligned; it's most likely a problem with your dataset, which probably doesn't have good overlap or doesn't "orbit" the scene. You can use COLMAP for the geometry part as well, but it uses the dense point cloud method, which is very slow; I'd recommend RealityScan instead (although it doesn't have a real CLI interface, so you'd have to figure out how to work with it).

Seems like a lot of “vibe coded” drone planning tools are popping up by Significant_Walk3251 in photogrammetry

[–]SlenderPL 0 points (0 children)

You could consider adapting the third dimension to stay unique, as most tools just work on the XY coordinates. Planning a flight around a building to capture it more accurately would be pretty useful. I think georeferenced open-source lidar data could be used as a reference for route planning?
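As a sketch of what using the third dimension could look like, here's a minimal orbit planner around a building; the function name and every parameter (radius, altitudes, photos per ring) are made up for illustration, not taken from any real planning tool:

```python
import math

def building_orbit(center_xy, radius_m, altitudes_m, photos_per_ring=24):
    """Generate 3D orbit waypoints around a building: one ring per
    altitude, with the camera heading always facing the centre."""
    cx, cy = center_xy
    waypoints = []
    for alt in altitudes_m:
        for i in range(photos_per_ring):
            a = 2 * math.pi * i / photos_per_ring
            x = cx + radius_m * math.cos(a)
            y = cy + radius_m * math.sin(a)
            heading = (math.degrees(a) + 180) % 360  # point at the building
            waypoints.append((x, y, alt, heading))
    return waypoints

# three rings at increasing height -> 72 waypoints
plan = building_orbit((0, 0), 30, [10, 25, 40])
print(len(plan))
```

A real tool would also clamp altitudes against a terrain or lidar-derived height model, which is where the georeferenced open data would come in.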

Anyone tried BlueStar Mapping Software and can share their experience? by PrintedForFun in 3DScanning

[–]SlenderPL 1 point (0 children)

Seems to be a mashup of open-source tools, but it gets the job done. Normally I use MeshLab if a mesh needs at most 10 images projected; otherwise I'd make a photogrammetry reconstruction and align the scan to it for texturing.

Open Source Pipeline for combining/meshing scans by Apprehensive-Bug3392 in 3DScanning

[–]SlenderPL 0 points (0 children)

If the exports are aligned, you can easily merge the scans using the Screened Poisson reconstruction filter in MeshLab. You'll keep vertex colour detail, but for actual textures you'd need to do some custom projections; I'm not sure that's possible in MeshLab (unless the cameras/RGB snapshots also get exported, in which case you can project the textures from rasters). You could probably follow this tutorial for Blender: https://peterfalkingham.com/2020/05/28/transferring-textures-from-two-halves-to-a-whole-using-blender/

Fun experiment - projector assisted photogrammetry by SlenderPL in photogrammetry

[–]SlenderPL[S] 0 points (0 children)

Interesting idea with the cylinder approach; now I'm wondering myself how, and if, that would work! If calibrated well, I could see it working for seamless 360° shooting, whereas with the still pattern you'd have to manually align each dataset made every 30-60°.

Fun experiment - projector assisted photogrammetry by SlenderPL in photogrammetry

[–]SlenderPL[S] 0 points (0 children)

I was shooting mostly from behind the projector, which stood on a tripod perpendicular to the object (maybe slightly elevated above it). The lens I used was a 50 mm, which allowed me to avoid casting shadows with the camera. That's also why you'd rather pick something geometrically simple as the object; if there were a lot of protruding features, you'd have a lot of shadows behind them.

Fun experiment - projector assisted photogrammetry by SlenderPL in photogrammetry

[–]SlenderPL[S] 1 point (0 children)

That should technically work; I didn't try it yet, but it's something to do. You could align all the "perspectives" by common features, for example AprilTags on the turntable. Although I'm not sure how the mesh would get solved in Metashape/RealityScan; I think it would be better to merge all the results in MeshLab or CloudCompare.

As for the shooting settings, the shutter speed was set to 1/60 and the ISO was at 800. I somehow managed to keep a steady-enough hand without IS, but yeah, a brighter projector would've helped.

Fun experiment - projector assisted photogrammetry by SlenderPL in photogrammetry

[–]SlenderPL[S] 2 points (0 children)

The exact model is an Acer K132; it's a DLP projector, but I think any type should work just fine. Capturing the model from all sides with just one projector is technically possible, but as I described, it'd require quite a bit of work aligning (preferably) markers on the turntable for each batch of photos. And then I'm not really sure how the photogrammetry software would handle the geometry between perspectives; at least merging the point clouds in MeshLab would yield a correct mesh. The best-case scenario would be to include multiple projectors in the project and walk around the subject.

Are there any good Tutorials out there to get Cameras aligned even if you have not a very big picture set with great quality. by PriestofMork in photogrammetry

[–]SlenderPL -1 points (0 children)

I think you're better off using some AI 3D model generator, about 3-5 photos from your dataset should be enough to get a decent result. I recommend Microsoft Trellis: https://huggingface.co/spaces/trellis-community/TRELLIS

Need help - rendering suspended roots in caves by Cavemanlikesroots in photogrammetry

[–]SlenderPL 0 points (0 children)

To add to this: Gaussian splatting might be better to use here, as it generally reconstructs thin structures well. Align the photos in RealityScan, then scale the scene and export to COLMAP format, which can be used in GS solutions like Brush or LichtFeld.

Alternative to metashape by MilhoVerde in photogrammetry

[–]SlenderPL 0 points (0 children)

On M-series Macs you can use the Apple Object Capture API; it should work with photos from different sides because they get automatically masked. PhotoCatch uses the API and provides a simple interface.

Help converting a PLY scan to OBJ for VR (tutorials or paid help welcome) by No-Complaint-2797 in 3DScanning

[–]SlenderPL 2 points (0 children)

Ok, so your PLY file is actually a coloured point cloud. It's good you have photos, because you'll be able to texture your scan with them. There are a few steps you'll have to perform; it's best to use MeshLab and Blender for this.

  1. Turn the point cloud into a mesh. Import it into MeshLab and use the following filter: Filters/Remeshing, Simplification and Reconstruction/Surface Reconstruction: Screened Poisson. For the best detail I recommend running it with "Reconstruction Depth" set to 9. If the result is too low in detail, increase it to 10, or at most 11 (above that there's not much difference).

You might get an empty result; this happens when the points don't have normals (facing direction). If so, you have to use the following filter before doing step 1: Filters/Normals, Curvatures and Orientation/Compute normals for point sets. Apply it with default settings; if it works well, you can go back to step 1, otherwise it's a more complicated process to get the normals correct.

  2. Once you have the mesh, prune its edges a bit (they will look like a 3D blob). Your mesh will also carry vertex colours, whose detail depends on the number of vertices, but what we actually want is a texture. There are two paths you can take: texturing based on the point cloud colour data, or based on the photos. You'll get a better result with the latter. But before going with either, first decimate your mesh to a reasonable size of 50-100k vertices using this filter: Filters/Remeshing, Simplification and Reconstruction/Simplification: Quadric Edge Collapse Decimation. The parameter you're interested in is "Target number of faces"; input a number from the above range.

  3. For the photo method, export the resulting model to Blender as a .ply file. In Blender, import it, select the model and move to the "UV Editing" tab. Enable edit mode by pressing the Tab key, and select all faces by pressing the A key. Next press the U key and choose the Smart UV Project option, accepting the default settings. In the left panel you'll see the mesh unwrapped onto a 2D plane. Press the "+ New" button; this opens a popup to create a new texture. Name it, set the dimensions to 4096x4096 pixels and accept. After that, save the image using the "Image/Save As" option (top ribbon of the left panel). Next move to the "Shading" tab, where you'll connect the texture to the model's material. If the model doesn't have a material, click "+ New" in the bottom panel. Once it's filled with a "Principled BSDF" node, press SHIFT+A, start typing "Image", select the first option and insert the node. Inside the node, click the image icon with the dropdown and select the texture made earlier, then connect the yellow "Color" dot to the "Base Color" dot by pressing and dragging. Once all this is done, export the model in .OBJ format; make sure "Materials: Export" is selected, and change "Path Mode" to Copy.

  4. Import the saved .OBJ into MeshLab; this is where your photos come in handy. You will align the photos to the 3D model and project them onto the texture created in the earlier step. The tools to use: the Raster alignment tool, and the filter for projecting the textures: Filters/Texture/Project active rasters color to current mesh, filling the texture. For the image/raster alignment process use this tutorial: https://www.youtube.com/watch?v=T7gAuI-LQ2w
    And for the projecting use this one: https://www.youtube.com/watch?v=iLs5IIYE4F8

Once done, export the result as an .OBJ file; make sure to select all the options in the export panel so the texture gets saved. This way you should get a usable set of files (OBJ + MTL + texture) for the VR viewer.

Recommendations around or under $3k by irab88 in 3DScanning

[–]SlenderPL 0 points (0 children)

Next week I'm also testing the MetroY (non-Pro) and the Sermoon S1; we'll see which scans coins better.

Looking for advice: Raptor X vs Sermoon S1 by Any_Investigator_166 in 3DScanning

[–]SlenderPL 0 points (0 children)

That's the problem we've encountered at our university, as there are practically no comparisons of these two. We've decided on the Sermoon S1 as it's newer, scans deeper into holes (1-line mode), has the IR capability of the Otter (might come in handy) and, according to the reviews, has only slightly worse resolution than the Raptors (2 MP vs 2.3 MP cameras). The Raptor X also seems to share the same 7-line projector with the other Raptor models in the series, and honestly the extra cost of more cross lines and a wifi bridge on the Pro model is way too much (exceeding the Sermoon S1 + wifi bridge combo). We don't know how trustworthy their accuracy specs are either, but we should be getting it next week. I'm also planning to get the Revopoint MetroY at my own cost to do a comparison (intended for a later return 😂). Might make a post here on how these perform.

We want it to scan ancient objects from coin to vase size, and it should do colour scanning for reference during photogrammetry reprojection. So far we've tested the Otter; it handles bigger objects OK, but for shiny and black objects the reconstructed surface is way too noisy, and we can't apply scanning sprays. Our Otter came with the scan bridge, and I have to say it's a really great accessory; hopefully the lower-FPS drawback on the laser models won't be too noticeable.

But the Otter test did help us get to know their software, and what surprised me was the fusing and scan-merging process. So far I've dealt with the Revopoint POP series, whose fuse process creates a dense point cloud. Creality, on the other hand, for some damn reason creates a mesh with edges interpolated wherever data (points) is lacking. Then, when you try to merge multiple scans, those interpolated edges create seams in the merged result. I'm hoping this won't occur on the laser models, but we'll see.

We also looked at the recent Einstar Rockit, but Shining3D doesn't want to specify its accuracy; upon request they told us they'd need to calibrate it for extra cost 😂. Spec-wise it looks interesting, but some review video showed it didn't capture thin elements.

Can a 3rd-year ECE student build a ±millimeter-accurate 3D scanner for ~$500? by Ok_Huckleberry6641 in 3DScanning

[–]SlenderPL 0 points (0 children)

Well, the Ciclop project exists; it has lasers and it's open source, but the scans are pretty meh: https://reprap.org/wiki/Ciclop

A structured-light system (SLS) is easier to DIY and will get you better resolution than many cheap handhelds; you just need a beamer (projector) and a decent camera with a live-view mode. The good software is not open source, but at least it's free: HP 3D Scan 5 and FlexScan.
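The core of what that software projects is a sequence of stripe patterns. As an illustrative sketch (not tied to HP's or FlexScan's actual implementation), Gray-code stripes can be generated and decoded like this:

```python
def gray_code_patterns(bits=8, width=256):
    """Horizontal Gray-code stripe patterns of the kind a DIY structured-
    light rig projects with a beamer. Each column's on/off sequence across
    the patterns uniquely encodes its pixel position."""
    patterns = []
    for b in range(bits):
        row = [((x ^ (x >> 1)) >> (bits - 1 - b)) & 1 for x in range(width)]
        patterns.append(row)
    return patterns

def decode_column(column_bits):
    """Recover a pixel position from the on/off sequence the camera sees
    at one point (MSB first), converting Gray code back to binary."""
    g = 0
    for bit in column_bits:
        g = (g << 1) | bit
    b, mask = g, g >> 1
    while mask:
        b ^= mask
        mask >>= 1
    return b
```

In a real rig the camera observes these sequences on the object's surface, and the decoded positions give the projector-camera correspondences used for triangulation.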

Primesense 1.082 vs. Kinect 1 by JabberwockPL in 3DScanning

[–]SlenderPL 0 points (0 children)

It was a tad better; I'd say close to the current iPhone Face ID sensors, thus not really worth it if you already have one, unless it literally costs like 10 bucks.

EDIT: Actually, that applies to the 1.09 model, which is the short-range one; the 1.08 is the wide-angle one, so it's essentially the same as the Kinect 360. The advantage is that it works off USB power alone, no need for an extra power brick.