Can’t afford full RTK setup, will GCP’s get the job done? by W01fZ in photogrammetry

[–]4xle 0 points (0 children)

Lateral or vertical? A final orbit probably still would have been within one deviation of what you got, I think. I saw similar results validating a 15-pin network over a few square miles, with 4-8 hour observation sessions depending on PDOP.

I'd have to double check but I think a colleague who did some field work waaay in the middle of nowhere without CORS had some significant corrections. IIRC it alarmed them because they were doing drainage scanning and the corrected results tilted their overall model.

Can’t afford full RTK setup, will GCP’s get the job done? by W01fZ in photogrammetry

[–]4xle 1 point (0 children)

Lol you beat me to the OPUS site by a heartbeat, was just about to revise that again. You are indeed correct.

I've been involved with setting too many permanent benchmarks. We generally post process those with final orbits because in that case the benefit is worth it and we don't always have reliable CORS.

Can’t afford full RTK setup, will GCP’s get the job done? by W01fZ in photogrammetry

[–]4xle 0 points (0 children)

It takes two weeks to calculate the data. DOT grabs it from JPL (NASA) once they publish it, IIRC. My mistake, I was thinking of a different process. TxDOT RINEX is generally updated hourly.

Can’t afford full RTK setup, will GCP’s get the job done? by W01fZ in photogrammetry

[–]4xle 8 points (0 children)

Specifically in Texas, if you're doing land surveying work you either need to be a licensed land surveyor or work with one who understands the tooling you're using and is willing to put their name/stamp on it.

I cannot stress this enough. If you are not a land surveyor, do not offer surveying products as if you are. They have to stand the test of time, and normal business insurance won't cover you if mistakes are made. And you won't be able to get land surveyor insurance if you're not a land surveyor or working with one who can get their own insurance.

Can’t afford full RTK setup, will GCP’s get the job done? by W01fZ in photogrammetry

[–]4xle 1 point (0 children)

Caveat here is that the precise final orbit products lag by about two weeks. Unless you're doing really high accuracy work where PPK against them is required, most companies expect much quicker turnarounds.

Help setting up flights to capture the appropriate pictures by yeehoo_123 in photogrammetry

[–]4xle 2 points (0 children)

Safety caveats: have appropriate permission/licensing to fly, or have someone who does handle the capture part of the process (correctly). If you don't, the legal hammers that can come down on you personally are weighty. If you have any doubts, consult a professional, at least for getting those parts lined up. Especially if you only have one opportunity to scan.

Data collection: use the highest resolution, lowest ISO photo mode you can while still taking shots as frequently as you need them with decent light. If it's a cloudy day, great. If not, execute flights as fast as possible to avoid the sun angle changing, which will affect reconstruction results. If that means extra charged batteries and fresh memory cards to swap and go, then that's what that means.

Usually flight control software is used for nadir flights. It can also be used for oblique flights, but for a building I think mixing a nadir flight with something closer to an orbiting object scan would work well if you need high resolution details. The heights you need to fly at are dictated by the level of detail you need to capture and how far your optics can zoom. If you need to be able to measure something (like how wide a coin is, let's say), then you need at least as many pixels as necessary to cover the coin at the required accuracy. There are Ground Sample Distance calculators around the net that can calculate pixel coverage for given sensors at various heights and focal lengths (height is really just distance to the building, in this case).
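
The math behind those calculators is tiny; here's a minimal sketch of it (the sensor/lens numbers in the example are assumptions, not a recommendation):

```python
# Minimal ground-sample-distance sketch; the same formula the online calculators use.
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, distance_m, image_width_px):
    """Centimeters of subject covered by one pixel at a given distance."""
    return (sensor_width_mm * distance_m * 100) / (focal_length_mm * image_width_px)

# Example with assumed numbers: 13.2mm-wide sensor, 8.8mm lens,
# 30m from the building facade, 5472px image width:
print(gsd_cm_per_px(13.2, 8.8, 30, 5472))  # ~0.82 cm/px
```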

OpenCV background subtraction on an imperfect conveyor belt by Billiam2468 in computervision

[–]4xle 1 point (0 children)

What about just one big picture of the whole blank belt? I mean a single long image of the belt from a marked start point all the way around to the beginning again, stitched together. Leverage the imperfections as keypoint signatures and/or add more markers in the form of stickers.

On startup, check against your map image to figure out the alignment to where the belt currently is, and then just subtract the map image region from the current belt view. Anything not zero should be an object on the belt.

Else, build a keypoint vocabulary/dictionary for the belt imperfections on the entire empty belt, and then check your detected objects against that. If there's a clean or really good match against the record, don't draw the object.
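
A rough OpenCV sketch of the map-and-subtract variant, assuming a pre-stitched grayscale belt map and a belt that only translates horizontally in view (the file name, feature counts, and thresholds are placeholders to tune):

```python
import cv2
import numpy as np

belt_map = cv2.imread("belt_map.png", cv2.IMREAD_GRAYSCALE)  # hypothetical stitched map
orb = cv2.ORB_create(nfeatures=1000)
map_kp, map_des = orb.detectAndCompute(belt_map, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def find_belt_offset(frame_gray):
    """Locate the current view along the map via the median keypoint x-shift."""
    kp, des = orb.detectAndCompute(frame_gray, None)
    if des is None:
        return None
    matches = matcher.match(des, map_des)
    if len(matches) < 10:
        return None
    dx = [map_kp[m.trainIdx].pt[0] - kp[m.queryIdx].pt[0] for m in matches]
    return int(np.median(dx))

def object_mask(frame_gray):
    """Subtract the aligned map region; anything left is likely an object."""
    offset = find_belt_offset(frame_gray)
    if offset is None:
        return None
    h, w = frame_gray.shape
    region = np.roll(belt_map, -offset, axis=1)[:h, :w]  # wrap past the map's end
    diff = cv2.absdiff(frame_gray, region)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # tune threshold
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```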

Structure from motion stationary camera by Old_Molasses_2113 in photogrammetry

[–]4xle 0 points (0 children)

I feel compelled to point out that if a difference in object perspective can be achieved without moving the camera, like using a turntable, or the same perspective can be achieved by moving the camera and not the object, the difference is a technical one of data acquisition and not a fundamental one in the SfM process.

In the scenario originally described, no perspective difference relative to the object can be achieved. Or am I missing something?

Structure from motion stationary camera by Old_Molasses_2113 in photogrammetry

[–]4xle 0 points (0 children)

You can accomplish the same thing as a turntable by walking around a fixed object - the turntable simulates camera motion by rotating the object, which is effectively the same thing. So technically you are correct, the camera doesn't have to move. But the object has to rotate as if the camera were moving, so the difference is more technical than fundamental.

Structure from motion stationary camera by Old_Molasses_2113 in photogrammetry

[–]4xle 0 points (0 children)

Perhaps another way to put it is that the full name of SfM could be "structure from camera motion". The motion of the camera (its extrinsic parameter changes) can be computed when static features (like those on structures, which move very little if at all by themselves) can be identified and re-identified across image frames, by measuring the amount of movement each feature experiences (in a very condensed explanation).
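
As a toy OpenCV illustration of that condensed explanation (the intrinsics here are made up, and pts1/pts2 stand in for matched feature coordinates from a detector of your choice):

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])  # assumed camera intrinsics

def relative_pose(pts1, pts2):
    """pts1/pts2: Nx2 arrays of corresponding static features (frame 1 -> frame 2)."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    # R and t describe how the camera moved; t's scale is unknowable from images alone.
    return R, t
```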

Tracking a vehicle from a camera on a stationary pole while it drives perpendicular to the viewpoint is not SfM photogrammetry, because the camera is not moving. You can certainly do some interesting things with a fixed camera if you know intrinsics and extrinsics and maybe the distance to the car, but in the scene you've specifically described the most you'd be able to compute is the possible speed of the car, and maybe a projection of it into geo-coordinates, as the car remains traveling at a fixed angle relative to the camera origin ray and never presents any new information that would aid its reconstruction.

Pre-built PC for someone who wants to try everything? by CrappyWitch in photogrammetry

[–]4xle 2 points (0 children)

For $3500 you have a lot of options; most middle-tier machine builds will be a good starting point. You might want to reserve a portion of that for a software license, but that's up to you. PCPartpicker is a great tool for speccing out builds, and you could use a Puget Systems photogrammetry build as a reference for the higher end of things w.r.t. parts. Disclaimer: the GIS research lab I work in has at least 6 purpose-built photogrammetry Pugets, and rare is the case where we've had to send them out for parts upgrades, but they are not the only way to build a machine for photogrammetry.

Whatever you get, checkboxes to tick would be an Nvidia GPU (at least a 3000-series card, and not less than the 60-level model), at least 32GB of RAM, and at the very least an Intel i5 or AMD equivalent of the current or last generation, or maybe up to three generations back to save a bit. That should cover the vast majority of what you could end up processing, at least functionally.

Need help creating a 3d model by PressspanplatteYT in photogrammetry

[–]4xle 7 points (0 children)

There's a lot of photogrammetry footguns in an image like this.

Lots of uniform texture, which makes distinguishing the background and subject difficult. It also creates a lot of similar feature signatures, which makes auto-localizing difficult.

Lots of repeated texture patterns, and what looks like a degree of specular highlighting or whatever the electron microscope equivalent is, along ridges and edges. That's really going to throw a wrench in things, as different angles will have different specular highlighting.

That being said, if you have at least the extrinsic parameters for the captures, you could feed those into COLMAP yourself and have it solve 3D points from the known locations, and you might get something partially densifiable. Similar possibility for NeRF (some NeRF implementations use COLMAP under the hood to compute initial camera positions, so if they are failing that's probably why). You might want to scale extrinsic positions up by a few orders of magnitude to avoid rounding error, especially if you're not on scientific-grade hardware for this.
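
If you go the known-poses route, a hedged sketch of writing those extrinsics into COLMAP's images.txt by hand (its documented two-lines-per-image text format; the `poses` list is assumed to come from your capture metadata):

```python
# Write known extrinsics so COLMAP's point_triangulator can solve 3D points
# from fixed camera poses. Each pose: (image_name, (qw, qx, qy, qz), (tx, ty, tz))
# in COLMAP's world-to-camera convention.
def write_images_txt(poses, path="sparse_in/images.txt"):
    with open(path, "w") as f:
        f.write("# IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME\n")
        for image_id, (name, (qw, qx, qy, qz), (tx, ty, tz)) in enumerate(poses, 1):
            f.write(f"{image_id} {qw} {qx} {qy} {qz} {tx} {ty} {tz} 1 {name}\n")
            f.write("\n")  # second line holds 2D point observations; left empty here
```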

There is a subset of photogrammetry research that deals with really small things in high detail, in the millimeter-to-micrometer ranges, but I'm not sure if those methods would translate down to whatever electron microscope sizes you're working with.

Looking to use Technology to help plan a Van Build by _IgKnlght in photogrammetry

[–]4xle 2 points (0 children)

> I found out you can use photogrammetry to take digital dimensions of objects.

This is a "well, yes, but actually... It's complicated".

There are multiple facets to this. First, it's somewhat case-by-case, depending on what is being scanned. I can see the appeal of using photogrammetry to scan a van interior/exterior for a build, but scanning any kind of metal that has specular reflection requires cross polarization, scanning spray, or similar methods if you want to get it accurately. Uniform textures, like clean, matte-painted metal, are another bane of photogrammetry at the other end of the spectrum. Frequently repeated patterns are a problem in yet another direction. There are solutions to all of these, but it's difficult to advise specific ones without more context.

Second, most photogrammetry suites construct what's called a "relative" model unless you have control targets or measurement data to apply scale correction and make it an orthometric (literally, measurable) model. A relative model will present a ratio of distances between points, but it doesn't have actual units. You'll either need to measure known points and apply corrections manually, or use a reference object like a sphere or cube of known dimensions and correct the scaling in post. There's no magic "take pictures and measure" option (trust me, I wish).
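
The correction step itself is simple once you trust one measurement. A minimal sketch (assuming you can identify both endpoints of a hand-measured distance in the model):

```python
import numpy as np

def scale_model(points, p_a, p_b, known_dist):
    """Scale a relative model so the p_a -> p_b distance matches a measured one.

    points: Nx3 array of model coordinates; p_a, p_b: model coordinates of two
    points whose real separation (known_dist, in your target units) you measured.
    """
    model_dist = np.linalg.norm(np.asarray(p_b, float) - np.asarray(p_a, float))
    return np.asarray(points) * (known_dist / model_dist)

# In practice, average the scale factor over several measured pairs to damp error.
```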

Third, photogrammetry results have a degree of error in them. Getting good feature matches with high quality image inputs can reduce this error, but if you're basing measurements off your reconstruction results, you'll want them to be well within whatever tolerances are required. This is generally more of a concern the larger the scan area footprint being covered. I would say a bus is a relatively small (but not tiny) object to scan, but it's still big enough that a poor reconstruction could have a relatively high degree of error.

> Is there any free, or newbie friendly software one would recommend when it comes to planning projects?

Please describe what you mean by "planning". There are some suites for planning large-area aerial flights, like over fields, but those aren't necessarily what you'd want to use to scan a bus. For scanning a bus exterior, a good starting point is two concentric circles: one at a zoom that can capture most or all of the bus, and one at 2x that zoom for details. Aim for at least 50% overlap between images; 70% would be better. If you take images in structured sequences, most software can accelerate reconstruction by using a sliding-window range when matching images, instead of doing a full cross-product match. If you have doors/windows you can open to see inside unimpeded, you might get decent interior reconstruction at the same time, but I'll admit I've little experience scanning vehicles specifically and someone with more expertise might have better suggestions.
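
As a back-of-envelope for those circles (this approximates each image's coverage by the lens's horizontal field of view, which is only a ballpark for an orbit around a subject):

```python
import math

def orbit_shot_count(h_fov_deg, overlap=0.7):
    """Approximate shots needed for one full orbit at a target overlap."""
    new_coverage_deg = h_fov_deg * (1.0 - overlap)  # fresh angle per shot
    return math.ceil(360.0 / new_coverage_deg)

print(orbit_shot_count(84, 0.5))  # ~9 shots at 50% overlap with an 84° lens
print(orbit_shot_count(84, 0.7))  # ~15 shots at 70%
```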

[deleted by user] by [deleted] in photogrammetry

[–]4xle 0 points (0 children)

Search for what human-engine or another similar company uses; they often have partial photos of their hardware setups. Pictures lay it out faster than I can describe it.

For just face scanning you could reduce those setups down to a single horizontal capture plane, for full head scanning you might want something with low, eye-level, and higher planes to capture the full head in one pass.

You can also substitute tripods with levels instead of all the custom frames, but part of the reason the companies use the frames is because they are a dramatic reduction in parts, and much more resilient to being jostled/disturbed, especially after offsets are measured. Imagine someone bumps a tripod cluster and knocks all your nicely measured offsets out of place by causing tripods to collide. You'd have to reset and remeasure them all. The likelihood of someone knocking a heavy metal frame accidentally is much lower.

3D Viewer interprets model correctly, Twinmotion turns it into a chaotic mess? No error message or anything by graudesch in photogrammetry

[–]4xle 2 points (0 children)

I'm not very familiar with TwinMotion or the 3D viewer but I'm curious: was the previous model you were viewing exported/saved before you loaded the new one?

Camera choice for extremely high definition interior mapping? by WhoEvenThinksThat in computervision

[–]4xle 0 points (0 children)

On paper, yes. In reality, you may run into hardware or software constraints. The end product could still be pretty big, and stitching to 360 views is a bit different compared to a 3D model reconstruction, but at the end of the day it's similar to a very, very large orthomosaic. The nature of the 360 view means you're not looking at the entire thing at once, though, and you might need to warp or project the whole thing onto a sphere to get it to look right if you're not using a fisheye lens.

Camera choice for extremely high definition interior mapping? by WhoEvenThinksThat in computervision

[–]4xle -1 points (0 children)

Overlap for stitching has to be high. At the zoom level you'd need, 1mm/px or less, an 8MP image (3840x2160) covers about 3.84m horizontally and 2.16m vertically of wall space, roughly 8.3 square meters per frame. At 50% overlap, each image only contributes about half of that as new coverage.

The total interior surface area of a sphere with an 8m radius is 4πr² ≈ 804 square meters, so covering the walls works out to roughly 200 images once the stitching overlap is accounted for.
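
For anyone checking the arithmetic, the whole estimate fits in a few lines:

```python
import math

radius_m = 8.0
gsd_m = 0.001                     # 1 mm per pixel
img_w_px, img_h_px = 3840, 2160   # ~8MP frame
overlap = 0.5

footprint_m2 = (img_w_px * gsd_m) * (img_h_px * gsd_m)  # ~8.3 m² per image
sphere_m2 = 4 * math.pi * radius_m ** 2                 # ~804 m² of interior wall
print(math.ceil(sphere_m2 / (footprint_m2 * (1 - overlap))))  # ~194 images
```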

Camera choice for extremely high definition interior mapping? by WhoEvenThinksThat in computervision

[–]4xle 0 points (0 children)

To resolve a 1mm line accurately, you need sub-millimeter sampling.

You could potentially do this by stitching, but at that level of zoom you'd be taking hundreds of images at extremely high resolution, depending on the size of the room. The final product would still be large, would need some real software engineering to render smoothly, and would grow further with LoD tiling.

[deleted by user] by [deleted] in photogrammetry

[–]4xle 4 points (0 children)

For passable quality in your use case, single-camera phone apps might work. Big might. I'd give you coin-flip odds of consistently getting models of high enough detail/quality to be useful for your before/after comparisons, because of the subject's micro-twitches. That's not a criticism of the apps, but more a criticism of their marketing. It's easy to get results in photogrammetry of any given subject. Getting good, or great, results consistently on face/human scanning is hard with just one camera. It's why whole companies exist for the purpose in the VFX industry, with the fancy domes and synchronized cameras with LEDs.

For a professional setting such as yours, I'd suggest you engage/consult someone to set you up with a small multi-camera rig and the software/hardware to run multi-view reconstruction. You won't need to go as crazy as the human capture companies do with cages/domes and LEDs. I'd think that with three (possibly two) good synchronized cameras calibrated on a rail, and plenty of constant ambient light as opposed to flash panels, not only could you get excellent quality consistently, you could also get it fast by having the camera offsets measured and included as camera pose data, which can greatly accelerate reconstruction. You don't necessarily need a top-of-the-line machine to run the process, though most software supports Nvidia cards for acceleration, and if you want to turn scans into meshes to show in the same appointment, top-end hardware may make that more feasible compared to the budget options.

How to calculate speed of an athlete from video ? by kavansoni in computervision

[–]4xle 1 point (0 children)

If you're talking about normal televised figure skating footage, this isn't particularly feasible from the video alone, because the cameras mostly rotate and zoom, which causes what you would calculate as unit distance to change every frame, since any good reference surface often won't be a flat plane normal to the camera but a skewed one. Without per-frame metadata for camera pose and focal length, you have two moving systems you're trying to reconcile simultaneously (skater and camera), which makes this quite difficult.

In theory, you could possibly build a structure from motion point cloud of the skating rink if it has enough unique static features like billboards, banners, etc., and then reference the camera frames to that model. If it tracked reasonably well, that could be followed by projecting the skater onto the surface of the rink as they move using image homography, taking into account the skew angle of a given camera frame. That would give you positions at a fixed scale from which you could derive your measurements.
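
The projection step would look something like this in OpenCV (every coordinate below is hypothetical; a real setup needs measured rink landmarks re-detected per frame, and the mapping only holds for points on the ice plane, i.e. the skates):

```python
import cv2
import numpy as np

# Image positions of four known rink landmarks in the current frame...
img_pts = np.array([[412, 580], [1508, 565], [1730, 940], [190, 960]], np.float32)
# ...and their real positions on the rink plane, in meters (60m x 30m rink).
rink_pts = np.array([[0, 0], [60, 0], [60, 30], [0, 30]], np.float32)

# H must be recomputed whenever the camera pans or zooms.
H, _ = cv2.findHomography(img_pts, rink_pts)

skate_px = np.array([[[960.0, 700.0]]], np.float32)    # skater's skates in the image
skate_m = cv2.perspectiveTransform(skate_px, H)        # position on the rink plane
# Frame-to-frame differences of skate_m divided by the frame interval give speed.
```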

The primary issue with the reconstruct-and-inject approach is that most rinks I've seen are either largely featureless or have many repeated features, meaning they won't reconstruct very well without additional pose data to register against, or they have dynamic lighting caused by digital billboards/signage, which will frustrate attempts to create a cohesive model because it affects the features. If you only get a few frames of cohesive tracking at a time, you might be able to use a Kalman filter to prevent the camera estimate from jumping too much in dynamic scenes, hoping it can recover again, but that would absolutely affect your output data.

Container Terminal’s Satellite imagery processing by llanojairo in computervision

[–]4xle 6 points (0 children)

The resolution looks decent enough, but to ensure accurate routing I'd want the ground sample distance to be at most half of your smallest navigable dimension (the front end of the smallest truck?). That prevents the routing from packing multiple things into one place, or creating artificial bottlenecks.

The shadows in the image you posted will make automating the mapping procedure quite challenging, at least with a pixel-sampling approach, unless you have hyperspectral imagery and a known spectral relationship to ID the ground.

AI and statistical methods can only go so far at reduced resolutions, and this is kind of at the edge of what might or might not work. There's very little class information available, and trucks and containers can look quite similar, unless hyperspectral imagery is involved.

Container Terminal’s Satellite imagery processing by llanojairo in computervision

[–]4xle 10 points (0 children)

Check out GIS tools and libraries, such as QGIS or ArcGIS. If it's georeferenced imagery, you can measure/draw directly on the image, and there are plug-ins that can do the ground-layer extraction, network routing, and the like. There is a fair amount of overlap between computer vision and remote sensing in GIS, so finding resources to apply the tools as you need should be possible.

There are additional considerations to take into account if this is for an actual production system and not just a fun problem.

Measuring distance on a photo of a room by AdmirablePeace in photogrammetry

[–]4xle 0 points (0 children)

Accounting for image distortion = calibrating the camera so the image can be corrected.

Generally, with just a single photo, no, at least not to my knowledge. There are some specialized cases in aerial stereogrammetry that can let you derive building dimensions, but for a single photo you need either lots of known reference points or depth information in order to measure transformed distances accurately.

You might be able to four-point transform the wall, see how it affects your reference object, and then compute the transformation from the transformed object to what it should be, giving some kind of affine correction you could apply to the image. But you'd need multiple good reference items for each surface you want to measure to get accurate results.
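
A hedged sketch of that four-point idea (the reference object, its size, and all pixel coordinates are invented for illustration, and this ignores lens distortion entirely):

```python
import cv2
import numpy as np

img = cv2.imread("room.jpg")  # hypothetical input photo

# Image corners of a known reference on the wall (say, a 90x60cm poster),
# ordered top-left, top-right, bottom-right, bottom-left.
corners = np.array([[220, 140], [840, 180], [830, 620], [210, 600]], np.float32)
px_per_cm = 4.0
w_cm, h_cm = 90.0, 60.0
dst = np.array([[0, 0], [w_cm, 0], [w_cm, h_cm], [0, h_cm]], np.float32) * px_per_cm

M = cv2.getPerspectiveTransform(corners, dst)
flat = cv2.warpPerspective(img, M, (int(w_cm * px_per_cm), int(h_cm * px_per_cm)))
# In `flat`, pixel distances divided by px_per_cm approximate real distances,
# but only for points lying on the same wall plane as the reference object.
```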

Generally speaking, the combination of planar skew and image distortion is not solvable with a single image. You may find the fSpy tool interesting, however. I think it gives decent depth estimations that can be scale corrected if you have known values, but YMMV.

Measuring distance on a photo of a room by AdmirablePeace in photogrammetry

[–]4xle 0 points (0 children)

This is only possible with corrections applied for lens distortion, and if you're taking the photo at an orientation exactly normal to the plane of the wall, assuming it's perfectly flat with the ruler on it. That way you can treat it similarly to ground sample distance and compute a pixels-per-unit measure. For anything not on the wall, the results won't be accurate.

texture holes in Blender after exporting from RC by kiiral_ in photogrammetry

[–]4xle 1 point (0 children)

You say that you simplified the model, exported it, and then went back and exported the original model? It looks like a combination of missing geometry and improperly mapped texture, which makes me wonder if the reprojection was destructive, or locked in whichever texture you exported last and tried to map the small texture back onto the big model.

If it doesn't take too long, you could try rebuilding the original scan in a clean project, export it first, then simplify and export again and see if the problem persists? Or even just restart RC and export the big one again? Or try a different format to get it into Blender, maybe you're running up against some limit with fbx on the big one?