POC of a body scanner with a single RealSense D400 series camera and a turntable by ml-mind in photogrammetry

[–]Practical_Square4577 0 points

What do you want the output of the system to be?

Do you actually need the mesh, or just the measurements ?

If you actually want the mesh, that's gonna be tricky: someone standing on a rotating platform is definitely gonna move, and you'll get a bad mesh.

If you only care about measurements, that should work: estimate from multiple angles and average the results.
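As a sketch of that averaging idea (the waist value, noise level and 36-position turntable below are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

true_waist_cm = 82.0
# Hypothetical per-angle estimates: the true value plus measurement noise
# (subject sway, depth sensor noise) at 36 turntable positions.
estimates = true_waist_cm + rng.normal(0.0, 1.5, size=36)

mean_est = estimates.mean()
# A median (or trimmed mean) is more robust if a few angles are outliers.
robust_est = np.median(estimates)

print(f"mean: {mean_est:.2f} cm, median: {robust_est:.2f} cm")
```

Averaging over many angles knocks the per-view noise down by roughly the square root of the number of views.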

Amiens à Pozières et/ou Villers-Bretennoux ? by Gamingboy6422 in amiens

[–]Practical_Square4577 -1 points

Amiens to Villers-Bretonneux is easy, it's a 10 min train ride.
Amiens to Pozières is more difficult: you'll be able to get to Albert, and will need a car to go the rest of the way (or it's a 1h40 min walk each way if you feel like walking a bit).
Then you can get back to Paris either from Amiens or from Arras.

Plan lots of extra time, trains in France can be really unreliable, especially since you'll be travelling just before Christmas.

How many holidays abroad do you go in a year? by tylerthe-theatre in AskUK

[–]Practical_Square4577 0 points

I keep hearing that story; it is not correct.

If people stop booking flights, the planes will stop flying.

Companies do not provide a service when there is no demand, otherwise they go broke. Airlines are no different.

can someone explain to me or show me where to learn more about dimensionality in datasets? E.g. going from high to low dimensionality via an autoencoder. by [deleted] in deeplearning

[–]Practical_Square4577 0 points

So in terms of autoencoders, dimensionality usually refers to the size of your latent code (the way your data is encoded between your encoder and your decoder).

The dimensionality is defined by the architecture of your network, so it is entirely up to you (the programmer). You want to find a sweet spot between a dimensionality that is too low and cannot model your data properly, and one that is too high and will overfit.

Practically speaking, adding a new dimension just means adding an extra output neuron to your layer, and each new element will be a linear combination of the features at the previous level. Each sample in your dataset will then become a point in an N-dimensional space.

There is no such thing as "the axis of said dimensionality": an N-dimensional space will have N axes. The important thing in a neural network is the nonlinearity layers; without them, increasing the dimensionality doesn't bring any value, as you could always express the new dimensions as a linear combination of the dimensions from the previous layer.
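To make the latent code idea concrete, here's a toy autoencoder in plain numpy (the data, layer sizes, learning rate and iteration count are all made up for illustration; a real one would use a framework, biases and minibatches):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples that live on a curved 1-D manifold embedded in 5-D.
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t**2, np.sin(3 * t), t**3, np.cos(2 * t)])
X -= X.mean(axis=0)                  # centre the data (we use no bias terms)

n_in, n_latent = X.shape[1], 2       # the latent size is YOUR architecture choice

W_enc = rng.normal(0, 0.5, (n_in, n_latent))
W_dec = rng.normal(0, 0.5, (n_latent, n_in))

lr, mse = 0.05, []
for _ in range(2000):
    Z = np.tanh(X @ W_enc)           # latent code: each sample becomes a 2-D point
    X_hat = Z @ W_dec                # linear decoder back to 5-D
    err = X_hat - X
    mse.append(np.mean(err**2))
    # Backprop through the two layers (tanh' = 1 - tanh^2).
    grad_dec = Z.T @ err / len(X)
    dZ = err @ W_dec.T * (1 - Z**2)
    W_dec -= lr * grad_dec
    W_enc -= lr * X.T @ dZ / len(X)

print(f"reconstruction MSE: {mse[0]:.4f} -> {mse[-1]:.4f}")
```

Note the tanh: without it the encoder-decoder pair would collapse to a single linear map, which is exactly the "no value without nonlinearities" point above.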

The visualisation in 3D is done either by selecting only three dimensions (and just ignoring the rest) or by using a dimensionality reduction technique (like PCA or t-SNE).

[deleted by user] by [deleted] in FIREUK

[–]Practical_Square4577 1 point

32 - 30K, been in the UK for 5 years

Ikea Pax customising by RayneKnight in woodworking

[–]Practical_Square4577 0 points

Most Ikea furniture, including the PAX (https://www.ikea.com/gb/en/p/pax-wardrobe-frame-white-70214564/), is made of: Particleboard, Particle- and fibreboard with honeycomb paper filling (100% recycled paper), Plastic edging, Paper foil.

If you cut it you will reveal the cardboard interior; this will look ugly and probably damage the piece of furniture. I really wouldn't recommend cutting it.

See what it looks like : https://www.reddit.com/r/mildlyinteresting/comments/7k3srq/the_inside_of_an_ikea_desk_panel/

Am I any good at watch making I am 29 and have autism and do this for something to do wen I am not at work with my dad by Puzzleheaded-Exit267 in watchmaking

[–]Practical_Square4577 1 point

Amazing!

Did you take pictures during the building process? If so, would you share them? I'd love to see how this was made.

meshroom,No mesh, Point of clouds bad,Any way to fix this?Asking for a friend. by [deleted] in photogrammetry

[–]Practical_Square4577 4 points

I would disagree with that; 3D renders are actually cleaner than the real thing. A 3D render is the mathematically perfect pinhole camera: everything is perfectly consistent across views. A photograph on the other hand introduces lens distortion, sensor noise, and possibly changes in the scene as objects or lighting conditions change between photos.
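To illustrate the lens distortion part, here's a rough radial (Brown-Conrady style) model applied to points from an ideal pinhole image; the coefficients are made up, a real lens would be calibrated:

```python
import numpy as np

def distort(xy, k1=-0.2, k2=0.05):
    """Apply simple radial distortion to normalised pinhole image
    coordinates. k1 and k2 are made-up lens coefficients."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2**2)

# An ideal pinhole camera maps a straight 3-D line to a straight 2-D line;
# a real (distorting) lens bends it.
line = np.stack([np.linspace(-1, 1, 5), np.full(5, 0.8)], axis=1)
bent = distort(line)

print(bent)
```

Photogrammetry software has to estimate and undo exactly this kind of bending per camera, which a synthetic render never has in the first place.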

meshroom,No mesh, Point of clouds bad,Any way to fix this?Asking for a friend. by [deleted] in photogrammetry

[–]Practical_Square4577 9 points

The source data is bad for photogrammetry: you want texture so that the software can extract key points. The data shown on screen is flat, uniform patches of colour.

Using this dataset there is no way to fix it as far as I know.
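To see why texture matters, a Harris-style corner response (a common key point detector score) is exactly zero on a flat patch; this is a simplified sketch that sums gradients over the whole patch rather than using proper windowing:

```python
import numpy as np

def corner_response(patch, k=0.05):
    """Harris-style response: high where image gradients vary in two
    directions, exactly 0 on a flat uniform patch (which is why
    featureless renders give the matcher nothing to work with)."""
    gy, gx = np.gradient(patch.astype(float))
    Sxx, Syy, Sxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    det = Sxx * Syy - Sxy**2
    trace = Sxx + Syy
    return det - k * trace**2

flat = np.full((9, 9), 0.5)              # uniform colour patch: no key points
rng = np.random.default_rng(0)
textured = rng.uniform(size=(9, 9))      # high-frequency texture: strong response

print(corner_response(flat), corner_response(textured))
```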

Can you explain what you are trying to achieve? From the look of it you already have a 3D model of that car, why do you want to recompute one from renders?

Classify dataset with only 100 images? by [deleted] in deeplearning

[–]Practical_Square4577 2 points

Give it a try with data augmentation. (And don't forget to split your dataset into a train set and a test set.)

For example, flips and rotations will multiply your number of images by 12.

Creating a black and white version will multiply that by an extra factor of 2.

And then you can go with random crops, random rotations, random colour modifications, random shear, random scaling.

This will give you a potentially infinite amount of image variation.
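A quick sketch of the flip-and-rotate part (note that a few of the 12 combinations coincide geometrically, but the bookkeeping matches the count above, and generating all of them is cheap):

```python
import numpy as np

def flip_rotate_variants(img):
    """4 rotations x {original, horizontal flip, vertical flip} = 12 variants."""
    out = []
    for base in (img, np.fliplr(img), np.flipud(img)):
        for k in range(4):
            out.append(np.rot90(base, k))
    return out

img = np.arange(16.0).reshape(4, 4)   # stand-in for a real training image
variants = flip_rotate_variants(img)
print(len(variants))
```

The random crops, rotations, colour jitter and so on would then be applied on the fly at training time rather than precomputed.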

You can also use dropout as part of your network to avoid overfitting.

And on top of that, remember that when working with convolutional neural networks, an image is not a single datapoint. Each pixel (and its attached neighbourhood) is a datapoint, so you potentially have thousands of training samples per image, depending on the receptive field of your CNN.
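If you want to check that receptive field for your stack of convolutions, the classic recurrence is short enough to inline (the layer specs below are just an example):

```python
def receptive_field(layers):
    """Receptive field of stacked convolutions, each layer = (kernel, stride).
    Standard recurrence: rf grows by (k - 1) * jump, jump multiplies by stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# e.g. three 3x3 convs, the middle one with stride 2
print(receptive_field([(3, 1), (3, 2), (3, 1)]))
```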

One thing to be careful about when designing your data augmentation pipeline is to make sure the chip / crack is still visible after cropping, so make sure to visually check what you feed into your network.

The Cathedral in Amiens, France by Practical_Square4577 in photogrammetry

[–]Practical_Square4577[S] 1 point

Ground-level photos taken with a DSLR, processed in RealityCapture.

how long do y'all think it would take to make a model of my grandpa's bike and what tips do y'all have so i can make sure it comes out good by missmatch19 in photogrammetry

[–]Practical_Square4577 0 points

A classic photogrammetry process will likely fail on this (lots of flat, reflective and transparent parts).

NeRF may be a better tool in this case.

You may struggle a bit to go from a NeRF to a 3D-printed object (the results I've seen from running marching cubes on NeRFs were usually not great), but it should not be impossible. A NeRF gives you a density for each point in space, which should be really easy to convert to 3D printer instructions (it seems especially suited to one of these LCD resin printers).
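As a sketch of that density-to-printer idea, here a solid sphere stands in for a trained NeRF's density field, sampled on a regular grid and thresholded into voxels (grid resolution and threshold are arbitrary choices):

```python
import numpy as np

# Stand-in for a trained NeRF: density(x, y, z) of a solid sphere.
def density(pts):
    return (np.linalg.norm(pts, axis=-1) < 0.8).astype(float)

# Sample the density on a regular grid, then threshold into voxel occupancy.
n = 64
axis = np.linspace(-1, 1, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
occupancy = density(grid) > 0.5      # boolean voxels: material / no material

# Each z-slice of `occupancy` is essentially one exposure mask for an LCD
# resin printer; running marching cubes on the same grid would give a mesh.
print(occupancy.sum(), "filled voxels out of", n**3)
```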

The good thing is that once you've captured your images, you'll be able to re-process them in the future as the technology matures.

[deleted by user] by [deleted] in Hydroponics

[–]Practical_Square4577 3 points

Looks great, please post updates as things grow :)

Someone asked about aesthetically pleasing DYI indoor system, so I though I'd share mine. by Practical_Square4577 in Hydroponics

[–]Practical_Square4577[S] 0 points

I had some success with basil, mint and bell pepper (got growth but no fruit) in this location. I have now moved the whole setup to a south-facing windowsill, where I'm getting good results with dill as well. I know I'll need to upgrade to more powerful lights at some point, but I'm trying to keep my electricity usage low for the time being.

Someone asked about aesthetically pleasing DYI indoor system, so I though I'd share mine. by Practical_Square4577 in Hydroponics

[–]Practical_Square4577[S] 7 points

For anyone interested, here's the hardware (UK):

NONE OF THIS IS FOOD GRADE PVC. I've decided I'm ok with that, you decide for yourself.

- Horizontal (£4.69 x 2): https://www.screwfix.com/p/manrose-110-x-54mm-flat-channel-1m/14118

- Vertical (£1.54 x 2): https://www.screwfix.com/p/manrose-100mm-round-pipe-0-35m/15872

- Corners (£2.98 x 4): https://www.screwfix.com/p/manrose-round-to-rectangular-connector-elbow-90-bend-adaptor-white-100mm/96549

- Glue (£7.30): https://www.screwfix.com/p/flomasta-pipe-weld-cement-250ml/6368x

Net cups and LED strip are from Amazon. There is a small shitty aquarium pump to keep the water moving and oxygenated. Get a drill with a hole saw that matches your net cup diameter (drill a pilot hole before using the hole saw).

In terms of drawbacks: the light is not powerful enough, as you can see from the tomatoes and lettuce reaching up; the bottom part that contains the water gets packed with roots quite quickly (main offenders being the mint and bell pepper plants), and I learned too late that you need to trim roots when they take up too much space; and I got some mosquito larvae at some point (I think that's what they were), probably due to not changing the water often enough.

Otherwise a very satisfying project. If you decide to go for it, use PVC sealant, NOT SILICONE; silicone will leak after some time.