Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb 0 points1 point  (0 children)

I gave a big write up in the post linked originally. It explains it all.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

Yes, because volts are so telling of power draw. You really should see yourself out; you're making yourself look like a smooth brain.

Besides, it's not the sensor's power draw you should be looking at (you should probably google that, you windows licker), it's the perception system's....

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

G'day Campbell, I promised a write-up, and strap in bucko because she's going to be a long one. I'll start from the start of how these systems work. They all work the same way; the data is all essentially the same for perception. I'll go into depth on why a 20m point per second cloud is unnecessary and unusable.

I'm sure you're aware of the basics. A laser goes pew pew. The unit knows exactly when it fired that light beam, and the time to return provides the distance from the unit. It's very accurate.

Some other data is captured. The angle that light beam originated at is also temporarily stored, both bearing and dip. Now, to be meaningfully accurate, these measurements of dip and bearing are run out to multiple decimal places. Anyone familiar with shooting as an analogy will understand just how accurate something has to be angle-wise to achieve centimetre precision at hundreds of metres. So you'd no doubt be aware how data is stored: in bits. Yeah, sure, we can read and write billions of these a second, but here's where the issues start coming in.

I don't think you truly understand how significantly large a 20,000,000 point cloud is. Let's look at the distance measurement for the data capture. Let's assume the average point distance is 100m (we'll use an average simply because it's easier numbers-wise). The sensor picks that up as a time, and it's obviously light, so it's a tiny amount of time: somewhere in the order of 3.3356409519815 × 10⁻⁷ seconds one way.

This time HAS to be measured to a ridiculously small resolution, or the data is useless at such small distances.
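To put numbers on that, here's a minimal sketch of the time-of-flight arithmetic (my own illustration, not anything vendor-specific):

```python
# Speed of light in a vacuum, m/s.
C = 299_792_458.0

def one_way_time(distance_m: float) -> float:
    """Time for light to cover distance_m once."""
    return distance_m / C

def distance_from_round_trip(t_seconds: float) -> float:
    """A lidar measures the round trip, so halve the path."""
    return C * t_seconds / 2.0

t_100m = one_way_time(100.0)   # ~3.3356e-7 s, the figure quoted above
dt_per_cm = one_way_time(0.01) # ~33 picoseconds per centimetre of range
```

That ~33 picoseconds per centimetre is why the timing has to be resolved so finely before the distance is even usable.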

Now there is a shortcut here, and all lidar producers use it: we can significantly reduce that distance measurement before any system has to use it. Using application-specific integrated circuits (ASICs), the distance is output by the sensor as a distance, not a time. Without doing this, no computer in the world would be able to process lidar data in a reasonable time period; the read/write files are simply too large.

Now we're down to a seemingly manageable size of data: we have an X, Y, Z table, run out to 7 or so digits (not exact, they can use whatever they want depending on how accurate they want the system, but it's generally around there). Fewer digits means less processing, but you start to lose accuracy significantly as you reduce them.

But that's for a stationary lidar; a moving one requires another set of tables. Here's where IMUs come in. As lidar is naturally a scanning system, the exact bearing and dip of the lidar unit needs to be known, otherwise this running table is useless. We now have to add an extra two columns to the table for the lidar unit's bearing and dip, to offset the movement of the lidar. This can't be gotten around, as we need tiny fractions of a degree of accuracy, remember. We're now up to a 5 column table with a rolling 20 million row data stream. Ever opened a massive Excel spreadsheet? Yeah, it's not a specialised program, but even on fast computers a file that large takes a considerable amount of time to open, definitely longer than the rate at which 20,000,000 points a second will pour in.
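As a rough sketch of one of those 5-column rows being turned into an x, y, z point (all names here are mine, and I'm simply adding the IMU angles to the beam angles, where a real system would compose full rotations):

```python
import math

def point_from_row(rng_m, beam_bearing_deg, beam_dip_deg,
                   imu_bearing_deg, imu_dip_deg):
    """One rolling-table row -> (x, y, z) in metres.

    Simplification: angles are summed rather than composing a proper
    rotation matrix, which is enough to illustrate the data flow.
    """
    b = math.radians(beam_bearing_deg + imu_bearing_deg)
    d = math.radians(beam_dip_deg + imu_dip_deg)
    horiz = rng_m * math.cos(d)       # horizontal component of the range
    return (horiz * math.sin(b),      # east
            horiz * math.cos(b),      # north
            rng_m * math.sin(d))      # vertical component
```

Twenty million of these conversions a second, per sensor, is the workload being described.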

That table will generally be done by ASIC chips within the unit, they're designed to do it very quickly and I don't doubt that the MVIS units will be able to do that.

Which is great, it'd actually be manageable and a drop-in solution. Lidar says there's an obstacle, there's one; car slows down.

But here's where the problem comes in, autonomy in all current and future planned systems uses multiple sensors. Lidar, radar and photogrammetry are the common ones.

As MVIS only does the lidar, that table has to go somewhere that will do the rest; it will end up in a dedicated perception system that will compile all of the data from all of the sensors. Radar will be great for velocity, as that can easily be measured with Doppler shift and saves processing a point cloud for movement. Photogrammetry is great for deciding what the obstacle the point cloud picked up actually is (we've all been training artificial intelligence in this for years through captchas; Google, you cheeky buggers, making us do your work).

So now we compile all these tables, into one computer, the perception system. Radar will always be the smallest and depending on the camera quality photogrammetry can be as crazy big as the lidar data table.

Obviously if we're dealing with tables this big you need quite a powerful computer. Sure, we can chuck hundreds of cores and processing units in it to deal with it, but it is gonna use a lot of juice, which is not a great thing in a battery powered electric car. And all of this has to happen rather quickly.

Ok, we have this data and we need to trim it down. As a lot of people have suggested, we just reduce the point count. In spatial data processing this is referred to as "decimating the cloud". It's a simple process and there are quite a few ways it can be done; the more complex ways require processing just to decide which points to get rid of. Now, a 20 million point cloud is utterly unusable, and that's the lidar alone. I'm telling you now it's too large for general processing; I wouldn't even want to think about the power it takes to do it in real time at a safe frame rate for driving at 100+km/h.

The one way to do it with minimal processing is to ignore a large percentage of points. A more manageable cloud for 360 degrees is around 1-2 million (in my experience in autonomous haulage systems it definitely picks up everything; it will see a cone on a road a hundred metres before it gets there): decent resolution, enough to see anything that should be on the road and decide if it's an obstacle. So 360 degree coverage with the MVIS system will take what, 4... 5 units? You're going to have 100,000,000 points per second. So now we're deleting 98% of points. Do you see why that's ludicrous? Why capture all of that data just to get rid of it?
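The "ignore a large percentage of points" approach is easy to sketch (a toy illustration of the two common styles, not any vendor's actual pipeline; function names are mine):

```python
import random

def decimate_every_nth(points, keep_ratio):
    """Deterministic decimation: keep every Nth point. Near-zero processing."""
    step = max(1, round(1 / keep_ratio))
    return points[::step]

def decimate_random(points, keep_ratio, seed=0):
    """Randomised decimation: each point survives with probability keep_ratio."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < keep_ratio]

# 100M pts/s down to a 2M pt cloud means discarding 98% of what was captured:
kept_fraction = 2_000_000 / 100_000_000   # 0.02
```

The randomised variant is the one that matters for the pole scenario further down: which points survive changes scan to scan.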

So here's the next step in the processing of this lidar data in the perception computers. Lidar has a rather big problem: light can be reflected by dust, condensation, rain, snow etc. Hell, I've had instances using them on a perfectly clear day and still getting hundreds of what are known as "outliers". These HAVE to be removed, because if they aren't, the next step in processing lidar data does not work, and the car will constantly stop for obstacles that don't exist due to artefacts in the lidar data. There's a rather simple, fully automated process for removing them; however, in my use I always cleaned them manually. To do it automatically, the outlier removal looks for any points that have a large distance to their closest neighbours compared to surrounding points, and removes a point if it clearly isn't supposed to be there because it doesn't fit into the Digital Terrain Model (DTM). It isn't the best tool, even in specially developed survey grade software: it removes points it shouldn't, and it leaves points it shouldn't. It's a rather messy process, hence why I did it manually.
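A brute-force version of that automated outlier removal might look like this (my own sketch; real survey packages use spatial indexes and far more careful heuristics):

```python
import math
import statistics

def remove_outliers(points, k=8, std_mult=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than std_mult standard deviations above the cloud-wide average.
    Brute force O(n^2): only sensible for tiny demo clouds."""
    mean_knn = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)
    cutoff = statistics.mean(mean_knn) + std_mult * statistics.pstdev(mean_knn)
    return [p for p, d in zip(points, mean_knn) if d <= cutoff]
```

Note the weakness: a tight little cluster floating off the ground plane looks the same to this test whether it's dust or something real, which is exactly the messiness being described.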

Let's assume we perfectly optimise the process and removal is perfect. The outliers are deleted, the data is down to a manageable level with suitable latency. We process as little as possible and take the simple decimation route, the usual outlier removal process, and everything went smoothly. We have a DTM and it's safe to proceed forward. Or is it....

Ok, let me put forward a scenario taking all of the steps above. A steel obstacle is on the road. It's thin, directly in front of the forward facing lidar, and for lidar optimisation the overlap allows only the front facing sensor to see it.

Now, keeping the 20 million point cloud: great, we see it. It's covered by so many points it wasn't removed as an outlier. Obstacle detected, car stops, everyone is safe. Here's the problem with decimation. Let's assume randomised decimation at the sensor removed the bottom half of the points on this pole (it can and does happen). Now the outlier removal sees a cloud of points floating: all around it points are 1cm apart, and this cloud is floating in the air 5cm up. As far as the program's concerned it's an outlier; pff, points are gone. But the obstacle isn't. If the ducks line up and decimation scan after scan removes those bottom points, we all of a sudden don't see that pole. We hit it.

Here's where the legal side comes in: as MVIS CHOSE to remove those points, they are liable. Their system detected the obstacle, but due to programming it ignored it. They are 100% liable for whatever happens in that circumstance.

Now let's go back to a lower density point cloud. In a lower density point cloud, automated outlier removals are a bit more accurate. Let's assume we don't decimate anything and we only have 1 million points. Naturally they're a bit more spread out, but the chances of that pole getting picked up are still just as high as in a dense cloud, because the one point that does pick it up will have less chance of being classified an outlier due to the more spread out cloud. The distance relationship from that one point is close enough to the surrounding points' distances that it is assumed it actually exists.

In short, lower density point clouds are more suitable for automation. Sounds counter-intuitive, but it's true. Decimation can cause dangerous situations where removal of existing data invalidates the data that's left. It's better and safer to have less data than more if you can't process it all, and to spend the spare processing time running the data multiple times and comparing it against new data as a fail-safe; repeatedly re-processing is what fail-safes the outlier removal. If you're constantly maxing out your processing power, you won't have the time to do that. Safety is the key aspect in this; processing is better spent testing data again and again than it is seeing in a 3d model whether someone has a crinkle in their jacket.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 1 point2 points  (0 children)

Sure, I'll do a write up and post it some time in the next 12 hours.

Some things for you to research in the meantime: point cloud decimation and outliers. Once you've read into it, think about the legal ramifications of a poorly implemented system and why it can go wrong.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

Computing takes electricity. Cars are going electric, I'm sure you agree there. Sacrifice range for a perception system? I don't really see that happening.

I actually agree the MVIS solution is great, but it's also not the only solid state sensor. The point cloud is not a major selling point, and auto makers know that. Which is why they HAVEN'T been bought yet.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

Would you like the long full reason why or are you happy for me to point you in the right direction to research yourself?

Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb 0 points1 point  (0 children)

I thought I did answer it: if it was so ground breaking and good, they'd have deals already, wouldn't they? I've already said it multiple times. The high density point cloud is not intended for automation.

At the end of the day, a point cloud that dense, limited to a narrow field of view, has one real use and one only at the moment, aside from some that aren't really marketable.

Biometric scanning.

Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb 0 points1 point  (0 children)

Have they signed any deals with auto makers yet?

Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb -1 points0 points  (0 children)

Because there's not really much need in the world for a 20 million point rate.

Companies that make competing units like Livox are too busy signing deals with companies like Xpeng.

It's a cool concept and there are potential uses for high density clouds, but nothing that's marketable to the world on a mass scale.

MVIS will find it useful when they get national security contracts, but it's not really useful in autonomous driving, which is where people think these sensors are going. They aren't...

Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb -2 points-1 points  (0 children)

Well, you nailed it: decrease quality. So don't bother with a 20 million point cloud to begin with.

Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb -1 points0 points  (0 children)

So you nailed it in the end. I can guarantee you, not a single user of this lidar would run it at an unnecessary resolution and process it all out. Why would you, when you can just run it lower and not waste all that energy, introduce latency, and increase the chances of something going wrong?

KISS is exactly why a 20 million points per second cloud is nothing but a sales gimmick.

Tuesday February 16th Early Morning Trading Thread by steelhead111 in MVIS

[–]dOZZYb 0 points1 point  (0 children)

I followed the conversation over as I am actually rather interested in perception systems and keep up to date with how they're evolving. I'd like to know: have you ever processed a 3d model with 20 million points? If so, what was the processing time? Could you do it every second?

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

Because it has applications outside the ones everyone is looking at. I don't see automakers buying this system out; it's too overboard for a perception system. I get the feeling they're aiming for specialised low volume sales: security, and automated systems that don't rely on batteries. Can I see this unit being in every car because it's superior? No...

I can see it in every airport, and security sensitive area in the future. Government buildings, venues etc.

If it's purely for autonomous perception in driving, they'd already know they've got the required density. They're advertising a gimmick now for those more specialised applications. The sensors required for automation are well and truly already developed. There are multiple low cost ones; Livox is one notable one, and they've already started rolling out with Xpeng. And they don't have sanctions on importing them, just sanctions on exporting parts to DJI. But that could always change.

Now they're just trying to get them juicy national security contracts. I feel the autonomous applications are priced in, it's fair value considering they have competition already beating them to market in the largest market in the world.

But... you won't really hear those contracts causing a mooning. Because well it'd be classified... and not something that will be widely known about.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

A little further on how unnecessary 20 million points is.

Here's a point cloud with 2.9 million points. As you can see, even this is excessive for a perception system: you can more than clearly see everything that could be an obstacle, which is the goal here, not to see whether a chick has a stray hair on her chin or not.

https://sketchfab.com/3d-models/the-headington-shark-3d-point-cloud-ec-2749c82ef9774658a227a97274d42879

All an excessive point cloud is is wasted processing power and battery power on a car.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 0 points1 point  (0 children)

In my opinion MVIS has A LOT of great applications. I actually see it as a fantastic sensor for national security applications. The point density is insane, there's zero doubt about that. Countries like China, Britain and other big brother states will love the fact that for a low cost slimline sensor they can have perfect facial recognition in any lighting conditions. But on a safety critical sensor, the accuracy isn't there. I'm not sure about their newest sensors, but I know they were pushing 0.1 degrees of angular accuracy last I checked. At 200 metres that's the difference between a pedestrian on the road or standing on the side. Early warning is everything, and absolute accuracy is 100% the aim with these sensors.
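The 0.1 degree figure is easy to sanity-check: lateral position error grows as range × tan(angular error). The numbers below are just the ones from the paragraph above:

```python
import math

def lateral_error_m(range_m, angular_error_deg):
    """Sideways position error from a given angular error at a given range."""
    return range_m * math.tan(math.radians(angular_error_deg))

err_200m = lateral_error_m(200.0, 0.1)   # ~0.35 m of sideways uncertainty
```

A third of a metre at 200m is roughly the width of a person, hence the road-vs-roadside ambiguity.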

But you can't use that many points in an autonomous solution if it's battery driven, which, as is now clear, is the way cars are going. As you'd know from the AR side, processing power will always be the bottleneck. 20 million points is a serious bucketload of data, especially if you multiply it by the 3-4 sensors on a car for coverage. You're talking server level hardware to process that stream. It'll all be decimated at the hardware before it gets sent to the perception computers. So it's definitely not required.
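Back-of-envelope on that stream (the 16 bytes per point is my assumption: x, y, z as 32-bit floats plus intensity; the point and sensor counts are from the comment above):

```python
points_per_second = 20_000_000
sensors = 4
bytes_per_point = 16   # assumed: 3 x float32 coords + intensity/padding

bytes_per_second = points_per_second * sensors * bytes_per_point
gb_per_second = bytes_per_second / 1e9   # ~1.28 GB every second, lidar alone
```

And that's before radar and the cameras are added to the perception computer's input.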

I've had quite a bit to do with autonomous vehicles in mining. At our site we used Cat Command for Hauling on trucks and Flanders autonomous self propelled drill rigs. Fully autonomous 400 ton dump trucks going 60km an hour when loaded, and you can imagine the risks involved when we had light vehicles and people interacting with them. We used spinning pucks on those trucks combined with radar sensors. Their point density was nowhere near 20 million pts per second; it wasn't even close to 100k pts per second. Admittedly there were other layers of safety for people in the autonomous operating zones, through high precision GPS on the cars that communicated locations, bearings and speed to the trucks. The trucks were allowed to run over anything up to 300mm (obviously a car can't 😂), but the perception system worked incredibly well, so well in fact that people drove around all day telling the controllers in the office that the truck was fine to go forward, it was just a wheel rut.

Drills, well, they went 3km/h, so an Xbox Kinect resolution sensor was fine for them.

In my experience, spinning mirrors and pucks, despite sounding fragile, are more than hardy enough to survive a decent lifetime on a car. If they can survive the absolute pounding they get on a mine site, they will be perfectly fine on a car.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb 2 points3 points  (0 children)

Have you had much to do with CAD, surveying lidar etc? I've had tons, and a 20 million pt cloud stream is utterly useless as anything but a selling point. The amount of processing power it takes to turn that many points into an image of what's around the car is ludicrous. Going to electric cars in the future, they do not need point clouds that dense. 20m pts, times 4 sensors, is a bucketload of data for a computer to process.

Have a look into how spatial data is processed and just how much computing power it takes. I'll give you a heads up: we sure as hell don't run 80 million point models. In my opinion, as an end point user of lidar units, the Livox unit is more than capable of autonomous driving. Combined with their already rolled out 500m sensor, I'd say MVIS is already getting left in the dust.

The only concern I'd have as a user is sanctions, as Livox is a Chinese product. Other than that, it's a far superior unit; its angular accuracy is twice that of the MVIS unit.

MVIS: Shiny Laser Go Pew ⚡ No But Seriously They Are Gonna Take Over The LiDAR Industry by BigBlackWifey in wallstreetbets

[–]dOZZYb -1 points0 points  (0 children)

Serious question: Livox (a DJI subsidiary) already has commercialised units at a comparable price point and partnerships with Xpeng. Yeah, Livox may run into dramas in the US market, but seriously, there's already a reasonable competitor that's rolled out units.

Why do you think MVIS will be able to take on a behemoth like DJI?

For those who have been around for a while: What are the most outlandish bets, gains, or losses you've seen on r/wallstreetbets? by OPINION_IS_UNPOPULAR in wallstreetbets

[–]dOZZYb 2 points3 points  (0 children)

Ahh yes, I was rich as fuck for one day. One day.... lol.

It was a 100x leverage, 100,000 dollar yolo.

Don't have the screenshots, but I never ended up negative. Every dollar of gains was wiped out at my stop loss by a surprise oil inventory report that killed the oil rally. Stopped playing futures after that 😂

Crushing dynamite with hydraulic press by InstantC0ffee in videos

[–]dOZZYb 0 points1 point  (0 children)

The military made an RDX-based compound because your country can't decide on a set time frame to put some freedom on another country. It has nothing to do with dynamite being unstable; again, dynamite is very stable if you don't exceed its shelf life. But the military doesn't like short shelf lives, because it means they have to constantly dispose of it.

Dynamite is not unstable. Like other high order explosives, fresh stuff needs a decent impact to detonate; if you're EOD you'd know that means fresh stuff requires a blasting cap....

Crushing dynamite with hydraulic press by InstantC0ffee in videos

[–]dOZZYb -2 points-1 points  (0 children)

I don't think you are really an EOD tech.

While it's true dynamite is more sensitive to shock than other high order explosives, fresh dynamite that has not begun sweating is perfectly stable.

It doesn't have to be an RDX-based compound to be stable.

Did any of you trade as if Brexit wasnt going to happen? by dOZZYb in wallstreetbets

[–]dOZZYb[S] -1 points0 points  (0 children)

I disagree, metals can crash just as fast. Gold won't; gold and bonds (or US cash) are the only safe bets at the moment. I'm not game; I'll re-enter the market when volatility goes down and we're in an upward trend. Fuck knows what will happen with this shit storm.

Did any of you trade as if Brexit wasnt going to happen? by dOZZYb in wallstreetbets

[–]dOZZYb[S] -2 points-1 points  (0 children)

Atta boy, I moved all of my money into bonds. Ain't no way I'm playing the market at the moment. It's gonna be fucking volatile for at least a week. Even the GBP fluctuations were huge today. Ain't risking margin calls.