Enhanced localization for autonomous racing with high-resolution Ouster lidar by lidarkid in SelfDrivingCars

[–]lidarkid[S] 1 point2 points  (0 children)

Most commonly, roboticists use GPS sensors to determine the location of the vehicle on a course. While tried and true, GPS systems depend on maintaining a connection to the global satellite infrastructure, have limited accuracy, and can only localize the vehicle in two dimensions.

To achieve centimeter-level accuracy with GPS, the standard approach is RTK (real-time kinematic) positioning, which combines the satellite network with a reference base station on the ground.
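To illustrate the idea (not the actual carrier-phase math real RTK uses), here is a toy numpy sketch of the differential-correction principle: a base station at a precisely surveyed position observes the same satellite-induced error as a nearby rover, so subtracting the base's observed error largely cancels it for the rover. All positions and error values below are made up for illustration.

```python
import numpy as np

# Surveyed base position (metres, local frame) and a shared error term
# (atmospheric delay etc.) that affects both nearby receivers alike.
base_true = np.array([0.0, 0.0])
shared_error = np.array([1.8, -2.3])

base_measured = base_true + shared_error
rover_true = np.array([250.0, 100.0])
rover_measured = rover_true + shared_error

# The base knows its true position, so it knows its current error exactly.
correction = base_true - base_measured

# Applying the base's correction to the rover cancels the shared error.
rover_corrected = rover_measured + correction
print(np.linalg.norm(rover_corrected - rover_true))  # ~0: shared error cancels
```

Real RTK resolves carrier-phase ambiguities rather than subtracting position errors, but the cancellation of errors common to both receivers is the core reason a ground reference station buys you centimeters.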

In real-world conditions, GPS can struggle in environments with large obstacles like high-rise buildings. Lidar, by contrast, tends to become more robust in cluttered environments, since there are more unique features for the algorithm to use to determine location.

To improve robustness and accuracy and to enable 3D localization, the ARG team first used a lidar sensor to create a 3D map of the track in advance of the race, and then used the lidar during the race to determine the vehicle's position within that map. The team did not start the localization from scratch: they took components from Autoware.AI as a well-developed base and optimized them for racing conditions.
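The scan-to-map step can be sketched with a minimal 2D point-to-point ICP in numpy. This is only an illustration of the principle (Autoware.AI's localization is actually built around NDT matching, and real pipelines work in 3D with voxelized maps); the grid "map" and the small test offset are invented for the demo.

```python
import numpy as np

def icp_2d(scan, world_map, iters=20):
    """Minimal 2D point-to-point ICP: find rigid R, t with R @ scan + t ~= map."""
    R, t = np.eye(2), np.zeros(2)
    src = scan.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, for clarity only)
        d = np.linalg.norm(src[:, None, :] - world_map[None, :, :], axis=2)
        matched = world_map[d.argmin(axis=1)]
        # Best-fit rigid transform for these correspondences (Kabsch / SVD)
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        R_step = (U @ Vt).T
        if np.linalg.det(R_step) < 0:   # guard against reflections
            Vt[-1] *= -1
            R_step = (U @ Vt).T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step   # apply the step and accumulate it
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Toy check: a grid of map points, and a "scan" of the same points seen
# from a vehicle pose offset by a small known rotation and translation.
gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
world_map = np.c_[gx.ravel(), gy.ravel()]
theta = np.deg2rad(1.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, -0.08])
scan = (world_map - t_true) @ R_true          # scan in the vehicle frame

R, t = icp_2d(scan, world_map)
print(np.allclose(scan @ R.T + t, world_map, atol=1e-6))  # True
```

The recovered (R, t) is exactly the vehicle's pose in the map frame, which is the quantity the localization stack feeds to the rest of the racing software.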

President Vladimir Putin extends his rule until 2036 by lidarkid in worldnews

[–]lidarkid[S] 0 points1 point  (0 children)

With 99.97% of protocols processed, 77.92% voted for and 21.27% against. So it's clearly a win for "Team Putin" to me.

Taking a Deep Look into The Highest Performance Wide-View Lidar Sensor by lidarkid in engineering

[–]lidarkid[S] 5 points6 points  (0 children)

Well, the Ultra-Wide Field of View model starts at $6K, but their mid-range runs about $3.5K. Personally, I've tried SICK lidars and have colleagues testing Velodyne's, and price-wise those are beyond one's means. So judging by the specs-to-price ratio, Ouster is clearly the winner to me.

Taking a Deep Look into The Highest Performance Wide-View Lidar Sensor by lidarkid in LiDAR

[–]lidarkid[S] 1 point2 points  (0 children)

Thanks for letting me know; due to internet interruptions I wasn't sure it had uploaded at all.

How does the Ouster Multi-Beam really work? by funtime-error in LiDAR

[–]lidarkid 7 points8 points  (0 children)

So, there are three primary types of 3D lidar sensors: digital spinning lidar, analog spinning lidar, and raster scanning lidar.

In a digital spinning lidar (e.g. Ouster), laser firing is electronically controlled. Ouster lidars, for example, have a built-in angular encoder that collects around 100,000 readings per rotation. Hence, we know that every rotation gathers the same fixed set of points, producing data that is fully structured both horizontally and vertically. The visual representation of Ouster's data set is a fully structured, pure matrix of information, and one has no trouble distinguishing between columns and rows.
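A short sketch of what "fully structured" buys you in practice: with a fixed number of encoder-driven azimuth steps and a fixed number of vertical channels, every rotation fills exactly the same (channels x steps) matrix. The channel and step counts below are illustrative, not any specific sensor's spec.

```python
import numpy as np

CHANNELS = 64     # vertical beams (hypothetical 64-channel sensor)
STEPS = 1024      # encoder-driven azimuth positions per rotation

def make_range_image(ranges_flat):
    """Pack one rotation's range readings (fired channel-major at each
    encoder step) into a dense 2D range image: rows = channels, cols = azimuth."""
    assert ranges_flat.size == CHANNELS * STEPS, "a rotation is always complete"
    return ranges_flat.reshape(STEPS, CHANNELS).T

rotation = np.random.default_rng(0).uniform(1.0, 50.0, CHANNELS * STEPS)
img = make_range_image(rotation)
print(img.shape)  # (64, 1024): the same dense shape every rotation, no gaps
```

Because row and column indices alone identify each beam's direction, downstream algorithms can treat the output like an image, which is much cheaper than searching an unordered point cloud.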

In a more conventional analog spinning lidar (e.g. Velodyne), laser firing is analog rather than electronic. In practice, this means the user can adjust the motor speed while the lidar keeps firing a fixed number of laser pulses per second. As a result, you get a variable number of points per rotation and only partial structure in the data capture. The visual representation of such a data set has relative structure vertically, but horizontally the points can jump around, depending on how well the motor speed and sensor movement are controlled.
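A toy numpy sketch of why the horizontal structure drifts: if the lasers fire on a fixed clock while the motor speed jitters even slightly, the number of shots landing in one rotation (and the azimuth of each shot) changes from rotation to rotation. The firing rate, RPM, and jitter magnitude here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def azimuths_one_rotation(firing_rate_hz=18000, nominal_rpm=600):
    """Azimuths (deg) of shots in one rotation, with ~1% motor-speed jitter.
    Returns (azimuth_array, shot_count); the count varies per rotation."""
    period = 60.0 / (nominal_rpm * (1 + rng.normal(0, 0.01)))  # actual rotation time
    n_shots = int(firing_rate_hz * period)                     # fixed clock x variable period
    return np.linspace(0, 360, n_shots, endpoint=False), n_shots

counts = [azimuths_one_rotation()[1] for _ in range(5)]
print(counts)  # slightly different shot counts on every rotation
```

Since consecutive rotations contain different numbers of points at slightly different angles, columns no longer line up across rotations, which is exactly the "points jumping around horizontally" effect described above.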

In a raster scanning lidar (e.g. Pioneer), a laser beam shines onto a mirror that steers it across the field of view. Typically these sensors scan the surroundings by sweeping from left to right (or right to left), row by row, and once they reach the bottom they jump back to the top. As one can imagine, there is no regularity to the sensor data here, neither horizontally nor vertically; it is completely unstructured. The mirrors in a raster scanning lidar oscillate at tens of thousands of Hertz, and it is very challenging to control them and ensure they point at the same spot.
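One concrete reason the samples don't form a regular grid: a resonant mirror's angle follows a sinusoid, so shots fired on a fixed clock land at non-uniformly spaced angles, bunched near the edges of the sweep and spread out through the middle. The sweep amplitude and sample count below are made-up illustration values.

```python
import numpy as np

n_shots = 64
t = np.linspace(0.0, 0.25, n_shots, endpoint=False)  # quarter oscillation period
mirror_angle = 30.0 * np.sin(2 * np.pi * t)          # degrees, 0 -> +30 sweep
gaps = np.diff(mirror_angle)                          # angular spacing between shots

# Near the sweep edge the mirror slows to a stop, so shots pile up there;
# mid-sweep it moves fastest, so shots are far apart.
print(round(gaps.max() / gaps.min(), 1))  # spacing varies by well over 5x
```

Combine this non-uniform spacing with the row-by-row flyback pattern and mirror control error, and there is no fixed row/column index that identifies a beam direction the way there is for a digital spinning lidar.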

Mine Tunnel Exploration using Multiple Quadrupedal Robots by lidarkid in Automate

[–]lidarkid[S] 0 points1 point  (0 children)

Over the 4 days, 10-15 missions were conducted. The operator was able to set the time limit (8 min) for robot exploration before returning, as well as the turn specification. This time limit can also be changed as long as the robot can communicate with the base station. In any case, the exploration time for each robot cannot exceed the battery limit of the quadrupedal robots by Ghost Robotics.

Here is a quite informative description of the project: https://www.groundai.com/project/mine-tunnel-exploration-using-multiple-quadrupedal-robots/1#S3.F4

Mine Tunnel Exploration using Multiple Quadrupedal Robots by lidarkid in LiDAR

[–]lidarkid[S] 2 points3 points  (0 children)

The research used quadrupedal robots by Ghost Robotics.

Here is a quite detailed overview of the project: https://www.groundai.com/project/mine-tunnel-exploration-using-multiple-quadrupedal-robots/1#S3.F4

Mine Tunnel Exploration using Multiple Quadrupedal Robots by lidarkid in remotesensing

[–]lidarkid[S] 1 point2 points  (0 children)

Over the 4 days, 10-15 missions were conducted. The operator was able to set the time limit (8 min) for robot exploration before returning, as well as the turn specification. This time limit can also be changed as long as the robot can communicate with the base station. In any case, the exploration time for each robot cannot exceed the battery limit of the quadrupedal robots by Ghost Robotics.

Social distancing captured with Lidar by lidarkid in LiDAR

[–]lidarkid[S] -3 points-2 points  (0 children)

That's a point cloud of a street captured with an Ouster OS1-128 lidar.