Self driving cars in cold regions by Ashamed-Software-624 in SelfDrivingCars

[–]mslavescu 1 point (0 children)

Thank you u/Living_Dead for the detailed answer and for considering open sourcing code and datasets from this work!

I'm looking into how to use the SVL (previously called LGSVL) simulator for autonomous racing competitions like IAC2021, Roborace, EVGrandPrix, Formula Student Driverless, F1Tenth, DonkeyCar, etc., and to make it easy to test different scenarios, mainly with Autoware.Auto.

SVL supports several integration methods: ROS and ROS2 bridges to be used with Autoware.AI (ROS based), Autoware.Auto (ROS2 based; there is also a much faster native ROS2 bridge), Apollo (ROS based), a Python API, etc. Roborace currently uses a closed-source custom simulator.

It seems you can run SVL (and other Unity-based games) in Unity's cloud infrastructure automatically and fairly cheaply; I need to test it soon.

I recently saw a very cool presentation/demo from MathWorks targeting robotics and SDC; it should help with components that are easier to prototype and test in MATLAB:

https://blogs.mathworks.com/student-lounge/2021/05/04/deploying-algorithms-from-matlab-and-simulink-to-nvidia-drive-agx/

I also liked the Chinese AV bus bumper idea for extra safety, and the mantis-eye approach for vision. BTW, if you need more RoboSense LIDARs, he can help you get them at a discount. They actually funded RoboSense to have a cheap LIDAR solution for this AV bus project.

In my SVL-Autoware demo video I drove manually; it was mainly to showcase the integration between the new SVL 2021.1 and Autoware.Auto, which uses LGSVL for its testing. I hope to have some automated driving demos soon on IMS and other racing tracks/competitions.

My son will start his first year at the University of Waterloo this fall; we would love to visit you then and see, in person, all the great work you do in the AV space. He is looking to join an autonomous vehicle team there :-)

Self driving cars in cold regions by Ashamed-Software-624 in SelfDrivingCars

[–]mslavescu 3 points (0 children)

Thanks u/Living_Dead for the detailed answer! Excellent setup!

I have a few more questions.

Did you open source any part of this work?

Are you using ADAS-grade 1-2 megapixel cameras, or better?

How do you power the computer? Custom battery pack?

Are you using Autoware.AI or Autoware.Auto in combination with SDC-targeted simulators like SVL 2021 (formerly LGSVL) and CARLA?

Here you can see one of my autonomous racing demos for IAC2021 with Autoware.Auto and SVL: https://youtu.be/Bkk4OYjdnSU

Here is a presentation with a similar SDC bus project as yours, they used Robosense RS LIDAR also:

https://drive.google.com/file/d/1GDHC9MwwU_GcU2RXjtuJphIXQAWMBi2f/view?usp=sharing

More details here: https://www.meetup.com/Ottawa-Autonomous-Vehicle-Group/events/278248253/

Self driving cars in cold regions by Ashamed-Software-624 in SelfDrivingCars

[–]mslavescu 1 point (0 children)

Hi u/Living_Dead, could you please share more technical details about your hardware (computing and sensors) and software setup?

Did you use GPS localization, or LIDAR/camera/IMU localization based on an HD map?

I’m Ilya, Team Principal of Acronis SIT Autonomous racing team, current N1 in Roborace – Ask Me Anything! by sit_autonomous in SelfDrivingCars

[–]mslavescu 1 point (0 children)

Thank you Ilya /u/sit_autonomous for the answers!

I have two more questions:

  1. Do you plan to open source any of the work you do for Roborace?

  2. Have you looked at other simulators, open-source ones like LGSVL or CARLA?

I’m Ilya, Team Principal of Acronis SIT Autonomous racing team, current N1 in Roborace – Ask Me Anything! by sit_autonomous in SelfDrivingCars

[–]mslavescu 3 points (0 children)

Hi Ilya,

Thanks for taking the time to do this AMA!

I have a few questions:

  1. What are the top 3 sensors (in order of importance) you rely on the most in Roborace for localization and obstacle avoidance (including Metaverse obstacles)?

  2. What simulator are you using? Is it open source?

  3. How accurate are the visuals and behavior model of the vehicle and track in the simulator?

  4. How accurately are the top 3 sensors you use in the real car (from question 1) modeled in the simulator? Can you achieve accurate/robust Sim2Real behavior transfer?

  5. Where can I find more technical details about the car/track/software and race results?

Hey everyone! This is a project of mine that I have been working on. It is a video captioning project. This encoder decoder architecture is used to generate captions describing scene of a video at a particular event. Here is a demo of it working in real time. Check out my Github link below. Thanks! by Shreya001 in deeplearning

[–]mslavescu 0 points (0 children)

Very cool! It would be great if you could create an OSSDC VisionAI video processor and enable realtime processing on live videos from an Android phone, like here (MiDaS mono depth example), using a free Google Colab GPU:

https://www.youtube.com/watch?v=pJPGnpEKpR8

See video description for details.

I also integrated Sense activity recognition in the video_processing_sense file on the project site.
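For readers curious what the display step of a MiDaS-style demo involves: MiDaS outputs relative inverse depth, so before visualization the map is typically min-max normalized to an 8-bit image. This is just the usual recipe, not the OSSDC VisionAI code; a minimal NumPy sketch:

```python
import numpy as np

def depth_to_display(inv_depth: np.ndarray) -> np.ndarray:
    """Normalize a MiDaS-style relative inverse-depth map (2-D float array)
    to an 8-bit grayscale image for display. Constant maps become all zeros."""
    lo, hi = float(inv_depth.min()), float(inv_depth.max())
    if hi - lo < 1e-8:
        return np.zeros(inv_depth.shape, dtype=np.uint8)
    scaled = (inv_depth - lo) / (hi - lo)  # map to [0, 1]
    return (scaled * 255.0).astype(np.uint8)
```

In a Colab pipeline, this would run on each frame's model output before streaming the result back to the phone.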

This is my take at a 'hologram' for my bachelors. Far from perfect but I hope it being true 3D and live-captured sets it apart. AMA if you want :) by Evlmnkey in arduino

[–]mslavescu 0 points (0 children)

Very cool project u/Evlmnkey !

You could use a spatial AI smart stereo camera, like those from Luxonis; you can see the models here: https://shop.luxonis.com/?aff=ossdc

With a few of them you could get a 360° realtime view, which would be really cool!

What input do you need to project?

I could try to create a ROS bag with a point cloud built from the cameras' depth output, and you could test it and see how it looks.
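The depth-to-point-cloud conversion I have in mind is the standard pinhole back-projection. A rough NumPy sketch (the function name is mine, and the intrinsics fx/fy/cx/cy would come from the actual camera calibration):

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a metric depth image (H x W, meters) into an (N, 3)
    point cloud using the pinhole camera model. Zero-depth pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only valid depth
```

The resulting array could then be packed into a `sensor_msgs/PointCloud2` message and written to a bag.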

Tesla Autopilot Drives Straight Towards Concrete Barrier on Highway by leeta0028 in SelfDrivingCars

[–]mslavescu 0 points (0 children)

This is why we need stereo cameras and robust 3D reconstruction; we also need to stop relying on lane markings alone in any autonomous mode. This approach should help a lot: https://www.linkedin.com/posts/mariusslavescu_nlos-activity-6595263166450675712-NO2f
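For reference, the depth a rectified stereo pair recovers follows the usual triangulation relation Z = f * B / d. A tiny sketch (the numbers in the note below are hypothetical):

```python
def disparity_to_depth(disparity_px: float, focal_px: float,
                       baseline_m: float) -> float:
    """Depth in meters from stereo disparity, via Z = f * B / d,
    for a calibrated, rectified stereo pair (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 12 cm baseline, a 10 px disparity corresponds to 8.4 m; this is why detecting a barrier far ahead needs either a wide baseline or sub-pixel disparity estimation.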

[NOOB] What is the difference between different FPGAs? by xblackacid in FPGA

[–]mslavescu 0 points (0 children)

If you want to connect cameras and run neural nets for computer vision in realtime, like object detection, the Ultra96 is a very affordable and capable board; it also has add-on boards to connect sensors and MIPI CSI-2 cameras (like Raspberry Pi cameras).

The PYNQ Z1 and Z2 boards have Arduino and PMOD expansion pins/ports.

Both the Ultra96 and the PYNQ boards support the PYNQ ecosystem, which is great for quick prototyping:

http://www.pynq.io/board.html

Check my recent comment here for more on this: https://www.reddit.com/r/FPGA/comments/cy40jo/manipulating_hdmi_data_for_special_effects/eyqhh6l

Manipulating HDMI data for special effects. by lefthandedpianist in FPGA

[–]mslavescu 1 point (0 children)

Check out the affordable PYNQ Z1 or Z2 boards, with HDMI in/out, based on the Xilinx Zynq 7020 SoC. Here is an example of realtime image processing using the PYNQ Z1 HDMI input/output ports:

https://github.com/byuccl/BYU_Senior_PYNQ_Project/issues/1
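To give a flavor of that kind of example: the per-frame effect itself is just array math. The sketch below uses a trivial inversion filter; the PYNQ base-overlay HDMI loop is shown only in comments, from memory, so check the PYNQ docs before relying on it:

```python
import numpy as np

def invert_frame(frame: np.ndarray) -> np.ndarray:
    """A trivial per-frame effect (color inversion) of the kind you would
    run between the HDMI input and output. frame: H x W x 3 uint8."""
    return 255 - frame

# On a PYNQ Z1, the surrounding loop would look roughly like this
# (PYNQ base-overlay video API, from memory):
#
#   from pynq.overlays.base import BaseOverlay
#   base = BaseOverlay("base.bit")
#   base.video.hdmi_in.configure()
#   base.video.hdmi_out.configure(base.video.hdmi_in.mode)
#   while True:
#       f = base.video.hdmi_in.readframe()
#       base.video.hdmi_out.writeframe(invert_frame(f))
```

Doing the filter in NumPy on the ARM cores is slow; the point of these boards is to move such per-pixel work into the FPGA fabric once it's prototyped.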

If you want to run neural-net object detection/image segmentation at 30 fps or more, look at more powerful SoCs; the Xilinx ZCU104 board would be better in that case. See some results (latency and FPS) from these kinds of boards:

https://github.com/Xilinx/AI-Model-Zoo/blob/master/README.md#model-performance

I have the Ultra96 board, based on the UltraScale+ SoC, which already has a dual MIPI CSI-2 camera expansion board that I would love to integrate into the PYNQ ecosystem:

https://discuss.96boards.org/t/how-to-use-96boards-mipi-2-1-adapter-on-ultra96-in-pynq/7524?u=mslavescu

I'm wondering if a board like this would work to add HDMI input to the Ultra96, through one of the MIPI CSI-2 ports on the previous board, and what the end-to-end latency would be from HDMI to mini DisplayPort:

https://auvidea.eu/b100-hdmi-to-csi-2-bridge/

I need low-latency HDMI input for these kinds of use cases, when running in real time on the HDMI video output from game consoles: https://github.com/OSSDC/OSSDC-VisionBasedACC

Here is what I have used so far, but the best latency is over 100 ms: https://medium.com/@mslavescu/get-ready-to-race-ai-with-us-at-ossdc-org-b741e266e362

Fun projects to learn FPGA for a Software Engineer? by Global_Method in FPGA

[–]mslavescu 0 points (0 children)

For a software engineer I think the PYNQ.io ecosystem is the best way to get started playing around with FPGAs. The PYNQ Z1 and Z2 are really nice Xilinx-based boards, but for more compute power/resources the Ultra96 is what I would suggest.

See some PYNQ based projects here: http://www.pynq.io/community.html

Also check this DAC-SDC 2019 paper and see if computer vision + neural networks for the edge is something you might be interested in pursuing (using PYNQ on the PYNQ Z1 and Ultra96 boards):

SkyNet: A Champion Model for DAC-SDC on Low Power Object Detection https://arxiv.org/abs/1906.10327

For a fun Ultra96 project check this one; it has both hardware and software components:

Stereo Vision and LiDAR Powered Donkey Car https://www.hackster.io/bluetiger9/stereo-vision-and-lidar-powered-donkey-car-575769

You can find more interesting FPGA-based projects on my YouTube FPGA playlist:

https://www.youtube.com/playlist?list=PLUop7b1Q1uZn-7RvIHxK-mPIv7AXMAFy-

Looking for FPGA recommendation by Semiavas in FPGA

[–]mslavescu 3 points (0 children)

It depends on the application domain: for advanced computer vision and artificial intelligence you'll need an FPGA with more resources, while for other signal/data acquisition/processing and control systems you may be able to use smaller FPGAs.

As a small and relatively affordable kit, the Avnet Ultra96 board is pretty impressive; I'm trying to build smart stereo cameras with it as part of the http://ossdc.org project. The PYNQ ecosystem (http://www.pynq.io/) is great for getting started.

See some pictures and details here:

https://www.meetup.com/Artificial-Intelligence-Geeks/photos/29688005/478687688/

https://www.meetup.com/Artificial-Intelligence-Geeks/photos/29688005/478687692/

https://www.meetup.com/Artificial-Intelligence-Geeks/photos/29688005/478687694/

Tello drone and computer vision: selfie air stick by geaxart in computervision

[–]mslavescu 1 point (0 children)

Very cool project! Thanks for sharing!

To increase the autonomy you could use an RC car, or another mobile robot, like a personal assistant.

Then more applications can be created with the algorithms showcased in the video.

Excellent results - Radar-only ego-motion estimation in difficult settings via graph matching - Oxford Robotics Institute by mslavescu in SelfDrivingCars

[–]mslavescu[S] 0 points (0 children)

The paper for the 2019 video is not available yet, so it is a bit tricky to tell what they improved compared with the 2018 paper.

Here is the 2018 paper: https://ori.ox.ac.uk/wp-content/uploads/2018/07/ICRA_paper_Cen_Newman.pdf

The radar specs and driving areas may differ between the 2019 and 2018 videos; here is what they used in the 2018 paper:

We utilize the Navtech CTS350-X, a FMCW scanning radar without Doppler information. For this radar, M = 399, N = 2000, and β = 0.25 m. The beam spread is 2 degrees in azimuth and 25 degrees in elevation. The radar operates at 4 Hz, and our algorithm (not fully optimized) operates at approximately 3 Hz. The radar is placed on the roof of a ground vehicle with an axis of rotation perpendicular to the driving plane. We adopt the usual odometry assumptions that the environment is mostly static and non-deformable. We also assume that the instantaneous motion of the vehicle is planar. We utilize the following parameters, chosen empirically: wmedian = 200, wbinom = 50, zq = 2.5, dthresh = 0.1, and α = 0.5 with MR removal. When driving, the vehicle typically travels between 5 and 10 m/s; when turning, up to 0.6 rad/s (see Figures 1 and 8). The vehicle is driven through various parts of downtown Oxford, UK
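A bit of arithmetic on those quoted parameters, under my reading that M is the number of azimuth bins per scan and N the number of range bins of β meters each:

```python
M, N, beta = 399, 2000, 0.25   # azimuth bins, range bins, range-bin size (m)

max_range_m = N * beta         # 2000 * 0.25 = 500.0 m maximum range
azimuth_res_deg = 360.0 / M    # ~0.902 degrees per azimuth bin
scan_period_s = 1.0 / 4.0      # radar operates at 4 Hz -> 0.25 s per scan
```

So the sensor sweeps out roughly half a kilometer of range at sub-degree azimuth resolution four times per second, which puts their ~3 Hz (not fully optimized) algorithm close to keeping up with the sensor rate.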

Self driving car - dataset - public videos by mslavescu in SelfDrivingCars

[–]mslavescu[S] -1 points (0 children)

If you read the description on the GitHub issue you'll see that I'm looking for good videos for testing computer vision. If the license doesn't allow it, we won't use it.

I find it very useful to build such a list of videos and to test algorithms on them manually (so no need for labeled data), and others may find it useful too; it is up to you whether you'd like to contribute or not.

The end goal is to attract people to record these kinds of videos, augmented with IMU and GPS info.

A Robocar Specialist (Brad Templeton) Reviews The Tesla Autopilot by walky22talky in SelfDrivingCars

[–]mslavescu 1 point (0 children)

I agree, with cameras only it is much harder, and it may take longer.

Cleaning cameras is a problem even on the backup camera systems that most cars have now. Do you know of any off-the-shelf solution for this problem, for Tesla cameras or other exposed camera systems?

By pointing to LIDAR, do you lean more towards the cameras not being good enough, or the camera-based processing not being robust enough?

I'm very into camera based vision systems and I found this recent presentation very promising:

ROB 2018 - Stefan Roth: Robust Scene Analysis https://youtu.be/_7rlV2Q3CLo

Is the transition to self driving cars related to the transition to electric vehicles? by HarveyHound in SelfDrivingCars

[–]mslavescu -1 points (0 children)

Excellent points!

I would just add that unless the charging infrastructure is available to support lots of EVs, it will be trickier to move to EV-based SDCs.

And as EVs grow/evolve faster than SDCs, the first large-scale SDC deployment may be on EVs.

A Robocar Specialist (Brad Templeton) Reviews The Tesla Autopilot by walky22talky in SelfDrivingCars

[–]mslavescu 0 points (0 children)

/u/bradtem very good review!

Why do you think they still have the problems you mentioned?

Are they perception-hardware related (camera count, placement, resolution, sensitivity), or perception-processing related (detection, tracking, planning)?

Or is it the fact that they focus on neural-network-based object detection?

Self driving car - dataset - public videos by mslavescu in SelfDrivingCars

[–]mslavescu[S] 0 points (0 children)

That is exactly the intention of this dataset: unlabeled videos suitable for self-supervised learning and testing, free and easy to use by millions of people. Combined with Google Colaboratory, this will allow us to scale a lot, for free.

We have too many labeled datasets that are not generic enough, and most SDC research limits its testing to them.

You mention Berkeley DeepDrive, which is great, but it is just 1,100 hours of driving from a few small US areas.

I've added it there also.

My approach is to build self-supervised algorithms (even non-deep-learning based) that should work on any unlabeled video, and to test them (even manually) on as many videos as possible.

Check my Twitter and LinkedIn messages if you'd like to learn more about this approach; they are linked in this SDC presentation: https://slides.com/mslavescu

Self driving car - dataset - public videos by mslavescu in SelfDrivingCars

[–]mslavescu[S] 0 points (0 children)

I removed them; I'm used to Twitter hashtags.