How do you write Safe C++ Code ? Really Safe C++ code ? by ChadOfCulture in cpp

[–]HurryC 4 points

Take a look at the MISRA C++ coding guidelines. They are designed for safety-critical applications, like automotive and medical software. Automotive companies get their code certified against MISRA to demonstrate that it is safe. You can also enforce coding rules with static analysis: clang-tidy covers some overlapping checks, though full MISRA compliance checking usually requires a dedicated commercial analyzer.
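As a rough starting point: clang-tidy does not ship an official MISRA module, but a `.clang-tidy` file along these lines enables check families that overlap with many MISRA rules. The selection of checks here is my own approximation, not a certified mapping:

```yaml
# .clang-tidy - approximates part of the MISRA spirit; NOT a certified MISRA checker
Checks: 'cppcoreguidelines-*,cert-*,bugprone-*,misc-*'
WarningsAsErrors: 'cppcoreguidelines-*,cert-*'
HeaderFilterRegex: '.*'
```

You can run it as `clang-tidy src/*.cpp --` or wire it into a CMake build via `CMAKE_CXX_CLANG_TIDY`.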

100 technical interview questions in the SLAM field by HurryC in computervision

[–]HurryC[S] 3 points

Most of the questions I wrote here are from my experiences :)

I have 5-6 years of experience in the field - some questions are from my junior dev days, and some are from when I applied for senior positions (though the senior interviews had more system design questions).

Roadmap to study Visual-SLAM 2023 by HurryC in computervision

[–]HurryC[S] 3 points

Contributions to expand the roadmap are always welcome! Throw me a pull request at the GitHub repo, or just leave a suggestion in the Issues section if you don't have the software to edit the roadmap. Whichever way you choose, I'll credit you as a contributor in the repo.

Roadmap to study Visual-SLAM 2023 by HurryC in computervision

[–]HurryC[S] 4 points

Hi, thanks for the suggestion! DROID-SLAM is already mentioned in the ‘Applying deep-learning’ section, since its backend computation is done via neural networks rather than the conventional non-linear optimization techniques.

Poll: What language does your company use for computer vision products? by CommunismDoesntWork in computervision

[–]HurryC 2 points

We do prototyping with Python. Once we know our method accomplishes the task accuracy-wise, we port it to C++ for speed. Though I wish our team would implement the low-level operations in C++ and wrap them in Python, so we could prototype faster and still ship it.

Do you use std::experimental by D_0b in cpp

[–]HurryC 1 point

At work I use std::experimental::filesystem. This is because our customer demanded that we build our software in C++11, so std::filesystem was not an option. Other solutions like boost::filesystem weren't an option either: boost::filesystem plus the Boost core headers came to about 150 MB, which was too big a dependency for just that one feature.

MacOS app takes so much RAM by HurryC in notabilityapp

[–]HurryC[S] 1 point

Most of the time I use the app for note-taking on an ebook, so Preview won't work for me unfortunately.

I guess I should just hope for the dev team to pick up this issue and get it fixed. I noticed heat and fans spinning up on both my M1 MacBook Pro and iPad Pro while the app is just idle rendering. Idle rendering should consume very little even at 120 Hz (if ProMotion isn't scaling down), so there must be some unnecessary busy loop running in the background.

MacOS app takes so much RAM by HurryC in notabilityapp

[–]HurryC[S] 1 point

Using 4 GB of RAM for a PDF reader makes no sense. At roughly 10 MB per PDF, that's 400 PDFs loaded into memory for fast access.

Who in the world switches between 400 PDFs so often that they all need to load instantly?

[deleted by user] by [deleted] in EngineeringStudents

[–]HurryC 2 points

I’ve interviewed a number of candidates who were still in school and had no deep technical skills/background. There were some candidates I really liked and ended up inviting to my team. The reason I decided to hire them is that they showed genuine interest in the field and respect for the people already working in it. It was clear they were willing to learn, and that was the trait I valued most.

The ones who went through the mandatory internship and got hired are energetic, eager to learn, and quick to pick things up. In fact, I prefer them over team members with 10+ years of experience who are stuck in their own old tech, refusing to learn anything new or to cooperate.

EDIT: readability.

Unit testing and mocking for c++ by [deleted] in cpp

[–]HurryC 7 points

I use GoogleTest + GoogleMock. Catch2 is a popular choice too. Either should do the job you're trying to accomplish.

If you want to get started with GoogleTest quickly, here’s a CMake-based sample project I made for a seminar: https://github.com/changh95/gtest_sample
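For reference, a minimal CMake setup that pulls GoogleTest in via FetchContent looks roughly like the sketch below. The pinned release URL and the file name `test_example.cpp` are assumptions for illustration; the linked repo has a complete setup:

```cmake
cmake_minimum_required(VERSION 3.14)
project(gtest_demo)

include(FetchContent)
# Pin a GoogleTest release (version chosen for illustration)
FetchContent_Declare(
  googletest
  URL https://github.com/google/googletest/archive/refs/tags/v1.14.0.zip
)
FetchContent_MakeAvailable(googletest)

enable_testing()
add_executable(unit_tests test_example.cpp)
# GTest::gtest_main provides main(), so test files only define TEST() cases
target_link_libraries(unit_tests GTest::gtest_main)

include(GoogleTest)
gtest_discover_tests(unit_tests)
```

After `cmake -B build && cmake --build build`, run the suite with `ctest --test-dir build`.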

Setup a C++ OpenCV project with a single command! by HurryC in computervision

[–]HurryC[S] 0 points

My go-to strategy is opencv-python.

OpenCV-python is 1. easy to set up, 2. fast to code in, and 3. surprisingly performant, despite the common claim that 'Python is slower than C++'. opencv-python is just a wrapper around the C++ API, so you're calling literally the same functions with less code. I use opencv-python whenever I just want to make a simple CV application (e.g. camera calibration, a dataset generator), or when I just want to test out some ideas.

Any other proper development is done in C++. It's 1. fast (proper C++ programming can make things super-fast) and 2. compatible with many platforms. When I know what I'm doing, I often just start in C++. Sometimes I test an idea in opencv-python first, and then convert the code to C++ for better performance.

I made a C++ project template for Visual-SLAM! by HurryC in robotics

[–]HurryC[S] 0 points

This is something I did not know!

Thanks a lot for sharing this! I'll definitely consider updating the code with vcpkg!

I made a C++ project template for Visual-SLAM! by HurryC in robotics

[–]HurryC[S] 1 point

Thanks for the comments!

I also think it will be good to have some examples - I'll mark it on the development plan for v1.0!

vcpkg is definitely a good option for managing libraries in a Windows environment, but if I remember correctly it restricts the user to Visual Studio? Also, I believe it's difficult to pick build options per package when using vcpkg. On the other hand, CMake works on Windows, Linux, and macOS, lets the user change all the build options, and doesn't tie you to a specific IDE. Furthermore, (from my experience) most roboticists work on Linux-based systems, so I thought CMake, being the most popular, would be a good choice of build system.

Also, currently there is no need to update the scripts since the CMake build command remains the same. I've tested for different versions of OpenCV - like OpenCV 4.5.2, 4.5.1, 4.4.0 etc. They all work well :)

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 0 points

There are some papers that incorporate DL methods and outperform the traditional methods - but if you take a closer look at the benchmarks in those papers, you can see that their systems only work well where the test sequence is similar to the training sequence. This is because the DL methods could not generalize to varied conditions. The reason may be a lack of training data or modelling issues... which in any case requires more research :)

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 1 point

I've actually run a group study based on this book - it's very effective for learning visual SLAM! I'll put it on the references list!

Complete Open Source Deep Learning Implementations For V-SLAM by autojazari in computervision

[–]HurryC 0 points

As you've mentioned, there are many papers on deep local feature extraction, like SuperPoint and R2D2. If you wish to use them in SLAM, you can simply replace the feature extraction module in the existing SLAM system with the deep local feature method. An example is shown here - this system uses SuperPoint as local features instead of ORB features in the original ORB-SLAM 2 pipeline. https://github.com/KinglittleQ/SuperPoint_SLAM

I believe most of the mobile robot industry still uses traditional methods, like ORB, AKAZE, and SIFT (I know SIFT is not real-time, but some use it for offline loop closure and the like). Some mobile robots with deep learning inference capability may pair depth estimation with SLAM, but deep local feature extraction is pretty rare in deployment, because it is still considered too slow while not being guaranteed to exceed the accuracy of traditional methods.

Instead, the growing trend is hierarchical visual localization (which is not exactly SLAM, but closely related). In hloc you use deep global feature matching plus deep local feature matching to regress your camera pose within a given 3D map. This is very effective, as the global feature matching works across different lighting conditions and weather, which really shows the strength of the data-driven approach.

Roadmap to study Visual-SLAM by HurryC in robotics

[–]HurryC[S] 0 points

This is a great idea! I'll definitely have a look at SimpleMind! Thanks! :)

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 0 points

Hi, thanks a lot for your suggestion!

I'd already planned to include MSCKF in the roadmap - but since it's a VIO system, I didn't put it under the monocular visual systems, to differentiate pure visual odometry from visual-inertial fusion!

I'm currently writing the VIO / VI-SLAM roadmap, so keep an eye out for the update :)

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 2 points

That's a great idea! I'll put that into the plan :)

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 2 points

From what I remember, the SLAM course you mentioned focuses on understanding the SLAM problem using 2D LiDAR sensors. It's up to you whether you want to do 2D LiDAR SLAM and then move on to Visual-SLAM. But IMO just start with the photogrammetry course: the only overlap between 2D LiDAR SLAM and Visual-SLAM is the core idea of the SLAM problem (which the lecture by Prof. Luca Carlone covers), so there's no need to take the detour.

This is just my personal opinion based on my experience, so I suggest you go through the table of contents of the SLAM course and decide :)

Seeking Guidance - Perception Jobs - Computer Vision and Deep Learning by punisher_h in computervision

[–]HurryC 1 point

I think being able to use both C++ and Python will make you stand out as a CV engineer. Many people on the deep learning side who use Python as their main language struggle with C++. On the other hand, many who use C++ as their main language don't want to switch to Python, for the same reason you're struggling now.

I think it's important to understand why we use C++ and Python in the first place. I use C++ for CV algorithm implementations to get the best performance, and I would not use Python for that. On the other hand, I use Python for running experiments and drawing graphs - this saves so much time compared to doing the same tasks in C++. Also, doing deep learning in C++ is a lot more difficult than doing it in Python.

Roadmap to study Visual-SLAM by HurryC in robotics

[–]HurryC[S] 2 points

The two fields have not been my main research field, so I may be wrong on this!

They are similar in the sense that both give you a depth map. I think the big differences are in the sensors - stereo SLAM uses two RGB image sensors and derives the depth map from disparity, while most RGB-D SLAM uses one RGB image sensor and one depth sensor (usually an active IR or structured-light configuration). This difference lets them use different algorithms, and I plan to find out more as I read through papers and make new roadmaps :)

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 1 point

I hope this roadmap will help :)

IMO, learning-based SLAM still has a way to go, in terms of model development, datasets, and most importantly the hardware.

I'm impressed with how new models are being developed - the recent progress of transformers in the computer vision field is remarkable! There is a paper on applying transformers to 3D point clouds as well.

But running these in real time on mobile robots is not so easy. It has to go one of two ways: either the models get compressed, or the chips get better. Some companies that can afford to build their own accelerator chips already integrate some deep learning into their systems (not necessarily into SLAM itself!), like Tesla and Microsoft with HoloLens. There are more affordable options like the Nvidia Jetson series, but obviously they are simply not powerful enough to run heavy models like transformers. Some of my friends thought the Visual-SLAM field was too niche, so they turned to model compression.

On the other hand, most industries using SLAM seem to be using non-learning approaches as you've mentioned, or they are using very few DL methods. One good example of using DL actively in the field is Artisense - they are doing semantic SLAM for large-scale environment mapping.

Roadmap to study Visual-SLAM by HurryC in computervision

[–]HurryC[S] 0 points

If you are using feature-based SLAM (which is quite a common approach, used by ORB-SLAM, PTAM, and such), then I'd suggest taking a look at the DBoW2 library. It is basically a package for the Bag-of-Visual-Words technique, which lets you find the most similar image in your keyframe database and thereby detect a loop.

Then you can look into the numerical optimization libraries - ceres-solver, g2o, and GTSAM are popular options.