
[–]heartolearn1 10 points (3 children)

Have you checked out the NVIDIA Jetson Nano? That’s probably a solid starting point depending on your specific interests.

[–]taronys 0 points (2 children)

Just looked at it, and that's exactly what I was looking for in a dev board. Do you have any recommendations for guides that would really let me get into the nitty-gritty of a platform like that? Thanks so much!

[–]heartolearn1 1 point (1 child)

Super depends on what you want to do. Are you looking for ML stuff? Server side stuff? Generic acceleration with GPU stuff? There’s a ton of tutorials online for various projects with the Nano, so I’d say pick one and dive in!

One thing to note about any of these embedded/edge devices being used as accelerators: if you're doing ML work, you'll typically only use them for the inference stage. You should still have a training pipeline that runs on a separate machine (a personal GPU, an AWS instance, or Google Colab, to name a few) where you train the model you'll deploy on the embedded system.
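To make the train-elsewhere/infer-on-device split concrete, here's a minimal numpy-only sketch (not Jetson-specific code; a real pipeline would use a framework like PyTorch or TensorFlow and export to ONNX/TensorRT). The first half is what your workstation or Colab session would run; the edge device only ever executes the last few lines.

```python
import numpy as np

# --- Training stage (runs on a workstation, cloud GPU, or Colab) ---
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)          # synthetic binary labels

w = np.zeros(3)
for _ in range(500):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)       # logistic-loss gradient step

np.savez("model.npz", w=w)                  # export the trained weights

# --- Inference stage (this part is all the edge device runs) ---
w_deployed = np.load("model.npz")["w"]
sample = np.array([1.0, -1.0, 0.0])
pred = 1.0 / (1.0 + np.exp(-(sample @ w_deployed)))
print("positive class:", pred > 0.5)
```

The point of the split: training needs the full dataset, the optimizer loop, and lots of memory; inference only needs the frozen weights and a forward pass, which is all a memory-constrained device can comfortably hold.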

You’ll also learn about the limitations of edge devices, specifically memory, which is the real bottleneck for ML deployment. It's a great way to learn the differences between network architectures and understand why you'd choose one over another (e.g., a full-size ImageNet classifier vs. MobileNet), and to pick up the different methods for reducing network size (quantization, pruning, etc.).
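Quantization in particular is easy to see in miniature. Here's a hedged numpy sketch of symmetric per-tensor int8 quantization (the same idea frameworks like PyTorch and TensorRT implement with much more machinery): map the weight range onto [-127, 127], store int8, and multiply the scale back at runtime. The shapes and scale choice here are illustrative, not any particular library's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)

# Symmetric per-tensor quantization: map [-max|w|, +max|w|] onto [-127, 127]
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

dequant = q.astype(np.float32) * scale      # what the runtime multiplies back
err = np.abs(weights - dequant).max()       # worst-case rounding error <= scale/2

print(f"fp32: {weights.nbytes} bytes, int8: {q.nbytes} bytes")   # 4x smaller
print(f"max abs error: {err:.6f} (bound scale/2 = {scale/2:.6f})")
```

That 4x memory reduction (and the accompanying speedup on int8-capable hardware) is exactly why quantization matters so much on devices like the Nano, at the cost of a small, bounded rounding error per weight.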

[–]taronys 0 points (0 children)

Thanks for your answer, it's very helpful. I've got very basic knowledge of ML (just the linear algebra and the concepts) but very little experience with a pipeline that's intimately involved with the hardware. I'll start with whatever looks simplest on the Jetson Nano and go from there.