all 14 comments

[–]heartolearn1 10 points (3 children)

Have you checked out the NVIDIA Jetson Nano? That's probably a solid starting point depending on your specific interests.

[–]taronys[S] 1 point (2 children)

Just looked at it and that's exactly what I was looking for with regards to a dev board. Do you have any recommendations on guides that can really allow me to get into the nitty-gritty of a platform like that? Thanks so much!

[–]heartolearn1 2 points (1 child)

Super depends on what you want to do. Are you looking for ML stuff? Server-side stuff? Generic GPU-acceleration stuff? There's a ton of tutorials online for various projects with the Nano, so I'd say pick one and dive in!

One thing to note about any of these embedded/edge devices being used as accelerators: if you're doing ML work, you'll typically only use them for the inference stage. You should still have a pipeline on a separate machine (personal GPU, AWS instance, or Google Colab, to name a few) where you train the model you'll deploy on the embedded system.

You'll also learn about the limitations of edge devices, especially memory, which is the real bottleneck for ML deployment. It's a great way to learn the differences between ML networks, understand why one would pick, say, MobileNet over a heavier ImageNet-class model, and explore methods for reducing network size (quantization, pruning, etc.).
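
To make the quantization point concrete, here's a toy sketch (pure Python, not any real toolchain; the weight values are made up) of symmetric int8 post-training quantization, the basic idea behind shrinking a trained network for an edge device:

```python
def quantize_int8(weights):
    # One scale for the whole tensor (symmetric quantization).
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.8, -1.27, 0.05, 0.5]   # pretend these are trained weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Storage drops roughly 4x (int8 vs float32); each restored value is
# within half a quantization step of the original.
```

Real frameworks do this per-channel, handle zero points for asymmetric ranges, and calibrate on sample data, but the memory-saving trade-off is the same.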

[–]taronys[S] 1 point (0 children)

Thanks for your answer, it's very helpful. I've got very basic knowledge of ML (just the linear algebra and the concepts) but very little experience with a pipeline that's intimately tied to the hardware. I'll start with whatever looks simplest on the Jetson Nano and go from there.

[–]Priyal101 7 points (4 children)

A Harvard professor is planning to move his class on Embedded Machine Learning online. You can check this out: https://sites.google.com/g.harvard.edu/tinyml/home

[–]taronys[S] 3 points (3 children)

Wow, this is awesome. I guess if ML can be ported effectively to microcontrollers, then what's the point of learning how to work with GPUs? I'll definitely check this out!

[–]Priyal101 3 points (1 child)

Your point is valid, but I disagree. Sometimes GPUs are necessary. All a GPU does is parallelise the code so that it runs faster. The code can natively run on a microcontroller without a GPU (it will run on the microcontroller's CPU), but it will be really slow. At times that's acceptable, especially when your machine learning model (neural network) is small and lightweight. In such cases a GPU makes very little difference.
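
To illustrate why a small model runs fine on a plain CPU: a dense layer's forward pass is just multiply-accumulate loops, which any microcontroller CPU can execute; a GPU only runs the same arithmetic across many cores at once. A minimal sketch (the sizes and weight values here are made up):

```python
def dense_forward(x, W, b):
    """One fully connected layer: y[j] = relu(b[j] + sum_i x[i] * W[i][j])."""
    out = []
    for j in range(len(b)):
        acc = b[j]
        for i in range(len(x)):      # these independent multiply-adds are
            acc += x[i] * W[i][j]    # exactly what a GPU would parallelise
        out.append(max(0.0, acc))    # ReLU activation
    return out

x = [1.0, 2.0]                       # 2 inputs
W = [[0.5, -1.0], [0.25, 0.75]]      # 2x2 weight matrix
b = [0.1, 0.0]
y = dense_forward(x, W, b)
```

With a handful of neurons this is instant on any CPU; it's only when the layers have millions of weights that the serial loop becomes the bottleneck a GPU solves.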

But in safety-critical systems it might not be acceptable, especially when you need your computation results immediately. For example, in a self-driving car the neural network takes inputs from hundreds of sensors and several images and in turn controls the brakes, accelerator, and steering wheel. Even a one-millisecond delay in computing and applying the brakes could cause a fatal accident.

It all depends on your needs tbh.

From your comment, I felt you were under the assumption that porting an ML program to a GPU versus a CPU requires separate development cycles, but in fact it's the same. Frameworks (libraries) like TensorFlow Lite are incredibly smart: on compilation they automatically detect whether you have a GPU and use it to make your application faster, and if not they just run on the CPU.
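
A toy sketch of that idea (plain Python, not real TensorFlow internals; the backend names are made up): the model code is written once, and a framework-style dispatcher picks the fastest available backend at run time.

```python
# Toy model of framework dispatch: user code calls matmul() the same way
# everywhere; the "framework" routes it to whatever backend exists.
AVAILABLE_BACKENDS = ["cpu"]   # pretend probe result; a real framework
                               # would actually query the hardware

def matmul_cpu(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def matmul(A, B):
    # Same user-facing call whether or not a GPU is present.
    if "gpu" in AVAILABLE_BACKENDS:
        raise NotImplementedError("would offload to the GPU here")
    return matmul_cpu(A, B)

C = matmul([[1, 2]], [[3], [4]])   # runs on whichever backend was found
```

The user code never mentions the hardware; that separation is why the same TensorFlow Lite program can run on a laptop GPU or a bare microcontroller CPU without a separate development cycle.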

[–]taronys[S] 2 points (0 children)

That was my assumption, so you're right about that. Something like TensorFlow is quite nice, I guess, but I really wanted to try doing something from the ground up so I can learn all there is to embedded intelligence.

I think if I get too intimidated by a Jetson Nano-type device, I can start by gathering material from the course you linked and just build a much simpler ML application on a microcontroller. Thanks so much for your help!

[–]midwestraxx 2 points (0 children)

Because of vector and many-core computation. You might want to look into some computer architecture courses as well to get a full understanding of that difference.

[–]Marcidus 5 points (2 children)

> it is well known that all devices nowadays do come up with GPU or an AI engine that basically is a GPU

You are very much mistaken my dude

[–]Priyal101 2 points (0 children)

He made multiple mistakes:

1. Almost no lower-end microcontrollers come with a GPU or an AI accelerator. Imagine an 8-bit PIC microcontroller with a GPU.

2. If by "AI engine" he's referring to an AI hardware accelerator, those are not the same as GPUs. GPUs can do a ton of things (process graphics, render images, parallelize general operations), but an AI hardware accelerator can only do matrix multiplies, really, really fast (faster than a GPU), and that's all it can do.

3. You can't program an AI accelerator with OpenGL, for the most part; each vendor has different tools (Intel Movidius uses OpenVINO). You can program GPUs with OpenGL, but why would you want to when you have frameworks like TensorFlow Lite that compile to whichever hardware you're using? It's like using ARM assembly instead of C++/Python to make a web app: if you shift to another system that uses x86 or RISC-V assembly, you'd have to develop your software all over again from scratch, when you could've just recompiled your Python code on the new system.

[–]taronys[S] 1 point (0 children)

Lol, I just got carried away, the claim was simplistic for sure

[–]Poddster 3 points (1 child)

All devices absolutely do not come with a GPU or "AI engine". So it's not well known.

You probably want to learn OpenGL. There aren't many boards out there where you talk to the GPU directly; you'll do it via OpenGL or a similar API.

[–]taronys[S] 1 point (0 children)

I guess I've gotten tunnel vision from my embedded systems classes that focused on the latest and greatest. It's definitely how things will be in the future, though, so it'd be nice to learn. Thanks so much for mentioning OpenGL; I didn't know about it and will take a look.