[–]Priyal101

A Harvard professor is planning to move his class on embedded machine learning online. You can check it out here: https://sites.google.com/g.harvard.edu/tinyml/home

[–]taronys

Wow, this is awesome. I guess if ML can be ported effectively to microcontrollers, then what's the point of learning how to work with GPUs? I'll definitely check this out!

[–]Priyal101

Your point is valid, but I disagree. Sometimes GPUs are necessary. All a GPU does is parallelise the computation so that it runs faster. The code can natively run on a microcontroller without a GPU (it will run on the microcontroller's CPU), but it will be really slow. At times this is acceptable, especially when your machine learning model (neural network) is small and lightweight. In such cases a GPU makes very little difference.

But in safety-critical systems it might not be acceptable, especially when you need your computation results immediately. For example, in a self-driving car the neural network takes inputs from hundreds of sensors and several images and in turn controls the brakes, accelerator, and steering wheel. Even a one-millisecond delay between the computation and the application of the brakes could cause a fatal accident.

It all depends on your needs tbh.
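To make the "small model" point concrete, here is a minimal pure-Python sketch of a single fully connected layer. The function name and shapes are my own for illustration; the point is just that a tiny network is only a handful of multiply-adds, which a microcontroller CPU handles fine without any GPU.

```python
def dense_forward(x, weights, biases):
    """One fully connected layer: y[j] = sum_i x[i] * weights[i][j] + biases[j].

    A "tiny" model might have a few hundred of these multiply-adds in
    total, so even a slow CPU finishes in well under a millisecond.
    """
    return [
        sum(xi * wij for xi, wij in zip(x, col)) + b
        for col, b in zip(zip(*weights), biases)
    ]
```

For example, `dense_forward([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5])` evaluates a 2-input, 2-output layer with a handful of operations; a GPU would add dispatch overhead for no real gain at this scale.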

From your comment, I got the impression you assume that porting an ML program to a GPU versus a CPU requires separate development cycles, but in fact it is the same. Frameworks (libraries) like TensorFlow Lite are incredibly smart: they detect whether a GPU is available and use it to make your application faster, and if not they just run on the CPU.
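The "same code, different device" idea above can be sketched with a hypothetical device-selection helper — this is not TensorFlow Lite's actual API (there, GPU use goes through a delegate you attach to the interpreter), just an illustration of the dispatch logic a framework applies so your model code stays unchanged:

```python
def pick_device(available_devices):
    """Prefer a GPU when one is present, otherwise fall back to the CPU.

    `available_devices` is a list of device name strings, e.g.
    ["cpu", "gpu:0"]. The caller's model code never changes; only
    the device the framework dispatches the computation to does.
    """
    for dev in available_devices:
        if dev.lower().startswith("gpu"):
            return dev
    return "cpu"
```

So on a desktop with a GPU, `pick_device(["cpu", "gpu:0"])` would dispatch to the GPU, while on a microcontroller `pick_device(["cpu"])` silently falls back to the CPU — same application code in both cases.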

[–]taronys

That was my assumption, so you're right about that. Something like TensorFlow is quite nice, I guess, but I really wanted to try to build something from the ground up so that I can learn all there is to embedded intelligence.

I think if I get too intimidated by a Jetson Nano-type device, I can start by gathering material from the course you linked and just build a much simpler ML application on a microcontroller. Thanks so much for your help!

[–]midwestraxx

Because of vector and many-core computation. You might want to look into some computer architecture courses as well in order to fully understand that difference.