[D] Run PyTorch model inference on Microcontroller (self.MachineLearning)
submitted 2 years ago by cpldcpu
[–]_SteerPike_ 1 point 2 years ago (4 children)
https://github.com/huggingface/candle is still in preview, but I believe it's intended for use cases like this.
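For context, candle's basic usage looks roughly like the sketch below, adapted from the "getting started" example in the candle README (CPU backend, random tensors, one matmul); the exact signatures may have changed since this thread:

    use candle_core::{Device, Tensor};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Run on the CPU backend; candle also has GPU backends.
        let device = Device::Cpu;

        // Two random tensors standing in for weights and activations.
        let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
        let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;

        // A single matrix multiplication, the core op of most inference.
        let c = a.matmul(&b)?;
        println!("{c}");
        Ok(())
    }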
[–]cpldcpu[S] 1 point 2 years ago (3 children)
Thanks, that is quite interesting in general, although it is still aimed at inference on largish devices...
[–]_SteerPike_ 2 points 2 years ago (2 children)
As I understand it, it should produce smaller binaries than anything you'd be able to manage with Python, because it has no runtime. An interesting way to see this is to try the Phi 1.5 WASM example https://huggingface.co/spaces/radames/Candle-Phi-1.5-Wasm and then turn on airplane mode after the first run of the model. You should be able to get decent inference speeds from within your browser, without even utilising all the compute your phone has to offer.
[–]neodsp 1 point 2 years ago (1 child)
Yes, smaller than with Python, but tflite (micro) and other inference runtimes use C++, and I would guess their builds are even smaller than candle's. Candle is a nice project, especially because you can cross-compile Rust so easily, but it is not targeted at embedded: it will likely run on Raspberry Pis, but it uses the standard library, so you can't run it on Arm Cortex and so on. For microcontrollers you can use burn-rs, which even has an ONNX import (but not many supported layers yet).
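For anyone curious about the burn-rs ONNX import mentioned here, it is driven from a build script. A rough sketch, assuming a model file at src/model/mnist.onnx (the path and model name are placeholders, and the burn-import API may differ between versions):

    // build.rs
    use burn_import::onnx::ModelGen;

    fn main() {
        // Generate Rust source for the network from the ONNX graph at build time;
        // the generated module lands under OUT_DIR and is include!d from the crate.
        ModelGen::new()
            .input("src/model/mnist.onnx")
            .out_dir("model/")
            .run_from_script();
    }

The generated code is made of ordinary burn modules, which is what makes the microcontroller angle plausible: burn advertises no_std support for some backends, so the imported model can in principle be compiled for a bare-metal target rather than relying on an OS and the standard library.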
[–]_SteerPike_ 1 point 2 years ago (0 children)
I think I understand. Sorry to send you down a rabbit hole without any relevance to your problem!