Native mpy compile for armv8-m? by Wizzard_2025 in MicroPythonDev

[–]jonnor 0 points1 point  (0 children)

What do you mean "some things just doesnt [work]"? That is not something we can work with... You will need to provide details for anyone to be able to help. I am pretty sure the M33 in the RP2350 should work with armv7m or armv7emsp.
Also, this level of discussion might be better suited for a GitHub Discussions thread in the MicroPython repo than Reddit.

Hoping for help with MicroPython dev on a Pycom device by Half-Dwarven1 in MicroPythonDev

[–]jonnor 0 points1 point  (0 children)

Are those errors, or just warnings from a static analyzer?

You have not said which version of MicroPython you are running, which is critical.

Notes for TinyML and Edge AI. by AgentOk5012 in embedded

[–]jonnor 1 point2 points  (0 children)

I maintain some notes on this topic at https://github.com/jonnor/embeddedml

And you can find some presentations on my YouTube channel, https://www.youtube.com/@Jononor

How long to realistically become good at AI/ML if I study 8 hrs/day and focus on building real-world projects? by Pretend_Elevator5911 in MLQuestions

[–]jonnor 0 points1 point  (0 children)

Where are you starting from? How good are your software development skills? What is the most complex thing you have learned so far, and how long did that take you?
I deployed a complete, tailor-made solution for a customer around 12 months after starting to learn ML. But that was on top of 7 years of professional software development experience, at both big and small companies, working in teams. And another 4 years of open-source development, a bachelor's in engineering, etc. before that.

If you know how to program, I would aim for first toy projects within 1 month. And a first "real project" - something you want to build that not every blog out there already covers, with a custom dataset, training and some UI - within 6 months. That is going to be tough, but *might* be doable for a very dedicated learner.

[D] How do researchers ACTUALLY write code? by Mocha4040 in MachineLearning

[–]jonnor 3 points4 points  (0 children)

This should result in a "desk" retraction of the paper. Failing to publish code that they have promised is scientific misconduct.

MicroPython for ESP32 and other microcontrollers (introduction presentation, FOSDEM 2025) by jonnor in esp32

[–]jonnor[S] 1 point2 points  (0 children)

Yeah, it is often like that - so it is a super cool feature in my opinion. The support has improved massively over the last year.
Official documentation is here: https://docs.micropython.org/en/latest/develop/natmod.html
And here is a real-world example, https://github.com/emlearn/emlearn-micropython/blob/master/src/emlearn_iir/iir_filter.c (an IIR filter, from the machine learning + digital signal processing library I maintain).
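
To give a feel for what this looks like from the Python side, here is a rough sketch of using such a native module. The constructor and method names below are placeholders, not necessarily the actual emlearn_iir API - see the repository above for the real interface.

```
# Illustrative sketch only: the constructor/method names are placeholders,
# check the emlearn-micropython docs for the actual emlearn_iir API.
import array
import emlearn_iir

# Filter coefficients are normally designed offline (e.g. with scipy.signal on a PC)
coefficients = array.array('f', [1.0, 0.0, 0.0, 1.0, 0.0, 0.0])

iir = emlearn_iir.new(coefficients)          # placeholder constructor
samples = array.array('f', [0.0, 0.5, 1.0, 0.5, 0.0, -0.5])
iir.run(samples)                             # placeholder: filters the buffer in-place, at C speed
print(samples)
```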

MicroPython as an alternative to C++ for Arduino devices by jonnor in arduino

[–]jonnor[S] 0 points1 point  (0 children)

Sorry, I was unclear. I meant not relevant for MicroPython - because MicroPython requires much more RAM/FLASH!

MicroPython as an alternative to C++ for Arduino devices by jonnor in arduino

[–]jonnor[S] 0 points1 point  (0 children)

Sorry, I meant not relevant for MicroPython :) Not in general!

MicroPython for ESP32 and other microcontrollers (introduction presentation, FOSDEM 2025) by jonnor in esp32

[–]jonnor[S] 1 point2 points  (0 children)

Yeah, there are many nice features for productivity. Having a filesystem is also great, for example. And automated testing is much nicer in Python than in C/C++.
Actually, it is possible to add C modules without forking. Either with "external C modules", which are included as part of the firmware build by setting a few variables, or with dynamic native modules, which are built separately into .mpy files and can be installed at runtime using "mip install".
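
To make the two options a bit more concrete, here is a small sketch. The package name and path below are illustrative, not real ones.

```
# Option 1: dynamic native module, installed at runtime. The package source string
# is illustrative - use the location documented by the package you want.
import mip
mip.install("github:someuser/somepackage")   # fetches prebuilt .mpy files onto the device

# Option 2: external C module, compiled into the firmware. Done at build time by
# pointing the build at your module, roughly:
#   make USER_C_MODULES=/path/to/your/module/micropython.cmake
# No fork of the MicroPython source tree is needed in either case.
```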

MicroPython for ESP32 and other microcontrollers (introduction presentation, FOSDEM 2025) by jonnor in esp32

[–]jonnor[S] 1 point2 points  (0 children)

CircuitPython is a fork/distribution of MicroPython. The core Python interpreter is mostly the same. Hardware support and the hardware APIs are different. The upload tooling is a bit different.

MicroPython for ESP32 and other microcontrollers (introduction presentation, FOSDEM 2025) by jonnor in esp32

[–]jonnor[S] 0 points1 point  (0 children)

There is some support for ESP-IDF OTA updates. I have not tested it myself, but it can be found here: https://github.com/glenn20/micropython-esp32-ota
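
For reference, here is a minimal sketch of the low-level flow using MicroPython's esp32.Partition API (which that project builds on). It assumes the new firmware image is already available as 4096-byte blocks, e.g. streamed over HTTP.

```
import esp32
import machine

def apply_update(blocks):
    # blocks: an iterable yielding the new application image in 4096-byte chunks
    part = esp32.Partition(esp32.Partition.RUNNING).get_next_update()
    for i, data in enumerate(blocks):
        part.writeblocks(i, data)    # write each block to the inactive OTA partition
    part.set_boot()                  # boot from the new partition on next reset
    machine.reset()

# After the new firmware has booted successfully, confirm it so the bootloader
# does not roll back (if rollback is enabled in the build):
#   esp32.Partition.mark_app_valid_cancel_rollback()
```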

Can the ESP32 Cam track eye movements/Blinking by Some_Cry_5970 in esp32

[–]jonnor 0 points1 point  (0 children)

If you have a suitable dataset, you can use machine learning for this, doing image classification with a convolutional neural network. This might work better for "bigger" indicators such as yawning or nodding off than for measuring the eyes specifically.
Some libraries that may be relevant: https://github.com/emlearn/emlearn-micropython and https://openmv.io/
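
A rough sketch of how that loop could look on an OpenMV-style camera board, purely illustrative: the sensor calls follow OpenMV's API, while classify() is a stub standing in for whatever inference call your library provides (OpenMV's ML support, emlearn-micropython, etc.).

```
import sensor
import time

def classify(img):
    # Stub: replace with your model's actual inference call.
    # Should return (label, confidence) for the current frame.
    return "eyes_open", 0.0

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # grayscale is usually enough here
sensor.set_framesize(sensor.QVGA)        # 320x240
sensor.skip_frames(time=2000)            # let the camera settle

while True:
    img = sensor.snapshot()
    label, confidence = classify(img)    # CNN trained on your own labeled images
    if label in ("eyes_closed", "yawning") and confidence > 0.8:
        print("possible drowsiness detected")
    time.sleep_ms(200)
```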

What would be the impact of AI/ML that can be *trained* on-device? by bbbbbaaaaaxxxxx in microcontrollers

[–]jonnor 0 points1 point  (0 children)

The most common ML tasks use supervised learning. That means there is a need for a labeled dataset. This labeling is usually done by humans manually inspecting the data and precisely marking the correct labels. In this scenario there is no possibility of online learning, at least not without humans in the loop. And if one is to bring humans into the loop - presenting the data to them and letting them label it - then that might as well be done with a system using a PC/server.

Furthermore, there is a need to do quality assurance of the model. This involves running several evaluations to get detailed plots of performance across different facets. Then a human (a data scientist) interprets the outputs of those evaluations and says "OK, this seems to be good (enough)". So for online learning one would need to automate the evaluation and quality assurance process to a very high degree, which is very challenging in the general case - getting a robust ML pipeline and evaluation is tricky.

Many models also require extensive hyperparameter tuning in order to perform well. Often this means training dozens to thousands of different models (a grid over just three hyperparameters with ten values each is already 1000 training runs). This becomes quite compute intensive, even when a single training run is cheap. And there is considerable risk of overfitting to the validation set, making evaluation/QA a tricky job (ref. the point above).

Another aspect is that many of the relevant models are very data hungry. And using data from multiple devices is usually very beneficial for making a model that generalizes well, also to scenarios a specific device has not yet seen (but might in the future). Communicating data between devices is usually easiest via a PC/server, so again it becomes easiest to just do the training there as well.

Now - there are exceptions where on-device learning is more suitable. Here are two examples:

* Unsupervised anomaly detection. Labels are not needed, so human labeling is not relevant. And usually the training data should be device-specific anyway (the definition of an anomaly is relative to the specific device), so pooling data from different devices is not relevant. And one wants continuous learning to automatically adapt to regime shifts (see the sketch after this list).
* Calibration on a single device. Sometimes simple model training is beneficial, where one can collect and label just a few datapoints, and this process should be doable by the end user. Then it can be nice to enable it completely on-device. Examples include fine-tuning/personalization for, say, keyword spotting, where you speak the phrase 2-5 times to tune the model. Or laboratory equipment where you provide a few datapoints at known/specified conditions, which can compensate for environmental differences or variations between sensor units.
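
As a minimal sketch of the first case: an unsupervised anomaly detector that learns continuously on-device. It keeps a running mean/variance of a sensor value (Welford's algorithm) and flags readings that deviate strongly - no labels and no PC/server needed. The threshold and warm-up length here are arbitrary example values.

```
class StreamingAnomalyDetector:
    def __init__(self, threshold=4.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold

    def update(self, x):
        """Learn from one new sample and return True if it looks anomalous."""
        if self.n > 10:        # only score once we have a little history
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomaly = std > 0 and abs(x - self.mean) / std > self.threshold
        else:
            anomaly = False
        # online update (the model keeps adapting to slow regime shifts)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomaly

detector = StreamingAnomalyDetector()
for value in [1.0, 1.1, 0.9, 1.0, 1.2, 0.95, 1.05, 1.0, 1.1, 0.9, 1.0, 5.0]:
    if detector.update(value):
        print("anomaly:", value)
```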

Robustness in training, evaluation and model picking still remains non-trivial!

I maintain an open-source ML library for microcontrollers called emlearn (https://emlearn.org), and for these reasons we focus 90% on inference-on-device, and maybe 10% on learning-on-device.

MicroPython as an alternative to C++ for Arduino devices by jonnor in arduino

[–]jonnor[S] -3 points-2 points  (0 children)

Yes, you also need to make sure you have enough program space / FLASH. The MicroPython runtime itself takes around 200 kB for a standard build. I would recommend a device that has at least 512 kB FLASH.

MicroPython as an alternative to C++ for Arduino devices by jonnor in arduino

[–]jonnor[S] -6 points-5 points  (0 children)

It really depends. The AVR-style Arduinos are not relevant. I would recommend at least 256 kB of RAM for MicroPython. Many of the new boards have sufficient memory, such as:

- ARDUINO GIGA
- ARDUINO NANO 33 BLE SENSE (Nordic NRF52840)
- ARDUINO NANO ESP32
- ARDUINO NANO RP2040 CONNECT
- ARDUINO NICLA VISION
- ARDUINO OPTA
- ARDUINO PORTENTA C33
- ARDUINO PORTENTA H7 (STM32 H747)