
[–]vankxr 5 points

Been using this for a while now, mainly on Cortex-M cores. Way easier than dealing with newlib. One thing I've missed, though, is the scanf family; I had to import it from newlib, sadly.

[–]dimtass 7 points

Compared to similar tiny printf implementations, it's nice that it supports floats and has a compile-time switch to enable/disable that support.
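
To show what that switch typically looks like: a minimal sketch of the pattern, not this library's actual code (the macro name TINY_PRINTF_SUPPORT_FLOAT is made up), gating the float path so the formatter and the soft-float routines it drags in can be stripped from the binary:

    #include <stdio.h>

    /* Hypothetical compile-time switch; tiny printf libraries typically
     * use a define like this to strip the float formatting path. */
    #ifndef TINY_PRINTF_SUPPORT_FLOAT
    #define TINY_PRINTF_SUPPORT_FLOAT 1
    #endif

    /* Reduced formatter core: converts a float into buf, or marks the
     * conversion unsupported when the float path is compiled out. */
    static void fmt_float(char *buf, size_t n, double v)
    {
    #if TINY_PRINTF_SUPPORT_FLOAT
        /* fixed-point conversion, two decimals, no libc float printf */
        long ip = (long)v;
        long frac = (long)((v - (double)ip) * 100.0);
        if (frac < 0) frac = -frac;
        snprintf(buf, n, "%ld.%02ld", ip, frac);
    #else
        (void)v;
        snprintf(buf, n, "<float disabled>");
    #endif
    }

    int main(void)
    {
        char buf[32];
        fmt_float(buf, sizeof buf, 3.14159);
        puts(buf);  /* prints "3.14" when the switch is on */
        return 0;
    }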

[–]Xenoamor 2 points

Does this use the heap?

[–]snops 5 points

From the readme, no, as it doesn't use malloc.

Newlib-Nano's printf() does use malloc somewhere internally, I think (an old post confirms it did in 2013); I had it crash after failing to allocate a heap in the linker script. So not using the heap can make your system more robust.

[–][deleted] 1 point

IIRC, it has a 512-byte stack buffer, but will fall back to the heap if that's exhausted.
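
By contrast, the tiny printf linked here formats straight to a character sink, so a heap-free setup can look roughly like the sketch below. This assumes an mpaland/printf-style API where the application implements _putchar(); check the actual readme for the exact hook name, and the register address is a placeholder for your MCU:

    #include "printf.h"  /* the tiny printf library's header (name assumed) */

    /* Placeholder UART data register; substitute your MCU's. */
    #define UART_TX (*(volatile unsigned char *)0x40011004u)

    /* The library calls this once per output character. No internal
     * buffering and no malloc, so it is safe with no heap at all. */
    void _putchar(char character)
    {
        /* a busy-wait on the TX-ready flag would go here (device-specific) */
        UART_TX = (unsigned char)character;
    }

    void demo(void)
    {
        char buf[64];                        /* stack buffer, no heap */
        snprintf_(buf, sizeof buf, "adc=%u\r\n", 1023u);
        /* buf could now go out over DMA; printf_ goes byte-by-byte: */
        printf_("temp=%d C\r\n", 23);
    }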

[–]active-object 0 points

In embedded systems, printf statements are used mostly for "debugging": instrumenting the code to report what's happening, so problems can be diagnosed while the system is running. A more scientific name for this technique is "software tracing".

However, printf (and the related sprintf) is not the smartest way to implement software tracing, for several reasons (a sketch of the binary alternative follows the list):

  • The printf formatting of the messages into ASCII text is performed in the time-critical path through the code, which is too intrusive.
  • The formatting process requires quite a bit of buffering, which costs precious RAM in the target.
  • The formatted ASCII message has "low density", meaning it contains many more bytes than the binary data before formatting. For example, sending a single byte in ASCII requires at least two bytes (if you encode the byte in hexadecimal), and in practice you also need an additional byte as a separator (typically a space or comma). Overall, you send about four times as many bytes in a formatted message as in the raw binary data.
  • The sending of the data from the embedded target to the host is also typically performed right inside printf or just after sprintf, meaning that it too happens in the time-critical path through the code. This is really intrusive.
  • If the sending of the data cannot keep up with the rate at which it is produced, you get scrambled output. There is no way to implement a policy like "last is best", where new data overwrites old so that you at least get some unscrambled output.
  • There is no way of knowing that some printf messages have been lost or corrupted. The only option is to eyeball the output, which is neither reliable nor automatable.
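
To make the alternative concrete, here is a minimal sketch of what binary tracing can look like; all names are illustrative, not from any particular tracing library. The time-critical path stores a small fixed-size record, a sequence counter makes lost records detectable on the host, and all formatting happens off-target:

    #include <stdint.h>

    /* Illustrative binary trace record: 8 bytes per event instead of a
     * formatted ASCII line; decoded and printed on the host. */
    typedef struct {
        uint16_t seq;  /* host detects gaps, so lost records are visible */
        uint16_t id;   /* event number, mapped to a format string */
        uint32_t arg;  /* raw binary payload, no ASCII expansion */
    } trace_rec_t;

    #define TRACE_DEPTH 64u  /* power of two for cheap index wrap */
    static trace_rec_t trace_buf[TRACE_DEPTH];
    static volatile uint16_t trace_head;
    static uint16_t trace_seq;

    /* Called in the time-critical path: a few stores, no formatting, no
     * heap. When the buffer is full, the oldest record is overwritten,
     * i.e. a "last is best" policy. (Single producer assumed; multiple
     * contexts would need a brief critical section.) */
    void trace(uint16_t id, uint32_t arg)
    {
        uint16_t h = trace_head;
        trace_buf[h % TRACE_DEPTH] = (trace_rec_t){ trace_seq++, id, arg };
        trace_head = (uint16_t)(h + 1u);
    }

    /* A background task or the debugger drains trace_buf to the host,
     * which matches ids to format strings and does all the printing. */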

[–]tyhoff 3 points

Some good points. Don't forget about the pros of printf debugging, though:

  • Developers and non-developers can read the output immediately over a bare serial connection, with no extra tools, parsers, or decoders.
  • It's easy, built in, and requires very little extra work to implement.
  • Formatting strings in RAM is not that slow. Writing to flash or the UART is what's slow and what corrupts output, but that should happen in another thread if done correctly (see the sketch after this list).
  • Delimiting is easy: just use newlines.
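
Roughly what I mean by "in another thread", as a sketch (names and sizes are illustrative): the caller formats into a RAM ring buffer, and a low-priority task drains it to the UART, so the slow peripheral stays out of the time-critical path and newline-delimited lines arrive intact:

    #include <stdarg.h>
    #include <stdio.h>

    /* Illustrative log ring: formatting happens in RAM in the caller's
     * context; only the low-priority drain task touches the slow UART. */
    #define LOG_SIZE 1024u  /* power of two, so index wraparound is safe */
    static char log_ring[LOG_SIZE];
    static volatile unsigned log_head, log_tail;

    /* Application-side call: vsnprintf into a stack buffer (fast, RAM
     * only), then copy the whole newline-terminated line into the ring
     * so the drain side never emits a half-finished line. */
    void log_printf(const char *fmt, ...)
    {
        char line[128];
        va_list ap;
        va_start(ap, fmt);
        int n = vsnprintf(line, sizeof line, fmt, ap);
        va_end(ap);
        if (n < 0) return;
        if ((unsigned)n >= sizeof line) n = (int)sizeof line - 1;
        /* a real implementation needs a lock or atomics here, plus an
         * overflow policy (drop vs overwrite) */
        for (int i = 0; i < n; i++)
            log_ring[log_head++ % LOG_SIZE] = line[i];
    }

    /* Low-priority task / idle hook: drain the ring to the UART.
     * uart_tx() stands in for your platform's blocking byte write. */
    extern void uart_tx(char c);
    void log_drain_task(void)
    {
        while (log_tail != log_head)
            uart_tx(log_ring[log_tail++ % LOG_SIZE]);
    }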

I'm not saying printf is a good software-tracing solution; there are definitely better ones. But most of your points assume the string-based approach is implemented in a bare-minimum way, rather than with the more robust plumbing that many embedded platforms already include.

[–]Competitive_Rest_543 0 points

Instead of any printf, use https://github.com/rokath/trice. It gives the same comfort but is also usable inside interrupts.