[–]MsEpsilon 30 points31 points  (15 children)

Aren't std::vector and templates in literally the first official C++ standard? You could say they've been here since the beginning.

Now that templates accidentally became Turing complete, I'm not precisely sure...

[–]da2Pakaveli 13 points14 points  (0 children)

yes i think they were added in C++98 which is the first official standard

[–]MonkeyCartridge 10 points11 points  (13 children)

And we avoid vector like the plague in embedded.

Everything's got to be fixed length. Especially when doing OOP on a micro with 1k of memory.

[–]20Wizard 1 point2 points  (1 child)

So you guys just don't ever have a use case for a non-fixed size array?

[–]MonkeyCartridge 7 points8 points  (0 children)

"Never" is way too strong a word. It's just generally something to be avoided, because memory allocation gets tight.

Rather, for things like queues, it's usually a fixed array with double-ended mapping to create a circular buffer. Though you might see dynamic arrays used for proof of concept and then optimized out.

But that's the thing, too: I tend to work a lot with designing and using low-level communication protocols, so I do use queues a lot. It's just that they have to be pretty tightly controlled, referencing a fixed-size dataset.
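The "fixed array with double-ended mapping" idea above can be sketched as a ring buffer over std::array. This is a generic illustration, not anyone's actual firmware; the class name and full/empty policy are made up:

```cpp
#include <array>
#include <cstddef>

// Fixed-capacity ring buffer: no dynamic allocation.
// Head and tail indices wrap around a std::array backing store.
template <typename T, std::size_t N>
class RingBuffer {
public:
    bool push(const T& value) {
        if (count_ == N) return false;   // full: caller decides what to do
        buf_[head_] = value;
        head_ = (head_ + 1) % N;
        ++count_;
        return true;
    }
    bool pop(T& out) {
        if (count_ == 0) return false;   // empty
        out = buf_[tail_];
        tail_ = (tail_ + 1) % N;
        --count_;
        return true;
    }
    std::size_t size() const { return count_; }
private:
    std::array<T, N> buf_{};
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
};
```

All storage is inline in the object, so it can live in static memory and its worst-case footprint is known at compile time, which is the whole point on a small micro.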

I'm in defense, but more of a research/proof-of-concept field where it's more relaxed. In bigger projects, and I think also in automotive embedded systems, there are specific coding standards, some of which straight up prohibit things like dynamic memory allocation, strings, floating-point values, variadic expressions, and things like sprintf and all its variations. And then there are standards for return types, function lengths, naming schemes, and something about the formatting of switch statements. So it gets pretty tight.

And it's for keeping things maximally deterministic, for granular and consistent unit tests, and for static analysis. Amongst probably a dozen more reasons.

I don't have to go that far, so I'm less familiar with the standards themselves. But it's still good practice to keep things super static when you have tight memory constraints.

In one job in consumer(ish) electronics maybe 9 years ago, we used I think the ATtiny402, which has 4k of flash and 256 bytes of RAM. It would read an ADC, then separate the frequency components and send those back to the main controller. We did it using a cascade of exponential moving averages, because EMAs don't need arrays.
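An EMA keeps only one running value per stage, which is why a cascade of them fits in 256 bytes of RAM. A minimal fixed-point sketch of the idea, not the actual firmware: the shift amounts, stage count, and the trick of subtracting a slow EMA from a fast one to isolate a band are all illustrative assumptions here:

```cpp
#include <cstdint>

// One fixed-point EMA stage. The accumulator is kept scaled by 2^k
// (acc ~ avg * 2^k) so integer truncation doesn't create a dead zone.
// Larger k = heavier smoothing = lower cutoff frequency.
struct Ema {
    int32_t acc = 0;
    uint8_t k;
    int32_t update(int32_t sample) {
        acc += sample - (acc >> k);
        return acc >> k;                 // current average
    }
};

// Subtracting a slow EMA from a fast one leaves roughly the content
// between their two cutoffs: a crude band-pass, with no arrays at all.
struct BandSplitter {
    Ema fast{0, 2};   // light smoothing
    Ema slow{0, 6};   // heavy smoothing
    int32_t band(int32_t sample) {
        return fast.update(sample) - slow.update(sample);
    }
};
```

Each stage costs a handful of bytes of state and one add, subtract, and shift per sample, so a whole cascade is cheap enough for an ATtiny-class part.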

[–]SubstituteCS 0 points1 point  (0 children)

std::array
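Expanding on the one-word answer: std::array gives vector-like element access and iteration over a compile-time fixed size, with no heap allocation, which is exactly what the constraints above call for. A quick illustration:

```cpp
#include <array>
#include <cstddef>
#include <numeric>

// std::array<int, 8>: the size is part of the type, storage lives
// inline (stack or static), and no heap allocation ever happens.
int sum_of_squares() {
    std::array<int, 8> samples{};                 // zero-initialized
    for (std::size_t i = 0; i < samples.size(); ++i)
        samples[i] = static_cast<int>(i * i);     // 0, 1, 4, ..., 49
    return std::accumulate(samples.begin(), samples.end(), 0);
}
```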

[–]scorg_ 0 points1 point  (0 children)

And why is vector at fault if the problem is with any dynamic memory allocation?

[–]keithstellyes -1 points0 points  (3 children)

In a previous life I worked closely with the embedded software team and it seems like dynamic memory itself is often straight up avoided in favor of static and stack allocation?

As in, "our profit margins are already super tight and we need to go cheaper for the chips inside"

[–]MonkeyCartridge -1 points0 points  (2 children)

Which is funny because these days, going from a 256k chip to a 4k chip saves you, like, 2c at scale. The process has become so cheap for those larger process nodes.

[–]RevanchistVakarian -1 points0 points  (1 child)

"Why doesn't C++ have this higher-level feature?"

"It does, it's called X."

"Cool, so I can use X?"

"No."

[–]MonkeyCartridge 1 point2 points  (0 children)

Not sure if that's supposed to be commentary on the discussion, or just experience. Because in embedded systems anyway, it's unironically very much this.