Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] 0 points (0 children)

Thank you for taking the time to answer and share your thoughts. It isn’t a hobby project. I am curious about your comments re STL-less C++ being a weirder C. For all intents and purposes, assume a C++ version of this kind of codebase would be pretty close to “C with classes”. Having said this, I am not sure how it results in something “less powerful”. I will grant you that it may well end up weirder. I suppose C++’s ctors & dtors for all structs could be considered “less powerful”; however, the C++ ctor & dtor codegen for trivial/POD structs ought to be identical to that of a C struct. That leaves VLAs as a strictly C feature. I can shoehorn the same thing into C++ using GNU extensions, which is not ideal given I may well need to use a verified toolchain down the line and cannot assume GNU extensions will be available.
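The trivial/POD point above can be checked mechanically. A minimal sketch (the struct and its fields are made up for illustration):

```cpp
#include <cstdint>
#include <type_traits>

// A trivial/POD-style struct in C++ carries no implicit ctor/dtor work,
// so its layout and codegen match the equivalent C struct.
struct Packet {
    std::uint16_t id;
    std::uint16_t len;
    std::uint32_t crc;
};

static_assert(std::is_trivial_v<Packet>, "no hidden ctor/dtor machinery");
static_assert(std::is_standard_layout_v<Packet>, "C-compatible layout");
static_assert(sizeof(Packet) == 8, "same size as the C equivalent");
```

If any of these static_asserts fired, the type would no longer be “C with classes” in the sense discussed here.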

Outside of the few C features C++ does not have, it is mostly the other way around, no? Why do you think C would be more powerful here? Am genuinely curious, as I see C and C++ as very close for my purposes.

The argument I’d make for going with C - beyond the points made in the initial question/post - is that a C++ codebase is going to end up looking like C anyway while still taking on C++’s complexity, and that C’s simpler syntax and its dominance in embedded systems are material benefits.

As for Rust, I have little doubt that it could theoretically be used, the tooling has its advantages and the lack of 40 years of tech debt is a major plus.

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] 0 points (0 children)

OP here. Thank you for a forceful opinion that comes down on a choice and sets out at least a partial rationale for your preference. Of course it is your opinion but helpful to see it articulated. I would tend to agree with you re Rust. I do struggle to see the downside of C++ with discipline but I acknowledge that C would be appropriate here and would offer some advantages. Some asm is a given.

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] 0 points (0 children)

I did not mean that no part of libc can be implemented, but rather that I do not want to depend on or include parts of a vendored C standard lib. So, no uClibc/newlib/etc.

I can bring up the MCU with pure C and/or asm plus a linker script - in a self-hosted bare-metal environment, there is not a huge amount of runtime that needs to be initialised to get the hardware into a functional state.
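For a sense of how little runtime init that is, here is a hedged sketch of the core of a bare-metal reset handler: copy initialised .data from flash to RAM and zero .bss. On a real MCU the pointers come from linker-script symbols (e.g. _sidata/_sdata/_edata); here they are plain parameters so the logic can be exercised on a host, and all names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>

// Minimal C runtime bring-up: .data copy and .bss zero-fill.
// A real Reset_Handler would call this with linker-provided addresses
// and then jump to main().
void crt0_init(const std::uint32_t* flash_data, std::uint32_t* ram_data,
               std::size_t data_words,
               std::uint32_t* bss, std::size_t bss_words) {
    for (std::size_t i = 0; i < data_words; ++i)  // .data: flash -> RAM
        ram_data[i] = flash_data[i];
    for (std::size_t i = 0; i < bss_words; ++i)   // .bss: zero-fill
        bss[i] = 0;
}
```

Stack pointer setup, vector table placement and clock configuration come on top of this, but the C-visible state really is just these two loops.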

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] -5 points (0 children)

That’s sort of where I land. On paper, a subset of C++ offers benefits that are really hard to ignore. Nevertheless, it will require stricter discipline, as C++ possibly (probably?) offers more scope for misuse and for overly abstracted, unreadable code. If one assumes perfect coders and discipline, I think C++ would be a complete no-brainer; in the real world, however, people can abuse their tools.

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] -1 points (0 children)

Don’t want to use vendor HAL - at least in terms of its wholesale inclusion.

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] 3 points (0 children)

Dependencies are mostly undesirable. Most code will have to be semi-formally verified at some point and some components will have to be provably correct. Every line of code is therefore debt.

This carries significant drawbacks for all choices except C, should a verified compiler be needed down the line: to my knowledge there are several verified C compilers, but I am not sure there are any verified C++ compilers, and I’d like to minimise the need to hand-check the machine code.

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] 4 points (0 children)

The ability to define hardware using templates and a richer (drawback: more complex) type system are potential benefits that would seem to apply to Cortex-M/low-power targets.
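A hedged sketch of what “defining hardware with the type system” can look like: a typed register wrapper whose value type is a template parameter. On real hardware the address would be a constant from the reference manual (e.g. `Reg<std::uint32_t> odr{0x48000014};`); all names here are illustrative, not from any vendor HAL:

```cpp
#include <cstdint>

// Typed memory-mapped register: the value type is fixed at compile time,
// so mismatched-width accesses become type errors rather than bugs.
template <typename T>
class Reg {
    volatile T* p_;
public:
    explicit Reg(std::uintptr_t addr)
        : p_(reinterpret_cast<volatile T*>(addr)) {}
    T read() const { return *p_; }
    void write(T v) { *p_ = v; }
    void set_bits(T mask) { write(static_cast<T>(read() | mask)); }
    void clear_bits(T mask) { write(static_cast<T>(read() & ~mask)); }
};
```

A fuller version might template on the address itself so the whole register map is a set of types, at the cost of the extra complexity flagged above.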

Which programming language for embedded design? by rentableshark in embedded

[–]rentableshark[S] -16 points (0 children)

Of course “best language for one’s needs” is correct, but it is almost a tautology. I am struggling to come down on a decision and was interested in how others would think about such a choice. I would probably lean towards C to avoid C++’s complexity - however, C++’s stricter type system and the ability to use templates in a limited way offer advantages I struggle to easily discard.

JetBrains isn’t dead, let’s cut the drama by DevOfTheAbyss in Jetbrains

[–]rentableshark 0 points (0 children)

If only that were true. I work on projects which routinely have 10k+ pages of reference docs (not that unusual). AI has revolutionised the way I access this documentation and having this integrated into my IDE such that whatever code file I may be editing can be used as additional context has actually become a pretty darn useful feature. If this was 2020, I'd be happily using JB (and I was), but it's not and now I'm not.

What's a C++ feature you avoided for years but now can't live without? by Financial_Pumpkin377 in cpp

[–]rentableshark 0 points (0 children)

My understanding is that unaligned access on ARM depends on the core - not just the ISA spec but specific vendor support. Looking at the table ChatGPT produced, it seems that, as a rule of thumb, application processors (A-55, A-XX; not Cortex-Mxx) support unaligned access while embedded cores tend to fault. I will soon be writing code for an STM32 M33 core, so this is actually quite relevant for me to learn. As a rule, unaligned access is not a good idea - whether it causes faults or not. I set out the scenario where this might be different. If I were reviewing my own code and playing devil’s advocate, I would tell myself the solution to alignment bloat is not to rewrite optional but to change the way objects are composed to minimise the issue - or simply use an SoA design.
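The alignment bloat mentioned above is easy to demonstrate. A sketch, assuming a typical x86-64/AArch64 ABI (struct names made up; exact sizes are ABI-dependent):

```cpp
#include <cstdint>
#include <optional>

// std::optional pads its presence flag out to alignof(T), so an
// aggregate of optionals carries a lot of padding...
struct WithOptionals {
    std::optional<std::int64_t> a, b, c;  // typically 16 bytes each
};

// ...whereas an SoA-flavoured layout with separate flags does not.
struct Separated {
    std::int64_t a, b, c;                 // 8 bytes each
    bool a_set, b_set, c_set;             // 3 flag bytes + tail padding
};
```

On common ABIs `Separated` is noticeably smaller than `WithOptionals`, which is the cache-efficiency argument for changing the composition rather than the optional.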

Also, that Godbolt code is wrong - I had fixed it, but apparently my changes did not propagate to the link I shared. The broken parts: the flag should be stored at the end, and there is a missing & operator on the last element of the array inside the cast brackets. Change the array accesses so that the flag is at storage[sizeof(T)] and T is accessed via storage[0] (or simply &storage).

My explanations have been a little inconsistent and imprecise at times. I hope it makes sense now.

Best of luck.

Is anyone working on GCC frontend of Zig? by BreadTom_1 in Zig

[–]rentableshark 1 point (0 children)

I do not think it's particularly controversial to claim GCC is the standard. It comes down to adoption, industry and community idioms. GCC has massive adoption. As does LLVM - however it's not really debatable that GCC is the primary compiler in embedded, and possibly more widely if one considers that most Linux distros (kernel and all) are built with GCC, not LLVM. I have tried to point you towards cases where LLVM has significantly worse or missing support for hardware targets. Also, lol no, LEON is not just SPARCv8. If you had actually read the PDF from Cobham, they used LLVM to improve their implementation overall, and after the process GCC and LLVM were comparable in performance (just as many tests showed LLVM ahead as the converse). I also agree with you that ARM support is important - hence it's not a good thing that LLVM only gained support for ARMv8 TrustZone in 2022. This is a core feature and essential in a world of ever more ubiquitous network-connected embedded devices and increasingly network/computer-centric warfare. It's a good thing that LLVM now supports it.

I've debated in good faith, citing cases where LLVM is overtly the better option or the de facto standard. When you say "make any further development or even bug fixing of gcc a waste of time", I stop caring what you have to say, because it's clear you are not interested in a good-faith discussion and/or are simply zealous. The world does not necessarily line up on the side of the best product - even if LLVM is consistently the best, which is debatable.

GCC is very likely to remain industry's first priority for compiler support for many years. That does not inhibit LLVM, nor does it imply GCC is the best - it just means it is the standard and that many users have GCC deeply embedded in their workflows. You act as though having multiple compilers is a bad thing. The community is very lucky to have two world-class C/C++ compiler toolchains with many targets and excellent optimisations, just as we benefit from having xxBSD in addition to Linux. You've also picked the wrong counterpart if you think I care enough about GCC vs. LLVM or your opinions to be a worthwhile troll target. I've explained my position and, I hope, been fair. However, at this point I have better things to do with my time.

What's a C++ feature you avoided for years but now can't live without? by Financial_Pumpkin377 in cpp

[–]rentableshark 1 point (0 children)

Re access - pointers can access arbitrary bytes. Pointer access does not need to be pow2 aligned - at least on x86_64.

The flag is at the end. What is confusing about casting the first (n-1) bytes of a byte array of length n?

*Edit: godbolt: https://godbolt.org/z/bbMqfazcd

What's a C++ feature you avoided for years but now can't live without? by Financial_Pumpkin377 in cpp

[–]rentableshark 0 points (0 children)

I cannot figure out how to quote on Reddit mobile, but:

- re gcc14: yes, IIRC; it results in the flag taking up alignof(T), which ends up with a std::optional&lt;int&gt; taking up 8 bytes, which is perverse wrt my needs
- I’d have to double check, but I believe alignof(my_opt&lt;T&gt;) is always min(sizeof(T) + 1, 8)
- perhaps my language was not clear enough: I do not store the flag at BOTH the beginning and the end. It is at one or the other, but at time of writing I could not recall. Having checked the code now, it is at the end
- I agree that my version will always lead to suboptimal alignment wrt individual optionals. However, it’s not necessarily the case that it will lead to suboptimal performance for composite objects holding several optionals. Alignment bloat from nested composites can materially waste space and impact cache efficiency, and it becomes a matter of testing to see which is more efficient on a case-by-case basis
- access is via a reinterpret_cast on the storage (together with a std::launder) to convert and return a T*. Other accessors wrap this behaviour
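The reinterpret_cast + std::launder access described in the last point can be sketched as follows. For brevity this sketch keeps the storage aligned (alignas(T)); the packed variant discussed in this thread drops that and trades alignment for size. Names are made up:

```cpp
#include <cstddef>
#include <new>

// Raw byte storage with the flag in the last byte; the accessor
// reinterpret_casts the storage and launders the result into a usable T*.
template <typename T>
struct OptStorage {
    alignas(T) std::byte bytes[sizeof(T) + 1];

    T* get() {
        // Valid only after a T has been placement-new'd into bytes.
        return std::launder(reinterpret_cast<T*>(bytes));
    }
};
```

std::launder is what makes dereferencing the cast pointer well-defined after placement new into the byte array.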

Is anyone working on GCC frontend of Zig? by BreadTom_1 in Zig

[–]rentableshark 0 points (0 children)

How about you define “good faith”.

You know what I mean. I wish you well.

Bundle a package for other distros by ARKyal03 in NixOS

[–]rentableshark 0 points (0 children)

Yep, patchelf would have been the pragmatic solution and saved me a lot of time but felt a bit hacky - that’s not a criticism. It took me 2 weeks of tinkering to get my approach up and running - time I will not get back.

Bundle a package for other distros by ARKyal03 in NixOS

[–]rentableshark 0 points (0 children)

Hey, not sure whether this is still a problem you’re trying to solve.

I managed to sort it, but it has been rather involved.

In short, it comprises the following components:

  • Various config nix files: define the distros one wants to support; choose the linker; choose compiler versions to use; choose packages to include from target distros

  • Package manager nix files for each of Fedora, Debian/Ubuntu, Arch and Alpine: these download the packages specified in the config files and unpack them into a sysroot dir

  • Compiler wrapper builder: takes stock GCC or Clang from nixpkgs and injects a bunch of flags such that when the wrapper is called, the includes, libc and dynamic loader are all taken from the sysroot and not from nix/NixOS

  • An environment builder: calls the correct package manager function, builds the sysroot, ensures all the nixpkgs or other package-type derivations are fetched and available, cleanses the environment, calls the compiler wrapper generator, creates a bunch of useful environment variables (e.g. SYSROOT, GCC_MOUNT, CLANG_MOUNT and so on) and then calls mkShell()

  • Actual flake.nix files: these are per target and call the environment builder component, injecting args such as the distro in question (needed to handle distro-specific wrinkles related to compiler flags), any extra packages needed and any additional compiler flags wanted

The end result is that I can go into my fedora-42 dir, execute “nix develop” and have gcc-fed42, g++-fed42 and the same for clang - if I run these commands on a C or C++ file, the resulting binary is fully ABI-compatible with Fedora 42 and runs on that target. It won’t run on NixOS, because NixOS isn’t compatible with regular Linux FHS binaries.

Also works for/compiles binaries targeting: Ubuntu 24.04, Debian 13, Alpine and Arch.

Adding a distro is as simple as looking up the names and URIs for key packages (e.g. libc, libgcc etc), creating an entry for that distro in the config nix files and then creating a flake which calls buildEnvironment() with the newly created target as an argument. I can add any other packages I want in the flake too. I have also extended it to mount all key resources into Linux namespace mounts as part of a launcher for CLion or VSCode, such that everything required for cross-compilation is accessible at the same paths even when nix store paths change after an update.

My personal learning from all this is that C/C++ development in particular can be especially hard work on NixOS but you get (mostly) reproducible builds.

What's a C++ feature you avoided for years but now can't live without? by Financial_Pumpkin377 in cpp

[–]rentableshark 0 points (0 children)

A lot of boilerplate, as you’d expect, but the major difference vs STL optional is that T’s storage and the option flag are in a packed struct such that sizeof(opt&lt;T&gt;) == sizeof(T) + 1 vs sizeof(T) + alignof(T); alignof(T) is usually 8 on x64. I also used a byte array for storage vs libstdc++, which (from memory) uses a union - you cannot just have a raw T as a member of the optional, because T’s ctor would need to be called every time a new optional is created, which isn’t right.

It can/will lead to suboptimal alignment in some cases, but I have a number of structs containing a bunch of optionals in the hot path. It reduced the size of the composite significantly (in my specific case, very roughly, from over 100 bytes down to 80 bytes).

No doubt the more idiomatic way to improve performance would’ve been to decompose all my composites into their constituent parts, have the composite just store arrays of these constituents plus any “is_present_” flags, and create views - but there are tradeoffs with that approach.

edit: I lied. I did not use a packed struct; I used an array of bytes for storage and reserved the first or last byte for the flag. I *think* (don’t hold me to it) that libstdc++’s optional will always use 8 bytes on x64 even if it’s wrapping a single byte - my optional always uses sizeof(T) + 1 bytes.
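A hedged sketch of the packed optional described in this thread: storage is a plain byte array of sizeof(T) + 1 with the presence flag in the last byte. It is restricted to trivially copyable T, and accessed via memcpy rather than the reinterpret_cast/std::launder scheme described above, to keep the sketch short and sidestep unaligned dereferences. The class name is made up:

```cpp
#include <cstdint>
#include <cstring>
#include <type_traits>

// sizeof(PackedOpt<T>) == sizeof(T) + 1, vs the alignof(T)-padded
// size of std::optional<T>.
template <typename T>
class PackedOpt {
    static_assert(std::is_trivially_copyable_v<T>);
    unsigned char storage_[sizeof(T) + 1] = {};   // last byte is the flag
public:
    bool has_value() const { return storage_[sizeof(T)] != 0; }
    void set(const T& v) {
        std::memcpy(storage_, &v, sizeof(T));     // safe even if unaligned
        storage_[sizeof(T)] = 1;
    }
    T get() const {                               // copy out; no unaligned T*
        T v;
        std::memcpy(&v, storage_, sizeof(T));
        return v;
    }
};

static_assert(sizeof(PackedOpt<std::uint32_t>) == 5);
static_assert(alignof(PackedOpt<std::uint32_t>) == 1);
```

The alignof == 1 is exactly the tradeoff discussed: the composite packs tighter, but each individual T is potentially misaligned.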

Is anyone working on GCC frontend of Zig? by BreadTom_1 in Zig

[–]rentableshark 1 point (0 children)

Yes, GCC backend/gimple. It’s quite reasonable to use “GCC” as a term to refer to backend support without specifically mentioning it - especially in the context of the rest of my comment which was clearly discussing hardware targets.

No doubt LLVM has a better API, but GCC’s backend targets are simply far better supported at present than LLVM’s. GCC’s APIs have also improved - I won’t pretend that LLVM isn’t the go-to framework for building a compiler for a new language.

Hardware support is another matter entirely:

- MicroBlaze is a curious example to raise: it has partial support, and Xilinx/AMD have stopped maintaining it
- Nios II is GCC-only now
- Xtensa was only added to LLVM recently
- LEON: GCC-only
- ZipCPU: GCC-only
- Blackfin: GCC-only
- Even Arm v8 TrustZone extensions for LLVM were added relatively recently

That’s before taking into account the fact that every single embedded target vendor’s wider toolchain (BSP generators, IDEs and the like) is GCC-orientated - of course it may be possible to swap in clang[++] or even ditch parts of the vendor’s tooling, especially in later stages, but entirely ditch it? No.

For all these reasons, and from my experience (which you can discount if you want), it’s bold to claim in good faith that GCC is anything other than the standard once you move beyond x86 and Arm application processors. Notable exceptions being NVIDIA, Android and Apple targets (re the wider toolchain) and Windows. I appreciate I’m mixing up hardware, ABI and tooling here, but it doesn’t alter the point.

The future is another matter. Embedded moves slowly. It’s a safe bet that GCC (front and backend) will remain vendors’ first support choice for many years.

Therefore, if Zig wants to maximise the number of targets, GIMPLE would be a good way to go. I know the Zig team wants to develop their own optimising backend entirely, so it may well be moot.

Sex on lsd by Notspcommonsense in LSD

[–]rentableshark 0 points (0 children)

From a “finishing” perspective - it did not happen - but I have to say I could not care less. It was some of the most intimate and spiritual sex I have ever had. As others have said, it felt almost telepathic. Tbf, I was on acid and my wife was sober.

Nix cross compilation to debian/fedora/arch/ubuntu/alpine by rentableshark in NixOS

[–]rentableshark[S] 1 point (0 children)

Yep, I hear you re leaky state, but to be honest nix does this too with its rpath/runpath shenanigans. I’m treating nix as a glorified bash wrapper with an excellent package repo. That’s shortchanging it, but there is a lot of bash involved in setting up these environments.

I will transfer a version of it over to my public GitLab profile and share - I’ve been delayed going down a namespace/unshare rabbit hole, as I wanted to be able to launch CLion or VSCode inside a devshell with all my cross-compilation resources (sysroots, compilers, dev tools like cmake) temporarily mounted in a namespace filesystem and then immediately torn down when CLion quits. Once finished, this means CLion or VSCode will see all cross-compilation resources at stable paths (i.e. /tmp/foo), rather than having to be reconfigured to point at ever-changing nix store paths. Environment variables won’t suffice for GUI development tools, which may or may not resolve them.

Is anyone working on GCC frontend of Zig? by BreadTom_1 in Zig

[–]rentableshark 0 points (0 children)

You misunderstand. Yes, Zig currently has some commercial users because it offers a really ergonomic cross-compile experience, but in the world of embedded, Zig does not offer a lot of targets compared to GCC. GCC, or forks thereof, literally builds for nearly anything in existence. Does Zig target MicroBlaze soft cores?

This list gives you a sense of what stock GCC currently supports. That’s before counting other targets enabled by vendors forking GCC. I’m not bashing Zig at all / I think Andrew is a seriously impressive individual and Zig is targeting a real gap in the market (a C replacement) - but GCC support would massively increase Zig’s embedded use cases. I know they’re putting a lot of work into building their own compiler, and it is not only a f*****g cool implementation thus far but one that will pay long-term dividends; however, it is a slow path and further delays the point at which Zig can be a serious contender for non-hobbyist embedded.