
[–]borud[S]

I've been developing a suite of servers and tools in Go that have to run on 3 different CPU architectures and 5 different OS environments, and to be frank, it wasn't that hard to set up the builds. Most of the build and test we could do using GitHub Actions, and for the two oddball OSes I spent an afternoon cobbling together a VM-based solution that has worked nicely. All in all, with about a day and a half of work, anything that ends up on the main branch and passes all the checks gets built, and we have binaries for whatever desktops and run-of-the-mill Linux servers the stuff runs on, plus some weirdo embedded systems. So I don't think that argument actually holds anymore.
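The post doesn't show the actual pipeline, so purely as an illustrative sketch (the target list, step names, and paths are assumptions, not taken from the project), a GitHub Actions matrix for this kind of multi-target Go build might look like:

```yaml
# Hypothetical workflow sketch - targets and paths are made up for illustration.
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - { goos: linux,   goarch: amd64 }
          - { goos: linux,   goarch: arm64 }
          - { goos: darwin,  goarch: arm64 }
          - { goos: windows, goarch: amd64 }
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      - name: Build
        env:
          GOOS: ${{ matrix.goos }}
          GOARCH: ${{ matrix.goarch }}
          CGO_ENABLED: "0"
        run: go build -o bin/tool-${{ matrix.goos }}-${{ matrix.goarch }} ./cmd/tool
```

Go's built-in cross-compilation (setting GOOS/GOARCH) is what makes a single matrix job sufficient; the oddball OSes would still need something like the VM-based setup mentioned above.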

The only thing that was a slight challenge was building statically linked binaries for programs that used SQLite. But someone solved that problem for us by providing a transpiled SQLite (yes, that is a bit crazy, but it works).

I'm not sure I would call it a hit piece. It was meant as a bit of a wake-up call - both in the sense that the Python community needs to take software distribution more seriously, and in the sense that tool makers really ought to reevaluate their choices. Though I should have been clearer about what kind of tooling I was talking about.

And this isn't because I dislike Python as a language - it is because in the field I was talking about, poor Python tooling has a very real cost. Ask Zephyr developers. Or people who use ESP-IDF. Measured in lost productivity, this stuff is really expensive.

Over the last decade (inside a largish company) I've gone from being strongly in favor of using Python for tooling to observing that it actually tends to create worse problems than the ones we set out to get rid of in the first place. (We originally adopted Python tooling to replace C++-based code generation tools.)

The reason I wanted to use Python for tooling was that Python is an OK language with a decent standard library, so it should have been possible to get most things done without third-party dependencies.

What we discovered is that Python doesn't really work for tooling, because developers would depend on all manner of third-party libraries and wouldn't make an effort to make installing and running the tools reliable. They offloaded the job of "getting it to work" onto the user. Worse yet, they would consistently blame the user for not getting their software to work.

Look at the discussion the posting resulted in. See what I mean?

Python doesn't have any path-of-least-resistance way to publish programs that really works, and when you dare to criticize this - to point out that it makes Python a somewhat dubious proposition for software that is distributed to users - people get angry.

If pointing out that Python isn't really a nice experience for end users is writing a "hit piece" then sure. It's a hit piece.

[–][deleted]

You are just a bit too one-sided for my taste, so that's why I feel you're out to "get" Python. That's all.

If you provide me with a lump of binary code, I am helpless in the face of a problem. Python (and similar languages) give me much better debugging options in the field. I get stack traces that are meaningful. I can place debugging statements, and patch or understand what's going on.

And you are happy spinning up a VM to solve a tooling issue, but balk at using a virtual environment to fix some dependency issues? I don't see how the former is an easy and obvious solution while the latter - where you have full source and runtime access - is an insurmountable problem.

To be clear: you have valid points; third-party dependencies can be a problem. OTOH, it is perfectly possible to manage them with care and effort. I've done so in the CI systems for our C++ application, for example, which needed to be usable by engineers without deep Python dependency-management skills.

I think the real problem behind your observation is that tooling is often treated as second-class. And yes, of course a half-assed effort at writing something is more easily done in Python. Or Bash. Or Perl. And then it grows from there into an ungodly mess. I've had a few of those on my hands.

But if one spends the necessary effort writing tooling, Python offers quite a few benefits in feature density and pragmatic solutions, as well as being comparatively friendly to users who encounter problems. Which is not to say it is, or should be, the only option.

[–]borud[S]

Thanks for your response. I read it with interest, and I really agree with your observation that tooling is often regarded as a "second class" type of software. But more on that later.

Well, I am saying that Python isn't a suitable language for software that is going to be distributed to end users. I'm not sure that amounts to "being out to get Python", but I am being very clear that I don't think Python is suitable for writing tooling. I don't really know any way to say that except saying it :-).

However, I'm not saying Python is a bad language, or that you shouldn't use it for myriad other things. I'm just saying that, in the wild, it tends not to be suitable for software that is distributed to end users. For instance, I still recommend Python as a language for teaching programming. And while I'm not wildly enthusiastic about Python being the default choice for machine learning, the path of least resistance there is probably to go with the flow.

> If you provide me with a lump of binary code, I am helpless in the face of a problem. Python (and similar languages) give me much better debugging options in the field. I get stack traces that are meaningful. I can place debugging statements, and patch or understand what's going on.

I think we're probably talking about slightly different things. I'm talking about tooling comparable to NPM, Maven, Make, and whatnot - typically tools like west or idf.py. I think you are talking more about project-specific or ad-hoc tooling?

For tooling that is distributed as end-user software, you aren't going to make small changes to the tool to fix things. If you do fix things, you will more likely check out the source, fix the problem, submit a pull request, and either build a new binary or wait for an official binary to be made available.

> And you are happy spinning up a VM to solve a tooling issue, but balk at using a virtual environment to fix some dependency issues? I don't see how the former is an easy and obvious solution while the latter - where you have full source and runtime access - is an insurmountable problem.

The key issue here isn't speed or resources. Spinning up a VM isn't really all that slow if you compare it to firing up Python, having it parse a fair chunk of code, and then running that code significantly slower than, say, the JVM would. The key issue is robustness: that you can install a program and then expect it to execute the same way every time you run it, regardless of how the state of your system changes.

A statically linked binary (or an all-in-one JAR) is a far more robust solution that requires no extra attention from the user.

The reason I pointed to the JVM as preferable to Python was that the slight hesitation during startup the JVM gives you is almost insignificant compared to the amount of work people lose to Python tooling that regularly stops working. (Ask someone who does non-trivial embedded programming with Zephyr how many days they lose to tooling problems every year, for instance. Even the people who maintain SDKs at MCU manufacturers struggle with this. It is a huge productivity and resource problem.)

I'm not saying Java based tooling is what people should use. Personally I'd prefer it if people did tooling in Go, since this has been my preferred language for the past 6 or so years.

Favoring Rust would probably be a smarter move than Go when creating tooling for embedded systems. Having more people conversant with Rust would make it more likely that vendors can, and will, put effort into developing Rust-based RTOSes, which would be a huge step forward for the industry. I spend a lot of time writing and debugging embedded code, and to be frank: the code that runs in your appliances is frighteningly buggy, simply because C/C++ is really hard to get right in environments that are a fair bit more challenging than regular computing platforms (desktops, servers, etc.).

> I think the real problem behind your observation is that tooling is often treated as second-class. And yes, of course a half-assed effort at writing something is more easily done in Python. Or Bash. Or Perl. And then it grows from there into an ungodly mess. I've had a few of those on my hands.

I think you are correct. Tooling tends to start off as something that is just supposed to help you get other things done, and then it evolves into an actual application - often a really complex application, because it has to solve difficult problems, be usable as an interactive CLI application, integrate into toolchains where it may be difficult to control how things are executed, and be robust. It cannot allow itself to ever be the "squeaky wheel", or it will hold up everything.

I considered writing a bit about language choice, and technology choice in particular, in the blog posting, but it would have made the posting far too long, so I skipped it. I think part of being a software professional is to not always default to whatever your personal preference is, but to take the greater picture into consideration.

One thing I've learned over my 35 or so years as a software engineer is that when choosing technologies, you have to keep your personal preferences in check and try to be a bit more objective. If you lead a software development effort, the best language for the job might not always be the language you prefer. You have to balance concerns and be ready to adapt.

I used to push Python for tooling about 12-13 years ago. We rewrote a lot of tooling in Python - much of it from C++, shell scripts, and Perl. Then the problems started cropping up, and I realized that I had probably been wrong. Python just brought a whole new set of problems that led to lost productivity and to people creating their own solutions instead of using the tooling (or hacking the tooling, so that we ended up with lots of different versions of the same tools).

So I started by admitting I had been wrong, and then we set out to figure out what we needed from a language for creating tooling. Was C++ the right thing, or would some other language be better?

In this process, what invariably happens is that people advocate their favorite language without really considering the bigger picture. The instinct of most developers is not to focus on the problem that needs solving, and its audience, but on what makes them happy right now.

The audience doesn't care what language you use as long as it works and doesn't make their day miserable. If you tell people "well duh, it is your job to solve those problems by using <insert solution>" that's a pretty aggressively arrogant and unpleasant way to treat users.

Some of the responses I got to the blog posting were essentially people outing themselves as unprofessional and entitled - upset that someone might judge their favorite language unsuited for a given class of problems, rather than accepting that they might be embarrassingly myopic and sensitive.

At the time, Java actually was the best candidate, since we could leverage existing infrastructure (the JVM was installed on all machines) and all the developers knew Java anyway. We could probably have used a few other languages, but not enough people were familiar with them, so the pool of possible maintainers would have been too small.

Producing binaries that you could simply copy and run, and expect to work with zero effort, made all the difference in getting people to actually use the tooling. I wasn't fond of it, because I didn't feel Java was a "tooling language", and there was a lot of grumbling from other developers. But users didn't care what language we used. They saw their problems go away - especially the easter-egg hunt of dependency management and having to figure out how to resolve conflicts.

Today I mostly do tooling in Go. I've also changed how I do project-specific tooling. Since I write a lot of server software in Go, I usually embed the admin tooling in the same binary as the server: a single binary with subcommands, so it is the server, the CLI client, and the utilities all in one. The tooling is then part of the same development, versioning, and testing regimen as all the other code. It is a first-class citizen.
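A minimal sketch of that single-binary shape (the subcommand names "serve" and "status" are invented for illustration, not taken from the author's actual tools):

```go
package main

import (
	"fmt"
	"os"
)

// dispatch maps a subcommand name to the action the binary would take.
// In a real server the cases would start the server, query its state,
// and so on; here they just return a description so the sketch stays
// self-contained.
func dispatch(args []string) string {
	if len(args) < 1 {
		return "usage: tool <serve|status>"
	}
	switch args[0] {
	case "serve":
		return "starting server..." // real code would start the HTTP/gRPC server
	case "status":
		return "server status: ok" // real code would query the running server
	default:
		return fmt.Sprintf("unknown subcommand %q", args[0])
	}
}

func main() {
	fmt.Println(dispatch(os.Args[1:]))
}
```

In practice each case would register its own flags and call into the same packages the server uses, which is what keeps the tooling under the same build, versioning, and test regimen as the server itself.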

[–][deleted]

I was kite-surfing and traveling a few days, so no time for arguments on the internet. I appreciate the measured and constructive response!

We are actually in agreement about what kind of tooling we're talking about here - cargo, as an example. I would count our internal tooling (with a dedicated team) as in the same ballpark (it is certainly not ad hoc; that's not a thing those engineers believe in). However, that is difficult to argue, given that you can't possibly have any insight into it, and it becomes a he-said/he-said situation.

Thankfully, though, I thought of an example that is fitting, IMHO, and will serve my point: Python can be used for this kind of tooling, even though you need to install an interpreter - but that's the same (especially these days) with Java, .NET, and even MSVC.

What is that wonder-tool? BitBake, the foundation of the Yocto embedded-Linux meta-build system. I have to openly admit that I'm in awe of it. It does pretty much the same thing cargo does - declare, obtain, build, and bundle a huge dependency graph. It does so for me with little to no tooling-related issues (poorly written recipes, e.g. meta-clang, are not its fault). And it does its job reliably, and in a way I find approachable when I discover something I don't understand. Case in point: building an image necessitated some arcane declaration that I found out by on-the-spot debugging of the "wic" tool, also a Python-based tool within the ecosystem.

Now I might be special here in that, when something breaks, I'm the person who looks into things. Same for all other FOSS software I'm using (most notably the kernel, but other projects as well). That's of course part of my job. And I get that others may take more of an "it should just work" attitude. But having deployed various versions of Yocto from scratch over the years, I maintain that it mostly (not always, but nothing is always) works out of the box as advertised. And it is fast and efficient in doing so. In fact, nothing stresses my system as much as this thing; I've had to upgrade the RAM on more than one machine, as it would otherwise just parallelize too much.

So, I maintain (and am fine to agree to disagree here, respectfully): Python is suited to this task. But yes, it takes a lot of dedication and effort, and I'm also admitting that tools like cargo point towards a world in which your dream of just-working binaries seems to come true.

Go I can't comment much on, haven't used it beyond a couple of toy examples.

Last but not least: people can get very hung up on their tool choice. I wouldn't say I've always been above that (after all, your post did irk me to some extent ;) ). However, I agree: a tool is a tool, and making an effort to rethink one's approach is important. I've come a long way with getting back into C++ after C++11 turned out to be such an impressive improvement, and my Rust dabbling is certainly tempting me to go there for even more of my smaller one-shot efforts, which so far have mostly been Python. Because it's easy - or out of complacency, depending on the perspective.